Google I/O Keynote (Google I/O ’17)

Good morning! Welcome to Google I/O. I love you guys too. Can't believe it's one year already. It's a beautiful day. We've been joined by over 7,000 people, and we are live-streaming this, as always, to over 400 events in 85 countries. Last year was the tenth year since Google I/O started, and so we moved it closer to home, at Shoreline, back where it all began. It seems to have gone well. I checked the Wikipedia entry from last year; there were some mentions of sunburn, so we have plenty of sunscreen all around. It's on us; use it liberally.

It's been a very busy year since last year, no different from my 13 years at Google. That's because we've been focused ever more on our core mission of organizing the world's information, and we are doing it for everyone. We approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion monthly active users each. And it's not just the scale at which these products are working; users engage with them very heavily. YouTube has not just over a billion users: every single day, users watch over 1 billion hours of video on YouTube. On Google Maps, every single day, users navigate over 1 billion kilometers. So the scale is inspiring to see, and there are other products approaching this scale. We launched Google Drive five years ago, and today it has over 800 million monthly active users, and every single week over 3 billion objects are uploaded to Google Drive. Two years ago at Google I/O we launched Photos as a way to organize users' photos using machine learning, and today we have over 500 million active users, who upload 1.2 billion photos to Google every single day. So the scale of these products is amazing, but they're all still working their way up towards Android, which, I'm excited to say, as of this week crossed over 2 billion active devices. As you can see, the robot behind me is pretty happy too.

It's a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones. But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first to an AI-first approach. Mobile made us reimagine every product we were working on; we had to take into account that the user interaction model had fundamentally changed, with multi-touch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products. So today, if you use Google Search, we rank differently using machine learning. If you're using Google Maps, Street View automatically recognizes restaurant signs and street signs using machine learning. Duo, with video calling, uses machine learning for low-bandwidth situations. And Smart Reply in Allo had a great reception last year, so today we are excited to be rolling out Smart Reply to the over 1 billion users of Gmail. It works really well. Here's a sample email: if you get an email like this, the machine learning systems have learned to be conversational, and can suggest a reply, confirming which Saturday works or whatever it may be. It's really nice to see.

Just like with every platform shift, how users interact with computing changes. Mobile brought multi-touch; we evolved beyond keyboard and mouse. Similarly, we now have voice and vision as two new, important modalities for computing. Humans are interacting with computing in more natural and immersive ways. Let's start with voice. We've been using voice as an input across many of our products, because computers are getting much better at understanding speech. We have had significant breakthroughs, but the pace, even since last year, has been pretty amazing to see. Our word error rate continues to improve, even in very noisy environments. This is why, if you speak to Google on your phone or Google Home, we can pick up your voice accurately even in noisy environments. When we were shipping Google Home, we had originally planned to include eight microphones so that we could accurately locate the source of where the user was speaking from; but thanks to deep learning and a technique called neural beamforming, we were able to ship it with just two microphones and achieve the same quality. Deep learning is what allowed us, about two weeks ago, to announce support for multiple users in Google Home, so that we can recognize up to six people in your house and personalize the experience for each and every one. So voice is becoming an important modality in our products.

The same thing is happening with vision. Similar to speech, we are seeing great improvements in computer vision. When we look at a picture like this, we are able to understand the attributes behind the picture: we realize it's your boy, at a birthday party, there was cake and family involved, and your boy was happy. We can understand all that better now, and our computer vision systems, for the task of image recognition, are now even better than humans. It's astounding progress, and we are using it across our products. If you use the Google Pixel, it has a best-in-class camera, and we do a lot of work with computer vision. You can take a low-light picture like this, which is noisy, and we automatically make it much clearer for you. Or, coming very soon: if you take a picture of your daughter at a baseball game and there is something obstructing it, we can do the hard work, remove the obstruction, and give you the picture of what matters.

We are clearly at an inflection point with vision, and so today we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you're looking at and help you take action based on that information. We will ship it first in Google Assistant and Photos, and it will come to other products. So how does it work? For example, if you run into something and you want to know what it is, say a flower, you can invoke Google Lens from your Assistant, point your phone at it, and we can tell you what flower it is. It's great for someone like me, with allergies. Or if you've ever been at a friend's place and you've crawled under a desk just to get the username and password from a Wi-Fi router, you can point your phone at it and we can automatically do the hard work for you. Or if you're walking down a street downtown and you see a set of restaurants across from you, you can point your phone, and because we know where you are, and we have our Knowledge Graph, and we know what you're looking at, we can give you the right information in a meaningful way.

As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission. When we started working on Search, we wanted to do it at scale. This is why we rethought our computational architecture: we designed our data centers from the ground up, and we put a lot of effort into them. Now that we are evolving for this machine learning and AI world, we are rethinking our computational architecture again. We are building what we think of as AI-first data centers. This is why last year we launched the Tensor Processing Units: custom hardware for machine learning. They were about 15 to 30 times faster, and 30 to 80 times more power-efficient, than the CPUs and GPUs of that time. We use TPUs across all our products, every time you do a search, every time you speak to Google. In fact, TPUs are what powered AlphaGo in its historic match against Lee Sedol.

As you know, machine learning has two components: training, that is, how we build the neural net, which is very computationally intensive; and inference, which is what we do in real time, so that when you show it a picture, we recognize whether it's a dog or a cat, and so on. Last year's TPUs were optimized for inference. Training is computationally very intensive: to give you a sense, each one of our machine translation models takes training on over three billion words for a week, on about a hundred GPUs. So we've been working hard, and I'm really excited to announce our next generation of TPUs: Cloud TPUs, which are optimized for both training and inference. What you see behind me is one Cloud TPU board. It has four chips in it, and each board is capable of 180 trillion floating-point operations per second. We have designed it for our data centers, so you can easily stack them: you can put 64 of these into one big supercomputer. We call these TPU pods, and each pod is capable of 11.5 petaflops. It is an important advance in technical infrastructure for the AI era. The reason we named it Cloud TPU is that we are bringing it through the Google Cloud Platform: Cloud TPUs are coming to Google Compute Engine as of today. We want Google Cloud to be the best cloud for machine learning, and so we want to provide our customers with a wide range of hardware, be it CPUs, GPUs (including the great GPUs NVIDIA announced last week), and now Cloud TPUs.

This lays the foundation for significant progress. We have focused on driving this shift and applying AI to solving problems at Google, and we are bringing our AI efforts together under Google.ai. It's a collection of efforts and teams across the company focused on bringing the benefits of AI to everyone. Google.ai will focus on three areas: state-of-the-art research; tools and infrastructure, like TensorFlow and Cloud TPUs; and applied AI. Let me talk a little bit about these areas. On research: we are excited about designing better machine learning models, but today it is really time-consuming, a painstaking effort of a few engineers and scientists, mainly machine learning PhDs. We want it to be possible for hundreds of thousands of developers to use machine learning. So what better way to do this than getting neural nets to design better neural nets? We call this approach AutoML; it's learning to learn. The way it works is we take a set of candidate neural nets, think of these as little baby neural nets, and we actually use a neural net to iterate through them until we arrive at the best neural net. We use a reinforcement learning approach, and the results are promising. Doing this is computationally hard, but Cloud TPUs put it in the realm of possibility. We are already approaching state of the art in standard tasks like CIFAR image recognition. Whenever I spend time with the team and think about neural nets building their own neural nets, it reminds me of one of my favorite movies, Inception, and I tell them: we must go deeper.

We are taking all these AI advances and applying them to newer, harder problems across a wide range of disciplines. One such area is healthcare. Last year I spoke about our work on diabetic retinopathy, a preventable cause of blindness. This year we published a paper in the Journal of the American Medical Association, and Verily is working on bringing products to the medical community. Another such area is pathology. Pathology is a very complex area. If you take an area like breast cancer diagnosis, even amongst highly trained pathologists, agreement on some forms of breast cancer can be as low as 48 percent. That's because each pathologist is reviewing the equivalent of a thousand ten-megapixel images for every case. This is a large-data problem, but one which machine learning is uniquely equipped to solve. So we built neural nets to detect cancer spreading to adjacent lymph nodes. It's early days, but our neural nets show a much higher degree of accuracy: 89 percent, compared to 73 percent for previous methods. There are important caveats, we do have higher false positives, but already, putting this in the hands of pathologists, they can improve diagnosis. In general, I think this is a great approach for machine learning: providing tools for people to do what they do better.

And we are applying it across even basic sciences. Take biology: we are training neural nets to improve the accuracy of DNA sequencing. DeepVariant is a new tool from Google.ai that identifies genetic variants more accurately than state-of-the-art methods. Reducing errors is important in applications: we can more accurately identify whether or not a patient has a genetic disease, which can help with better diagnosis and treatment. We are applying it to chemistry: we are using machine learning to predict the properties of molecules. Today it takes an incredible amount of computing resources to hunt for new molecules, and we think we can accelerate timelines by orders of magnitude. This opens up possibilities in drug discovery or materials science. I am entirely confident that one day AI will invent new molecules that behave in predefined ways.

Not everything we are doing is so profound. We are doing even simple and fun things, like a simple tool which can help people draw. We call this AutoDraw. Just like today, when you type in Google, we give you suggestions, we can do the same when you are trying to draw. Even I can draw with this. It may look like fun and games, but pushing computers to do things like this is what helps them be creative and actually gain knowledge. So we're very excited about progress in these areas as well.

So we're making impressive progress in applying machine learning, and we are applying it across all our products, but the most important products we are using this for are Google Search and the Google Assistant. We are evolving Google Search to be more assistive for our users. This is why, last year at Google I/O, we spoke about the Assistant, and since then we've launched it on Google Pixel and Google Home, and today it's available on over 100 million devices. Scott and team are going to talk more about it, but before that, let's take a look at the many amazing ways people have been using the Google Assistant.
have been using the Google assistant ok Google hey Google play some dance music sure this is fresh air my guests will be Kimmy Schmidt on Netflix ok Google count to 100 sure 1 2 3 play vacuum harmonica on my CV 71 72 73 play the Wonder Woman trailer hey Google talk to dominate dr.

"Talk to Lonely Planet." "Talk to Korra." "Show me my folders from last weekend." "Your car is parked at 22B." "Today in the news..." "Turn the living room lights on." "OK, turning on the lights." "...back, baby." "Hey Google, drop a beat." "Flip a coin." "Call Jill." "Set a timer." "Talk to Headspace." "...and then, just for a moment, I'd like you to let go of any focus at all. Just let your mind do whatever it wants to do."

OK, everyone. Last year at I/O we introduced the Google Assistant: a way for you to have a conversation with Google to get things done in your world. Today, as Sundar mentioned, we're well on our way, with the Assistant available on over a hundred million devices. Just as Google Search simplified the web and made it more useful for everyone, your Google Assistant simplifies all the technology in your life. You should be able to just express what you want throughout your day, and the right thing should happen. That's what the Google Assistant is all about: it's your own individual Google. That video we saw really captures the momentum of this project. We've made such big strides, and there's so much more to talk about today: the Assistant is becoming even more conversational, always available wherever you need it, and ready to help get even more things done.

First, we fundamentally believe that the Google Assistant should be, hands down, the easiest way to accomplish tasks, and that's through conversation. It comes so naturally to humans, and now Google is getting really good at conversation too. Almost 70% of requests to the Assistant are expressed in natural language, not the typical keywords that people type in a search box, and many requests are follow-ups that continue the conversation. We're really starting to crack the hard computer science challenge of conversationality by combining our strengths in speech recognition, natural language understanding, and contextual meaning. Recently we made the Assistant even more conversational, so each member of the family gets relevant responses just for them, by asking with their own
voice. And we're continuing to make interacting with your Assistant more natural. For example, it doesn't always feel comfortable to speak out loud to your assistant, so today we're adding the ability to type to your Assistant on the phone. This is great when you're in a public place and you don't want to be overheard.

The Assistant is also learning conversation beyond just words. With another person, it's really natural to talk about what you're looking at. Sundar spoke earlier about how AI and deep learning have led to tremendous strides in computer vision. Soon, with the smarts of Google Lens, your Assistant will be able to have a conversation about what you see. This is really cool, and Ibrahim is here to help me show you a couple of examples of what we'll launch in the coming months. The last time I traveled to Osaka, I came across a line of people waiting to try something that smelled amazing. I don't speak Japanese, so I couldn't read the sign out front. But Google Translate knows over a hundred languages, and my Assistant can help with visual translation. I just tap the Google Lens icon, point the camera, and my Assistant can instantly translate the sign into English. And now I can continue the conversation: "What does it look like?" These pictures should match. All right, it looks pretty yummy. Notice I never had to type the name of the dish; my Assistant used visual context and answered my question conversationally.

Let's look at another example. Some of the most tedious things I do on my phone stem from what I see: a business card whose details I want to save, a receipt I need to track, and so on. With Google Lens, my Assistant will be able to help with those kinds of tasks too. I love live music, and sometimes I see info for shows around town that look like fun. Now I can just tap the Google Lens icon and point the camera at the marquee, and my Assistant instantly recognizes what I'm looking at. If I wanted to, I could tap to hear some of this band's songs, and my Assistant offers other helpful suggestions right in the viewfinder: there's one to buy tickets from Ticketmaster, and another to add the show to my calendar. With just a tap, my Assistant adds the concert details to my schedule: "Saving event: Stone Foxes, May 17th at 9 p.m."
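Once Lens has recognized the text on the marquee, turning it into a calendar entry is ordinary parsing. Here is a minimal sketch of that post-recognition step; the function, the marquee text format, and the field names are hypothetical illustrations, not Google's actual API:

```python
import re
from datetime import datetime

def event_from_ocr(text, year=2017):
    """Turn OCR'd marquee text like 'The Stone Foxes - May 17 9:00 PM'
    into a calendar-event dict. Hypothetical post-recognition step;
    the input format and field names are illustrative only."""
    m = re.search(
        r"(?P<name>.+?)\s*-\s*(?P<month>[A-Z][a-z]+)\s+(?P<day>\d{1,2})\s+"
        r"(?P<time>\d{1,2}:\d{2})\s*(?P<ampm>[AP]M)",
        text,
    )
    if m is None:
        return None  # nothing recognized, so offer no suggestion
    start = datetime.strptime(
        f"{m['month']} {m['day']} {year} {m['time']} {m['ampm']}",
        "%B %d %Y %I:%M %p",
    )
    return {"title": m["name"], "start": start}

event = event_from_ocr("The Stone Foxes - May 17 9:00 PM")
```

The hard part in the demo is the vision model that reads the sign in the first place; the scheduling suggestion itself is this kind of mundane string-to-datetime plumbing.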

Awesome. My Assistant helped me keep track of the event, so I won't miss the show, and I didn't have to open a bunch of apps or type anything. Thanks, Ibrahim. So that's how the Assistant is getting better at conversation: by understanding language and voices, with new input choices, and with the power of Google Lens.

Second, the Assistant is becoming a more connected experience that's available everywhere you need help, from your living room to your morning jog, from your commute to errands around town. Your Assistant should know how to use all of your connected devices for your benefit. We're making good progress in bringing the Assistant to those 2 billion phones and to other devices powered by Android, like TVs, wearables, and car systems. And today I'm excited to announce that the Google Assistant is now available on the iPhone. So no matter what smartphone you use, you can now get help from the same smart assistant throughout the day, at home and on the go. The Assistant brings together all your favorite Google features on the iPhone: just ask to get package delivery details from Gmail, watch videos from your favorite YouTube creators, get answers from Google Search, and much more. You can even turn on the lights and heat up the house before you get home.

Now, Android devices and iPhones are just part of the story. We think the Assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they're building: speakers, toys, drink-mixing robots, whatever crazy device all of you think up can now incorporate the Google Assistant. We're working with many of the world's consumer brands and their suppliers, so keep an eye out for the badge that says "Google Assistant built-in" when you do your holiday shopping this year.

Obviously, another aspect of being useful to people everywhere is support for many languages. I'm excited to announce that, starting this summer, the Google Assistant will begin rolling out in French, German, Brazilian Portuguese, and Japanese, on both Android phones and iPhones. By the end of the year we'll also support Italian, Spanish, and Korean. So that's how the Assistant is becoming more conversational, and how it will be available in even more contexts.

Finally, the Assistant needs to be able to get all kinds of useful things done for people. People sometimes ask if the Assistant is just a new way to search. Of course you can ask your Assistant for all sorts of answers from Google Search, but beyond finding information, users are also asking the Assistant to do all sorts of things for them. As you've already seen, the Assistant can tap into capabilities across many Google apps and services, but Google's features are just part of the story. We also opened the Assistant to third-party developers, who are building some really useful integrations. I'll turn it over to Valerie to share more about how the developer platform is getting stronger.

Hi! Okay, so: the Actions on Google platform. It's been awesome to see how developers like you have been engaging with the Google Assistant. Honestly, you have built some really cool integrations. I can ask Food Network about the recipe that's on TV right now, I can work out with FitStar, ask CNBC about the news, or my husband and I can play Name That Tune with SongPop, which he is surprisingly good at. Until now, these experiences have been available through the Assistant on Google Home, but today we're also bringing them to Android phones and iPhones; that's over 100 million devices on Android alone. So now people can get to Google features and third-party services from anywhere, and they can even pick up where they left off across devices. Not only are third-party integrations available in more places; they'll be able to do more. Starting today, Actions on Google will be supporting transactions: a complete end-to-end solution for developers, including payments,
identity, notifications, receipts, even account creation. The platform handles all the complexity. Let me show you how one will work. "Hi, how can I help?" "I'd like delivery from Panera." "Hi, this is Panera. I'll need your delivery address; which one can I get from Google?" "We'll go with 1600 Amphitheatre." "What can I get you started with?" "How about a strawberry poppyseed salad, with steak instead of chicken." "Got it. How about one of these cold drinks?" And here I can just swipe through my options and see what looks good. I'll get a lemonade. "Great, are you ready to check out?" "Yep." "Okay, the total is eighteen dollars and forty cents. Are you ready to place the order?" "Yes." I'll just scan my fingerprint to pay with Google, and that's it. "Thanks, you're all set." Super easy, like I was talking to someone at the store. Here I was a new Panera customer: I didn't have to install anything or create an account. You'll also probably have noticed I didn't have to enter my address or my credit card; I just saved those earlier with Google, and Panera used built-in platform calls to request the information. And I was in control of what I shared every step of the way.

The developer platform is also getting much stronger for home automation integrations. Actions on Google can now support any smart home developer that wants to add conversational control. Today, over 70 smart home companies work with the Google Assistant. So now, on my Google Home or from my phone, I can lock my front door with August locks, control a range of LG appliances, or check in on my son's room by putting the Nest Cam on TV. All right. Now that we're talking about making your home smarter, we also have a lot of news to share today about Google Home, our own smart speaker with the Google Assistant built in. Here to tell you more is Rishi Chandra.

Thanks, Valerie. It's really hard to believe we launched Google Home a little over six months ago, and we've been really busy ever since. Since launch we've added 50 new features, including some of my favorites, like support for Google Shopping, where I can use my voice to order items from Costco right to my front door; or step-by-step cooking instructions from over five million recipes; or playing my favorite song just by saying the lyrics. In April we launched in the UK to some great reviews, and starting this summer we're going to be launching in Canada, Australia, France, Germany, and Japan. And with support for multiple users, we can unlock the full potential of Google Home to offer a truly personal experience. So now you can schedule a meeting, set a reminder, or get your own daily briefing with My Day, with your commute, your calendar appointments, and your news sources, all by using your own voice.

Today I'd like to share four new features that will be rolling out over the coming months. First, we're announcing support for proactive assistance coming to Google Home. Home is great at providing personally relevant information when you ask for it, but we think it'd be even more helpful if it could automatically notify you of timely and important messages. We do this by understanding the context of your daily life, proactively looking for that really helpful information, and providing it in a hands-free way. For example, let's say I'm relaxing and playing a game with the kids, and I can see that the Google Home lights just turned on. "Hey Google, what's up?" "Hi Rishi, traffic's heavy right now, so you'll need to leave in 14 minutes to get to Shoreline Athletic Fields by 3:30 p.m."
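A proactive traffic alert like this reduces to simple time arithmetic: take the event's arrival time from the calendar, subtract a live travel-time estimate, and notify when the required departure moment is near. A minimal sketch of the idea; the function names, the 15-minute notification window, and the 46-minute drive time are assumptions for illustration, not Google's implementation:

```python
from datetime import datetime, timedelta

def leave_by(arrival, travel_minutes):
    """Latest departure time that still makes the arrival time."""
    return arrival - timedelta(minutes=travel_minutes)

def should_notify(now, arrival, travel_minutes, window_minutes=15):
    """Proactively alert once the required departure is coming up.
    The notification window is an assumed tuning knob."""
    depart = leave_by(arrival, travel_minutes)
    return timedelta(0) <= depart - now <= timedelta(minutes=window_minutes)

# Illustrative numbers: a 3:30 p.m. arrival and an assumed 46-minute
# drive mean leaving by 2:44 p.m., i.e. 14 minutes out at 2:30 p.m.
game = datetime(2017, 5, 17, 15, 30)
now = datetime(2017, 5, 17, 14, 30)
```

The interesting engineering is in the inputs (calendar context and real-time traffic estimates); once those exist, the decision to light up the device is just this comparison re-run as the estimate changes.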

That’s pretty nice the assistant saw the game coming up on my calendar and got my attention because I had to leave earlier than normal so now my daughter can make it to that soccer game right on time now we’re going to start simple with really important messages like reminders traffic delays and flight status changes and with multiple user support you have the ability to control the type of proactive notifications you want over time all right second another really common activity we do in the home today is communicate with others and a phone call is still the easiest way to reach someone so today I’m excited to announce hands-free calling coming to Google home it’s really simple to use just ask the Google assistant to make a call and we’ll connect you you can call any landline or mobile number in the US or Canada completely free and it’s all done in a hands-free way for example let’s say I forgot to call my mom on Mother’s Day well now I can call her while I’m scrambling to get the kids ready for school in the morning I just need to say hey Google call mom sure calling mom oh yeah finally calling my best day was three days ago yeah sorry about that they may be reversed for IO on Mother’s Day speaking of which you’re on stage right now say hi to everyone so hopefully this makes up for not calling right no it doesn’t you don’t need to visit and bring flowers okay I’m on it bye bye it’s that simple we’re just making a standard phone call through Google home so mom didn’t need to learn anything new she just need to answer her phone there’s no additional setup apps or even phone required and since the assistant recognized my voice we called my mom if my wife had asked we would have called her mom we can personalize calling just like everything else and now anyone the home can call friends family even businesses maybe on a local florist to get some flowers for your mom now by default we’re going to call it with a private number but you also have the option to link your 
mobile number to the Google assistant and we’ll use that number whenever we recognize your voice so whoever you call what’s notes coming from you now we’re rolling out hands-free calling the us to all existing Google home devices over the next few months it’s the ultimate hands-free speakerphone no setup required call anyone including personal contacts or businesses and even dial out with your personal number when we detect your voice we can’t wait for you to try it out okay third let’s talk a little bit about entertainment we designed Google home to be a great speaker one that you can put in any room in the house or wirelessly connect to other chromecast built-in speaker systems well today we’re announcing that spot in addition to their subscription service will be adding their free music service to Google home so it’s even easier to play your Spotify playlist we’ll also be adding support for SoundCloud and deezer to the largest global music services today and these music services will join many of the others already available through the assistant and finally we’ll be adding bluetooth support to all existing Google home devices so you can play any audio from your iOS or Android device but Google home can do much more than just audio last year we launched the ability to use your voice to play YouTube Netflix and Google Photos right on your TV and today we’re announcing additional partners including HBO now so just say what you want to watch and we’ll play it for you all on a hands-free way with Google home we want to make it really easy to play your favorite entertainment okay finally I want to talk a little bit how we see the assistant evolving to help you in a more visual way voice responses are great but sometimes that picture’s worth a thousand words so today we’re announcing support for visual responses with Google home now to do that we need a scream well fortunately many of us already have a ton of screens in our home today our phones our tablets even our 
TVs the Google assistant should smartly take advantage of all these different devices to provide you the best response on the right device for example what Google home I can easily get location information okay Google where’s my next event your Pokemon go hike is at Rancho San Antonio reserved for my kids my kids relax but if I want to view the directions the best place to do it is on my phone well soon you could just say okay Google let’s go all right I’m sending the best route to your phone it will automatically your phone notify your phone whether it’s Android or iOS and take you straight to Google Maps so you can glance at directions interact with the map or just start navigation it’s really simple now TVs are another natural place to get help from the Google assistant and we have a great place to start with over 50 million chromecast and chromecast built-in devices so today we’re announcing that we’ll be updating chromecast to show visual responses on your TV when you ask for help for Google home for example I can now say ok Google show my calendar for Saturday showing it on your TV it’ll show up right on the TV screen I’ll immediately get results from the assistant and since the assistant detected my voice we’re showing my calendar others will see their calendar by using their voice we can personalize the experience even on the TV they can continue to fall off the conversation looks like I have a biking trip to Santa Cruz what’s the weather in Santa Cruz this weekend this weekend in Santa Cruz it will be clear and sunny most of the time so it’s really easy it’s all hands-free your assistant can provide a visual response to a TV to a lot of different types of questions you know we talked how easy it is to play what you want to watch on the TV screen but what about those times you don’t know what to watch well so you could just ask hey Google what’s on YouTube here you go and show me my personalized results right on the TV screen if I don’t like any of the 
options, I can continue the conversation with my voice: "Show my watch later list." "All right." "Play Send My Love." "Playing Send My Love from YouTube." It's really simple. Again, no remotes or phone required; in a short conversation I found something really interesting to watch. Using Google Home I can do other things too. "OK Google, what's on my DVR?" "Here you go." Here we're showing how it works with YouTube TV, a new live TV streaming service that gives you live sports and shows from popular TV networks, and YouTube TV includes a cloud DVR, so I can easily play my saved episodes. "Play Modern Family." "OK, playing Modern Family from YouTube TV." You kids have it too easy nowadays, you can just lay around and even say, "OK Google, dim the kitchen lights." "Sure thing." Go ahead, Dad. "OK Google, show me a video of a kangaroo playing badminton with a pirate." "Sure, playing on YouTube." I had a similar reaction the first time I saw it. Everything can be done in a hands-free way, all from the comfort of my couch, and over time we're going to bring all those developer actions that Valerie already talked about right to the TV screen, so you can do even more over time with Google Home. And when you're done, just say, "OK Google, turn off the TV." "Sure." And that's our update for Google Home: proactive assistance to bring important information to you at the right time, simple and easy hands-free calling, more entertainment options, and evolving the Assistant to provide visual responses in the home. Next up, Anil is going to talk about Google Photos. Two years ago we launched Google Photos with an audacious goal: to be the home for all your photos, automatically organized and brought to life, so that you could easily share and save what matters. In doing so we took a fundamentally different approach: we built a product from the ground up with AI at its core, and that's enabled us to do things in ways that only Google can. Like when you're looking for that one photo you can't find, Google Photos organizes your library by people, places, and things. Simply
type "Anil pineapple Hawaii" and instantly find this gem. Or when you come home from vacation overwhelmed by the hundreds of photos you took, Google Photos will give you an album curated with only the best shots, removing duplicates and blurry images. This is the secret ingredient behind Google Photos, and the momentum we've seen in these two short years is remarkable. As Sundar mentioned, we now have more than half a billion monthly active users uploading more than 1.2 billion photos and videos per day. And today I'm excited to show you three new features we're launching to make it even easier to send and receive the meaningful moments in your life. Now at first glance it might seem like photo sharing is a solved problem. After all, there's no shortage of apps out there that are great at keeping you and your friends and family connected. But we think there's still a big and different problem that needs to be addressed. Let me show you what I mean. If there's one thing you know, it's that you're a great photographer. If there's a second thing you know, it's that you're kind of a terrible person. What? Yeah, you heard me. The only photo of the birthday girl in focus? Never sent it. The best picture of the entire wedding? Kept it to yourself. This masterpiece of your best friend? You were gonna send it, but then you were like, "Oh, remember that sandwich? I love that sandwich." If only something could say, "Hey, Eric looks great in these. Do you want to send them to him?" And you could be like, "Great idea." Well, it can. Wait, it can? Yep, with Google Photos. So today, to make us all a little less terrible people, we're announcing suggested sharing. Because we've all been there, right? Like when you're taking that group photo and you insist that it be taken with your camera, because you know if it's not on your camera, you are never seeing that photo ever again. Now, thanks to the machine learning in Google Photos, we'll not only remind you so you don't forget to share, we'll even suggest the photos and people you should share
with. In one tap, you're done. Let's have a look at suggested sharing in action. I'm once again joined on stage by my friend and Google Photos product lead, David Lieb. All right, so here are a bunch of photos Dave took while bowling with the team last weekend. He was too busy enjoying the moment, so he never got around to sharing them. But this time Google Photos sent him a reminder via notification, and also by badging the new sharing tab. The sharing tab is where you're gonna be able to find all of your Google Photos sharing activity, and at the top, your personal suggestions based on your sharing habits and what's most important to you. Here is the sharing suggestion that Dave got from his day bowling. Google Photos recognized this was a meaningful moment, it selected the right shots, and it figured out who he should send it to based on who was in the photos; in this case it's Janvi, Jason, and a few others who were also at the event. Dave can now review the photos selected, as well as update the recipients, or if he's happy with it he can just tap send, and that's it. Google Photos will even send an SMS or an email to anyone who doesn't have the app, and that way everyone can view and save the full-resolution photos even if they don't have Google Photos accounts. And because Google Photos sharing works on any device, including iOS, let's have a look at what Janvi sees on her iPhone. She receives a notification, and tapping on it lets her quickly jump right into the album and look at all the photos Dave shared with her. But notice here at the bottom, she's asked to contribute the photos she took from the event, Google Photos automatically identifying and suggesting the right ones. Janvi can review the suggestions and then simply tap add. Now all of the photos are finally pulled together in one place, and Dave gets some photos he's actually in, which is great, because a home for all your photos really should include photos of you. Now even though suggested sharing takes the work out of sharing, sometimes there's a
special person in your life who you share just about everything with: your partner, your best friend, your sibling. Wouldn't it be great if Google Photos automatically shared photos with that person? For example, I would love it if every photo I ever took of my kids was automatically shared with my wife. And that's why today we're also announcing shared libraries. Let me show you how it works. So here we're now looking at my Google Photos account. From the menu I now have the option to go ahead and share my library, which I'm going to go ahead and do with my wife, Jess. Importantly, I have complete control over which photos I automatically share. I can share them all, or I can share a subset, like only photos of the kids, or only photos from a certain date forward, like when we first met. In this case I'm gonna go ahead and share all; we did not meet today. And that's all there is to it. I've now gone ahead and shared my library with my wife Jess, so let's switch to her phone to see what the experience looks like for her. She receives a notification, and after accepting, she can now go and see all the photos that I've shared with her, which she can access easily from the menu. If she sees something she likes, she can go ahead and select those photos and save them to her library, and we'll even notify her periodically as I take new photos. Now this is great, but what if Jess doesn't want to have to keep coming back to this view and checking whether I've shared new photos with her? She just wants every photo I take of her or the kids to automatically be saved to her library, just as if she took the photos herself. With shared libraries she can do just that, choosing to auto-save photos of specific people. Now anytime I take photos of her or the kids, without either of us having to do anything, they'll automatically appear in the main view of her app. Let me show you. Now, I couldn't justify pulling the kids out of school today just to have their photo taken, but I do have the next best thing. May I introduce to you Ava and Lily. All
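The sharing controls just described, share everything, only photos of certain people, or only photos from a date forward, amount to a simple filter over the library. A minimal sketch, where the photo fields and function name are assumptions for illustration:

```python
from datetime import date

# Minimal sketch of the shared-libraries filter described above:
# share all photos, only photos containing certain people, or only
# photos taken on/after a chosen date. Field names are illustrative.

def photos_to_share(library, people=None, start=None):
    """Return the subset of the library to auto-share with a partner."""
    shared = library
    if people is not None:
        # Keep photos containing at least one of the chosen people.
        shared = [p for p in shared if set(p["people"]) & set(people)]
    if start is not None:
        # Keep photos taken on or after the chosen date.
        shared = [p for p in shared if p["taken"] >= start]
    return shared

library = [
    {"id": 1, "people": ["Ava"], "taken": date(2017, 4, 1)},
    {"id": 2, "people": ["Jess"], "taken": date(2016, 1, 5)},
    {"id": 3, "people": ["coworker"], "taken": date(2017, 5, 1)},
]
# Share only photos of the kids:
kids_only = photos_to_share(library, people=["Ava", "Lily"])
print([p["id"] for p in kids_only])  # [1]
```

With no filters the whole library is shared; combining a person filter with a start date gives the "only the kids, from when we met" behavior from the demo.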
righty, so I'm gonna go ahead and take a photo with the girls here. Smile, kids! Ah, fantastic. And since this is too good of an opportunity, I'm gonna have to take one with all of you here too. All right, there we go. Oh, brilliant. OK, so thank you, girls, much appreciated; back to school we go. All right, so using nothing more than the standard camera app on my phone, I've gone ahead and taken one photo with my kids and one photo with all of you here in the audience. Google Photos is going to back these two photos up, it's going to share them with Jess, and then it's going to recognize the photo that has the kids in it and automatically save just that one to her library, like you can see right here. Now finally Jess and I can stop worrying about whose phone we're using to take the photos. All the photos of our family are in my Google Photos app, and they automatically appear in hers too. And best of all, these family photos are part of both of our search results, and they're included in the great collages, movies, and other fun creations that Google Photos makes for us. Notice how only the photos with the kids showed up in Jess's main view; but because I shared my entire library with her, she can still go to the menu and see all the photos, including the one with all of you. And that's how easy sharing can be in Google Photos: spend less time worrying about sharing your memories and more time actually enjoying them. Suggested sharing and shared libraries will be rolling out on Android, iOS, and web in the coming weeks. Finally, we know sharing doesn't always happen through apps and screens. There's still something pretty special about looking at, and even gathering around, an actual printed photo. But printing photos and albums today is hard: you have to hunt across devices and accounts to find the right photos, select the best among the duplicates and blurry images, upload them to a printing service, and then arrange them across dozens of pages. It can take hours of sitting in front of a computer just
to do one thing. Thankfully, our machine learning in Google Photos already does most of this work for you, and today we're bringing it all together with the launch of photo books. They're beautiful, high quality, with a clean and modern design. But the best part is that they're incredibly easy to make, even on your phone; what used to take hours now only takes minutes. I recently made a book for Jess on Mother's Day, and let me show you just how easy and fast that was. First, thanks to unlimited storage, all my life's moments are already up here in Google Photos, no need to upload them to another website or app. Now my favorite way to start a book is to use people search. Since this is a Mother's Day gift, I'm gonna simply find photos of Jess, Ava, and Lily. There they are. All right, I thought I took more photos. All right, so why don't we just go and pick another set of photos, Dave, if that one's not coming up. It'll be a fun Mother's Day gift for her; she'll get a different surprise. So I'll select a bunch of these here, and the good news is I don't have to figure out which are the right photos and which are the good ones, because this is where Google Photos really shines. I'm just going to go ahead and hit plus, select photo book, and pick a hardcover book (we offer both a softcover and a hardcover), and notice what happens: Google Photos is going to pick the best photos for me, automatically suggesting the photos for the book. How awesome is that? And it's even going to go ahead and lay them all out for me. All that's left for me to do is make a couple of tweaks, check out, and in a few days I'll end up with one of these beautiful printed photo books. And soon we'll make it even easier to get started, applying machine learning to create personalized photo books you'll love. So when you go to photo books from the menu, you'll see pre-made books tailored just for you: your trip to the Grand Canyon, time with your family during the holidays, your pet, or even your kids' artwork, all easily
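Curating "only the best shots, removing duplicates and blurry images" boils down to scoring and de-duplicating. A toy sketch of that selection step, where the sharpness scores and near-duplicate group keys are made-up stand-ins for what real ML models would produce:

```python
# Toy sketch of album curation as described above: drop blurry shots,
# then keep the sharpest photo in each group of near-duplicates.
# 'sharpness' and 'scene' stand in for real model outputs.

BLUR_THRESHOLD = 0.4  # assumed cutoff, for illustration

def curate(photos):
    best = {}
    for p in photos:
        if p["sharpness"] < BLUR_THRESHOLD:
            continue  # too blurry to include
        scene = p["scene"]  # near-duplicate group key
        if scene not in best or p["sharpness"] > best[scene]["sharpness"]:
            best[scene] = p  # keep the sharpest shot per scene
    return sorted(best.values(), key=lambda p: p["id"])

photos = [
    {"id": 1, "scene": "beach", "sharpness": 0.9},
    {"id": 2, "scene": "beach", "sharpness": 0.7},   # duplicate, less sharp
    {"id": 3, "scene": "sunset", "sharpness": 0.2},  # blurry
    {"id": 4, "scene": "dinner", "sharpness": 0.8},
]
print([p["id"] for p in curate(photos)])  # [1, 4]
```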
customizable. We'll even notify you when there are new photo book suggestions. Photo books are available today in the US on photos.google.com, they'll be rolling out on Android and iOS next week, and we'll be expanding to more countries soon. I am really excited about this launch, and I want all of you to be the first to try it out, and that's why everyone here at I/O will be receiving a free hardcover photo book. It's a great example of machine learning at work. So those are the three big updates related to sharing in Google Photos: suggested sharing, shared libraries, and photo books; three new features built from the ground up with AI at their core. I can't wait for all of you to try them out real soon. Now before I go, I want to touch on what Sundar mentioned earlier, which is that the way we're taking photos is changing. Instead of the occasional photo with friends and family, we now take 30 identical photos of a sunset. We're also taking different types of photos, not just photos to capture a personal memory, but as a way to get things done: whiteboards we want to remember, receipts we need to file, books we'd like to read. And that's where Google Lens and its vision-based computing capabilities come in. It can understand what's in an image and help you get things done. Scott showed how Google Lens in the Assistant can identify what you're looking at and help you on the fly, but what about after you've taken the photo? There are lots of photos you want to keep and then look back on later, to learn more and take action, and for that we're bringing Google Lens right into Google Photos. Let me show you. So let's say you took a trip to Chicago; there's some beautiful architecture there, and during your boat tour down the Chicago River you took lots of photos, but it's hard to remember which building is which later on. Now by activating Lens you can identify some of the cool buildings in your photos, like the second tallest skyscraper in the US, the Willis Tower. You can even pull up directions and get the
hours for the Skydeck. And later, while at the Art Institute, you might take photos of a few paintings you really love; in one tap you can learn more about the painting and the artist. And that screenshot your friend sent you of that bike rental place? By activating Lens you can tap the phone number and make the call right from the photo. Lens will be rolling out in Google Photos later this year, and we'll be continually improving the experience so it recognizes even more objects and lets you do even more with them. And those are the updates for Google Photos. Now let's see what's next from YouTube. [Video plays] "Look at that. Oh my god. Wow, check this out. Open the hatch. We are one species, sharing one profoundly interconnected world. Don't let them convince you that you're small, because a lot of small things coming together can do big things. Love is love is love is love. Love cannot be killed or swept aside. Now fill the world with music, love, and pride." All right, good morning, everyone. I am thrilled to be here at my first ever I/O on behalf of YouTube. OK, so that opening video that we all just saw, that's a perfect glimpse into what makes YouTube so special: the incredible diversity of content. A billion people around the globe come to YouTube every month to watch videos from new and unique voices, and we're hard at work to make sure that we can reach the next billion viewers, which you'll hear about in a later I/O session today. We want to give everyone the opportunity to watch the content on YouTube. So YouTube is different from traditional media in a number of ways. First of all, YouTube is open: anyone in the world can upload a video that everyone can watch. You can be a vlogger broadcasting from your bedroom, a gamer live-streaming from your console, or a citizen journalist documenting events live from your phone on the front lines. And what we've seen is that openness leads to important conversations that help shape society, from advancing LGBTQ rights to highlighting the plight of refugees to
encouraging body positivity. And we've seen in our numbers that you really want to engage with this type of diverse content: we are proud that last year we passed a billion hours a day being watched on YouTube, and our viewership is not slowing down. The second way that YouTube is different from traditional media is that it's not a one-way broadcast, it's a two-way conversation. Viewers interact directly with their favorite creators via comments, mobile live-streaming, fan polls, animated GIFs, and VR, and these features enable viewers to come together and build communities around their favorite content. So one of my favorite stories of a YouTube community is the e-NABLE network. A few years ago, an engineering professor named Jon Schull saw a YouTube video about a carpenter who had lost two of his fingers. The carpenter worked with a colleague for over a year to build an affordable 3D-printed prosthesis that would enable him to go back to work. They then applied this technology for a young boy who was born without any fingers. Inspired by this video, the professor posted a single comment on the video asking for volunteers with 3D printers to help print affordable prostheses. The network has since grown into a community of over 6,000 people who have designed, printed, and distributed these prosthetics to children in over 50 countries. So today thousands of children have regained the ability to touch and play, all because of one video, one comment, and the incredible YouTube community that formed to help. And that's just one example of the many passionate communities that are coming together on YouTube around video. So the third feature of this new medium is that video works on demand, on any screen. Over 60% of our watch time now comes from mobile devices, but actually our fastest growing screen isn't the one in your pocket, it's the one in your living room: our watch time in the living room is growing at over 90% a year. So let's now welcome Sarah Ali, head of
living room products, to the stage to talk about the latest features in the living room. Thank you, Susan. So earlier today you heard from Rishi about how people are watching YouTube on the TV via the Assistant, but another way people are enjoying video is through the YouTube app, which is available on over half a billion smart TVs, game consoles, and streaming devices, and that number continues to grow around the world. So when I think about why YouTube is so compelling in the living room, it isn't just about the size of the screen, it's about giving you an experience that TV just can't match. First, YouTube offers you the largest library of on-demand content. Second, our recommendations build channels and lineups based on your personal interests and what you enjoy watching. And third, it's a two-way interactive experience, with features like voice control. And today I'm super excited to announce that we are taking the interactive experience a step further by introducing 360 video in the YouTube app on the big screen. Now, you can already watch 360 videos on your phone or in your Daydream headset, but soon you'll be able to feel like you're in the middle of the action right from your couch, and on the biggest screen you own. Now one of my personal interests outside of work is travel, and one place I'd love to visit is Alaska, to check out the northern lights. So let's do a voice search: "aurora borealis 360". Great, let's choose that first video. And now, using my TV remote, I'm able to pan around this video, checking out this awesome view from every single angle. Traveling is great, especially when I don't have to get on a flight. But 360 is also a brand new way to attend concerts: I didn't make it to Coachella, but here I can experience it like I was on stage. And to enhance the experience even further, we are also introducing live 360 in the living room. Soon you'll be able to witness moments and events as they unfold, in a new, truly immersive way. So whether you have a Sony Android TV or
an Xbox One console, soon you'll be able to explore 360 videos right from the comfort of your couch, along with your friends and family. And now, to help show you another way we're enabling interactivity, please join me in welcoming Barbara McDonald, who's the lead of something we call Super Chat. Good morning, I/O, and to everybody on the livestream. As Susan mentioned, what makes YouTube special is the relationships that creators are able to foster with their fans, and one of the best ways to connect with your fans is to bring them live behind the scenes of your videos, offering up can't-miss content. In the past year, the number of creators live-streaming on YouTube has grown by 4x. This growth is awesome, and we want to do even more to deepen the connection between creators and their fans during live streams. That's why earlier this year we rolled out a new feature called Super Chat: when a creator is live-streaming, fans can purchase Super Chats, which are highlighted fun chat messages. Not only do fans love the recognition, but creators earn extra money from it. In the three months since launch, we've been amazed by the different ways creators are using Super Chat. Even April, our favorite pregnant giraffe, who unfortunately could not be here with us today, has raised tens of thousands of dollars for her home, the Animal Adventure Park. OK, OK, we're all glad for that, but enough talking from me; we are going to do a live stream right here, right now, to show all of you how Super Chat works. And to help me, I am very excited to introduce top YouTube creators with 9 million subscribers and over 1 billion lifetime channel views, on the grass back there, the Slow Mo Guys. "Hey, happy to be here. How's it going?" It's great to have you. So let's pull up their live stream, and look, chat is flying. Now, I love the Slow Mo Guys and I want to make sure that they see my message, so I'm going to Super Chat them. I've pulled up the stream, and right from within live chat I'm able to enter my message, select my amount,
make the purchase, and send. Boom! See how much that message stands out? And it gets pinned to the top. Cool, right? "Yeah, thanks, Barbara. It's actually lovely out at the minute, although I feel that there's a high chance of showers." "Right, local showers, like specifically to this stage." "Yeah, I wonder, I wonder." Well, because we know developers are incredibly creative, we wanted to see what you can do to make Super Chat even more interactive, so we've launched an API for it, and today we're taking it to the next level with a new developer integration that triggers actions in the real world. This means that when a fan sends a Super Chat to a creator, things can happen in real life, such as turning the lights on or off in the creator's studio, flying a drone around, or pushing buttons on their toys and gadgets. The Slow Mo Guys are going to create their next slow-motion video using Super Chat's API. We have now rigged things up so that when I send my Super Chat, it will automatically trigger the lights and an air horn in this amphitheater, and that is going to signal our friends back there on the lawn to unleash a truckload of water balloons at the Slow Mo Guys. "I'm scared." That's right, and for every dollar, we're gonna take another balloon, so more money means more balloons. "Although I did hear a guy over here go, 'We're gonna totally nail these guys.' All right, that's got to be at least four dollars right there." So yeah, each dollar donated goes to the cause that Susan mentioned earlier, the e-NABLE network. OK, so how much do you think we can send? I can start at $1 and go anywhere upwards from there. It's for charity, so what do we think, $100? How's that sound? Higher? Higher? $200? How about $500 for 500 balloons? "$500, I can do that, I can do that." OK, so I'll type in my Super Chat and hit send. [Balloons fly] That was amazing! Thank you, everybody, for your help. So this obviously just scratches the surface of what is possible using Super Chat's open APIs, and we are super excited to see what all of you will do with
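The developer integration just demoed reacts to Super Chat purchases, which the YouTube API reports in micros of the purchase currency. A hedged sketch of the demo's "one balloon per dollar" logic; the event shape beyond an `amountMicros` field and the `launch` callback are illustrative assumptions:

```python
# Sketch of the demo's "one water balloon per dollar" Super Chat trigger.
# Monetary amounts are in micros (1 dollar = 1_000_000 micros); the
# event structure and launch() hook are assumed for illustration.

def balloons_for_super_chat(amount_micros):
    """One balloon per whole dollar donated."""
    return amount_micros // 1_000_000

def handle_super_chat_event(event, launch):
    """Fire the (hypothetical) launch() once per balloon earned."""
    n = balloons_for_super_chat(event["snippet"]["amountMicros"])
    for _ in range(n):
        launch()
    return n

launched = []
event = {"snippet": {"amountMicros": 500 * 1_000_000}}  # the $500 Super Chat
handle_super_chat_event(event, lambda: launched.append("balloon"))
print(len(launched))  # 500
```

In a real integration, `launch()` would be whatever real-world action the creator rigged up (lights, an air horn, a drone) rather than appending to a list.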
it next. So, Susan, how about you come back out here and let's check out the video we all made. Wow, thank you, Slow Mo Guys, thank you, Barbara. I'm really happy to announce that YouTube is going to match the Slow Mo Guys' Super Chat earnings from today 100x, to make sure that we're supplying prosthetics to children in need around the world. So that 360 living room demo and the Super Chat demo are just two examples of how we are working to connect people around the globe together with video. Now I hope that what you've seen today is that the future of media is a future of openness and diversity, a future filled with conversations and community, and a future that works across all screens. Together with creators, viewers, and partners, we are building the platform of that future. Thank you, I/O, and please welcome Dave Burke, joining us to talk about Android. Hello, everybody! It's great to be here at Google I/O 2017. As you can see, we found some new ways to hardware-accelerate Android, this time with jetpacks. But seriously, two billion active devices is incredible, and that's just smartphones and tablets. We're also seeing new momentum in areas such as TVs and cars and watches and laptops and beyond, so let me take a moment and give you a quick update on how Android is doing in those areas. Android Wear 2.0 launched earlier this year with a new update for Android and iPhone users, and with new partners like Emporio Armani, Movado, and New Balance, we now enable 24 of the world's top watch brands. Android Auto has seen 10x user growth since last year; it's supported by more than 300 car models and the Auto mobile app, and just this week Audi and Volvo announced that their next-generation nav systems will be powered by Android, for a more seamless connected-car experience. Android TV: we've partnered with over a hundred cable operators and hardware manufacturers around the world, and we're now seeing 1 million device activations every two months, and
there are more than three thousand Android TV apps in the Play Store. This year we're releasing a brand new launcher interface and bringing the Google Assistant to Android TV. Android Things previewed late last year, and already there are thousands of developers in over 60 countries using it to build connected devices, with easy access to the Google Assistant, TensorFlow, and more; the full launch is coming later this year. Chromebooks comprise almost 60 percent of K-12 laptops sold in the US, the momentum is growing globally, and now, with the added ability to run Android apps, you get to target laptops too. Now of course, platforms are only as good as the apps they run, and the Google Play ecosystem is more vibrant than ever: Android users installed a staggering 82 billion apps and games in the last year. That's 11 apps for every person on the planet. All right, so let's come back to smartphones and the real reason I'm here, which is to talk about Android O. Two months ago we launched our very first developer preview so you could kick the tires on some of the new APIs, and of course it's very much a work in progress, but you can expect the release later this summer. Today we want to walk you through two themes that we're excited about. The first is something we call fluid experiences. It's pretty incredible what you can do on a mobile phone today and how much we rely on them as computers in our pockets, but there are still certain things that are tough to do on a small screen, so we're adding a couple of features that we think will help with this, which I'll cover in just a moment. The second theme is something we call vitals, and the concept here is to keep vital system behavior in a healthy state, so we can maximize the user's battery, performance, and reliability. So let's jump straight in and walk through four new fluid experiences, with live demos, done wirelessly; what could possibly go wrong? All right. These days we do a lot at once on our phones, whether it's paying for groceries while
reading a text message you just received, or looking up guitar chords while listening to a new song. But conventional multi-window techniques don't translate well to mobile; they're just too fiddly to set up when you're on the go. We think picture-in-picture is the answer for many cases, so let's take a look. My kids recently asked me to build a lemonade stand, so I opened up YouTube and started researching DIY videos, and I found this one. Now at the same time, I want to be able to jot down the materials I need to build this lemonade stand. So to multitask, all I do is press the home button and boom, I get picture-in-picture; you can think of it as a kind of automatic multi-window. I can move it out of the way, I can launch Keep, I can add some more materials, so I know I need to get some wood glue, like so. Then, when I'm done, I just simply swipe it away, like that. It's brilliant. Picture-in-picture lets you do more with your phone. It works great for video-calling with Duo, for example, maybe when I need to check my calendar while planning a barbecue with friends. And there are lots of other great use cases, for example picture-in-picture for Maps navigation, or watching Netflix in the background, and a lot more, and we're also excited to see what you come up with for this feature. We're also making notification interactions more fluid for users. From the beginning, Android has really blazed a trail when it comes to its advanced notification system, and now we're extending the reach of notifications with something we call notification dots. It's a new way for app developers to indicate that there's activity in their app, and to drive engagement. So let's take a look. You'll notice that the Instagram app icon has a dot on it, and this is indicating that there's a notification associated with the app. So if I pull down the shade, sure enough, you can see there's a notification; in this case someone's commented on a photo I'm tagged in. What's really cool is I can long-press the app icon, and we now show the notification
in place. One of the things I really like about the notification dots mechanism is that it works with zero effort from the app developer; we even extract the color of the dot from your icon. Oh, and you get to erase the dot by simply swiping the notification away, like that, so you're always in control. Another great feature that helps make your experience more fluid is autofill. Now, if you use Chrome, you're probably already familiar with autofill for quickly filling out a username and password or credit card information with a single tap. With O, we've extended autofill to apps. Let's say I'm setting up a new phone for the first time, and I open Twitter and want to log in. Now, because I use twitter.com all the time in Chrome, the system will automatically suggest my username; I can simply tap it, I get my password, and then boom, logged in. It's pretty awesome. Autofill takes the pain out of setting up a new phone or tablet; once the user opts in, autofill will work for most applications, and we also provide APIs for developers to customize autofill for their experience. I want to show you one more demo of how we're making Android more fluid, by improving copy and paste. The feature is called smart text selection, so let's take a look. In Android, you typically long-press or double-tap a word to select it; for example, I can open Gmail, I can start composing, and if I double-tap a word, it gets selected, like so. Now, we know from user studies that phone numbers are the most copy-and-pasted items; the second most common are named entities, like businesses and people and places. So we're applying on-device machine learning, in this case a feed-forward neural network, to recognize these more complicated entities. So watch this: I can double-tap anywhere on the phrase "Old Coffee House" and all of it is selected for me; no more fiddling around with text selection handles. It even works for addresses, so if I double-tap on the address, all of it is selected. And what's more, the machine
learn model classifies this as an address and automatic suggests Maps so I can get directions to it with a single click and of course it works as you expect for phone numbers you get the phone dialer suggested and for email addresses you get Gmail suggested all of this neural networking processing happens on device in real time and without any data leaving the device it’s pretty awesome now on device machine learning helps make your phone spire and we want to help you build experiences like what you just saw so we’re doing two things to help first I’m excited to announce that we’re creating a specialized version of tensorflow Google’s open source machine learning library which we call tensorflow lite it’s a library for apps designed to be fast and small yet still enabling state of art techniques like convex LSTs second we’re introducing a new framework at Android to Hardware accelerate neural computation tests refer light will leverage a new neural network API to tap into silicon specific accelerators and over time we expect to see DSPs specifically designed for neural network inference and training we think these new capabilities will help our next generation of on device speech processing visual search augmented reality and more touch of low light will soon be part of the open source tensor flow project and the neural network API will be made available later in an update – oh this year ok so that’s a quick tour of some of the fluid experiences in oh let’s switch gears and talk about vitals so to tell you more I want to head over to Steph who’s been instrumental in driving this project thank you hi everyone okay so all the features they’ve talked about are cool but we think your phone’s foundations are even more important battery life security startup time and stability after all if your battery dies at 4 p.m.

none of the other features that Dave talked about really matter. So in O, we're investing in what we call vitals: keeping your phone secure and in a healthy state to maximize power and performance. We've invested in three foundational building blocks: security enhancements, OS optimizations, and tools to help developers build great apps.

First, security. Android was built with security in mind from day one, with application sandboxing, and as Android has matured, we've developed vast mobile security services. We now use machine learning to continuously comb through apps uploaded to Play, flagging potentially harmful apps, and we scan over 50 billion apps every day, scanning every installed app on every connected device. When we find a potentially harmful app, we disable it or remove it. But we found most Android users don't know these services come built in with Android devices with Google Play, so for greater peace of mind we're making them more visible and accessible, and doubling down on our commitment to security, with the introduction of Google Play Protect. Here you can see Play Protect has recently scanned all your apps, and no problems were found. That's Google Play Protect, and it's available out of the box on every Android device with Google Play.

Second, OS optimizations. The single biggest visible change in O is boot time; on Pixel, for example, you'll find in most cases your boot time is now twice as fast. And we've made all apps faster by default. We do this through extensive changes to our runtime. Now, this is really cool stuff, like concurrent compacting garbage collection and code locality, but all you really need to know is that your apps will run faster and smoother. Take Google Sheets: aggregate performance over a bunch of common actions is now over two times as fast, and that's all from the OS; there are no changes to the app. But we found apps could still have a huge impact on performance. Some apps were running in the background and consuming tons of system resources, especially draining battery, so
in O we're adding wise limits to background location and background execution. These boundaries put sensible limits on usage, protecting battery life and freeing up memory.

Now, our third theme is helping developers build great apps, and here I want to speak directly to all the developers in the audience. Wouldn't it be cool if Android's engineering team could show you what causes performance issues? Today we've launched Play Console dashboards that analyze every app and pinpoint six top issues that cause battery drain, crashes, and slow UI. For each issue the app has, we show how many users are affected and provide guidance on the best way to fix it. Now imagine if developers could also have a powerful profiler to visualize what's happening inside the app. In Android Studio, we've also launched new unified profiling tools for network, memory, and CPU, so developers can now see everything on a unified timeline and then dive into each profiler. For example, in the CPU profiler you can see every thread, look at the call stack and the time every call is taking, visualize where the CPU is going, and jump to the exact line of code.

OK, so that's Android vitals: how we're investing in your phone's foundational security and performance. Later today you'll see Android's developer story from end to end, our hard work to help developers build great apps at every stage: writing code, tuning, launching, and growing. But there is one more thing, one thing we think would be an incredible complement to the story, and it is one thing our team has never done for developers: we have never added a new programming language to Android. Today, we're making Kotlin an officially supported language in Android. Kotlin is one our developer community has already asked for. It makes developers so much more productive. It is fully Android runtime compatible, it is totally interoperable with your existing code, it has fabulous IDE support, and it's mature and production-ready from day one. We are also
announcing our plans to partner with JetBrains, creating a foundation for Kotlin, and I am so happy JetBrains CEO Maxim Shafirov is here today. This new language is wonderful, but we also thought we should increase our investment in our existing languages, so we're doing that too. Please join us at the developer keynote later today to hear our story from end to end.

OK, so let's wrap up. There are tons more features in Android O which we don't have time to go into today, everything from redesigned settings, to Project Treble, which is one of the biggest changes to the foundations of Android to date, to downloadable fonts with new emoji, and much more. If you want to try some of these features for yourself, and you do, I'm happy to announce we're making the first beta release of O available today; head over to android.com/beta. But there's more. You probably thought we were done talking about Android, but I'd like you to hear some more about Android, and for that, please welcome Sameer.

Thank you, nice stuff. Hi, everyone. From the beginning, Android's mission has been to bring the power of computing to everyone, and we've seen tremendous growth over the last few years, from the high end to entry-level devices, in countries like Indonesia, Brazil, and India. In fact, there are now more users of Android in India than there are in the US, and every minute, seven Brazilians come online for the first time. All this progress is amazing. Those of us who have a smartphone intuitively understand the profound impact that computing is having on our daily lives, and that's why our team gets so excited about how we can help bring this technology to everyone. So we took a step back to think about what it would take to get smartphones to more people, and a few things are clear: devices would need to be more affordable, with entry-level prices dropping significantly. This means hardware that uses less powerful processors and far less memory than on premium devices. But the hardware is only half the
equation; the software also has to be tuned for users' needs around limited data connectivity and multilingual use. We learned a lot from our past efforts here, with Project Svelte in KitKat and the original Android One program, but we felt the time was right to take our investment to the next level. So today I'm excited to give you a sneak peek into a new experience we're building for entry-level Android devices. Internally, we call it Android Go.

Android Go focuses on three things: first, optimizing the latest release of Android to run smoothly on entry-level devices, starting with Android O; second, a rebuilt set of Google apps that use less memory, storage space, and mobile data; and third, a version of the Play Store that contains the whole app catalog but highlights the apps designed by all of you for the next billion users. All three of these things will ship together as a single experience, starting on Android O devices with one gigabyte or less of memory.

Let's take a look at some of the things we're working on for Android Go. First, let's talk about the operating system. For manufacturers to make more affordable entry-level devices, the prices of their components have to come down. Take one example: memory is an expensive component, so we're making a number of optimizations to the system UI and the kernel to allow an Android O device built with the Go configuration to run smoothly with as little as 512 megabytes to 1 gigabyte of memory. Now, on-device performance is critical, but data costs and intermittent connectivity are also big challenges for users. One person put it best to me when she said mobile data feels like currency, and she wanted more control over the way she spent it. So on these devices we're putting data management front and center in quick settings, and we've created an API that carriers can integrate with, so you can see exactly how much prepaid data you have left and even top up right there on the device. But beyond the OS, the Google apps are also getting
smarter about data. For example, on these devices the Chrome Data Saver feature will be turned on by default. Data Saver transcodes content on the server and simplifies pages when you're on a slow connection, and now we're making the savings more visible in the UI. In aggregate, this feature is saving users over 750 terabytes of data every day. I'm also really excited that the YouTube team has designed a new app called YouTube Go for users with limited data connectivity. Feedback on the new YouTube Go app has been phenomenal, and we're taking many of the lessons we've learned here and applying them to several of our Google apps.

Let me show you some of the things I love about YouTube Go. First, there's a new preview experience, so you can get a sneak peek inside a video before you decide to spend your data to watch it. And when you're sure this is the video for you, you can select the streaming quality you want and see exactly how much mobile data that's going to cost you. But my favorite feature of YouTube Go is the ability to save videos while you're connected, so you can watch them later when you might not have access to data. And if you want to share any of those videos with a friend, you can use the built-in peer-to-peer sharing feature to connect two of your devices together directly and share the files across without using any of your mobile data at all.

Beyond data management, the Google apps will also make it easier to seamlessly go between multiple languages, which is a really common use case for people coming online today. For example, Gboard now supports over 191 languages, including the recent addition of 22 Indian languages, and there's even a transliteration feature which allows you to spell words phonetically on a QWERTY keyboard to type in your native language script. Gboard is super cool, so I want to show it to you. I grew up in the US, so for any of my family that's watching, don't get too excited by the demo; I haven't learned Hindi yet, and
I'm sorry, Mom. OK, so let's say I want to send a quick note to my aunt in India. I can open up Allo, and using Gboard, I can type how it sounds phonetically, and boom, "kaise ho," which means "how are you" in Hindi; transliteration automatically gives me the Hindi script. That's pretty cool. Now say I want to ask her how my I/O speech is going, but I don't know how to say that in Hindi at all. I can use the built-in Google Translate feature to say "how is this going," and seamlessly I get Hindi script, all built right into the keyboard. Wow, my family is apparently a tough audience.

All right, so the Google apps are getting the Go treatment, but what has always propelled Android forward is the apps from all of you, and no surprise, many of our developer partners have optimized their apps already. So to better connect users with these experiences, we'll be highlighting them in the Play Store; one example is right here on Play's home page. To be eligible for these new sections, we've published a set of best practices called Building for Billions, which includes recommendations we've seen make a big difference in the consumer experience, things such as designing a useful offline state, reducing your APK size to less than 10 megabytes, and using GCM or JobScheduler for better battery and memory performance. Also in Building for Billions you'll find best practices for optimizing your web experience; we've seen developers build amazing things with new technologies such as progressive web apps. We hope you can come to our developer keynote later today to learn a whole lot more.

OK, that was a quick walkthrough of some of the things coming in Android Go. Starting with Android O, all devices with one gigabyte of RAM or less will get the Go configuration, and going forward, every Android release will have a Go configuration. We'll be unveiling much more later this year, with the first devices shipping in 2018. We look forward to seeing what you'll build and how we can bring computing to the next several billion users. Next
up, you'll be hearing from Clay on one of Google's newest platform areas that we're really excited about: VR and AR. Thank you.

Thank you, Sameer. So, Sundar talked about how technologies like machine learning and conversational interfaces make computing more intuitive by enabling our computers to work more like we do, and we see VR and AR in the same light: they enable us to experience computing just as we experience the real world. Virtual reality can be transporting; you can experience not just what it's like to see someplace, but what it's like to really be there. And augmented reality uses your surroundings as context and puts computing into the real world. A lot has happened since Google I/O last year, and I'm excited to share a bit of what we've been up to.

So let's start with VR. Last year we announced Daydream, our platform for mobile virtual reality, and then in October, to kick-start the Daydream ecosystem, we released Daydream View, a VR headset made by Google. It's super comfortable, it's really easy to use, and there's tons to do with it: you can play inside alternate worlds in games like Virtual Virtual Reality, you can see any part of our world with apps like Street View, and you can visit other worlds with apps like Hello Mars. There's already a great selection of Daydream phones out there, and we're working with partners to get Daydream on even more. First, I'm pleased that LG's next flagship phone, which launches later this year, will support Daydream. And there's another: I'm excited to announce that the Samsung Galaxy S8 and S8+ will add Daydream support this summer with a software update. Samsung, of course, makes many of the most popular phones in the world, and we're delighted to have them supporting Daydream. So, great momentum in Daydream's first six months; let's talk about what's next.

With Daydream, we showed that you can create high-quality mobile VR experiences with just a smartphone and a simple headset, and there are a lot of nice things about smartphone VR: it's easy,
there aren't a bunch of cables and things to fuss with, you can choose from a bunch of great compatible phones, and of course it's portable; you can throw your headset in a bag. We asked: how can we take the best parts of smartphone VR and create a kind of device with an even better experience? Well, I'm excited to announce that an entirely new kind of VR device is coming to Daydream, what we call standalone VR headsets, and we're working with partners to make them.

So what's a standalone headset? Well, the idea is you have everything you need for VR built right into the headset itself: no cables, no phone, and certainly no big PC. The whole device is designed just for VR, and that's cool for a couple of reasons. First, it's easy to use: getting into VR is as easy as picking the thing up; it's one step and two seconds. And second, presence, by which I mean really feeling like you're there. By building every part of the device specifically for VR, we've been able to optimize everything, the displays, the optics, the sensors, all to deliver a stronger sense of being transported. And nothing heightens the feeling of presence like precise tracking, how the headset tracks your movement, and we've dramatically improved tracking with a technology that we call WorldSense. WorldSense enables what's known as positional tracking; with it, your view of the virtual world exactly matches your movement in the real world. It works by using a handful of sensors on the device that look out into your surroundings, and that means it works anywhere: there's no setup, there are no cameras to install, and with it you really feel like you're there.

Now, just as we did with Daydream-ready smartphones, we're taking a platform approach with standalone headsets, working with partners to build some great devices. To start, we worked with Qualcomm to create a Daydream standalone headset reference design, a sort of device blueprint partners can build from, and we're working closely with two amazing consumer electronics companies to
build the first headsets. First, HTC, the company that created the Vive. They're a leader in VR, and we're delighted to be working with them on a standalone VR headset for Daydream. And second, Lenovo. We've been partners for years, working together on Tango, and now we're excited to work with them on VR. These devices will start to come to market later this year. So that's the update on VR: great momentum with apps, more Daydream-ready phones on the way, and a new category of devices that we think people are going to love.

Let's turn to augmented reality. A lot of us were introduced to the idea of AR last year with Pokémon Go. The app gave us a glimpse of AR, and it showed us just how cool it can be to have digital objects show up in our world. Well, we've been working in this space since 2013 with Tango, a sensing technology that enables devices to understand space more like we do. Two years ago, in 2015, we released a developer kit; then last year we shipped the first consumer-ready Tango phone, and I'm excited to announce that the second-generation Tango phone, the Asus ZenFone AR, will go on sale this summer. Now, looking at the slides, you may notice a trend: the devices are getting smaller, and you can imagine far more devices having this capability in the future.

It's been awesome to see what developers have done with the technology, and one thing we've seen clearly is that AR is most powerful when it's tightly coupled to the real world, and the more precisely, the better. That's why we've been working with the Google Maps team on a service that can give devices access to very precise location information indoors. It's kind of like GPS, but instead of talking to satellites to figure out where it is, your phone looks for distinct visual features in the environment and triangulates with those. So where you have GPS outdoors, indoors we call this VPS: Google's Visual Positioning Service. We think it's going to be incredibly useful in a whole bunch of places. For example, imagine you're at
Lowe's, a home-improvement store that has basically everything, and if you've been there, you know it's really big. We've all had that moment when you're struggling to find that one weird random screwdriver thing. Imagine in the future your phone could just take you to that exact screwdriver and point it out to you on the shelf. It turns out we can do this with VPS; let me show you how, and this is working today. Here we are walking down an aisle at Lowe's, and the phone finds these key visual feature points, which you can see there in yellow. By comparing the feature points against previously observed ones, those colorful dots in the back, the phone can figure out exactly where it is in space, down to within a few centimeters. So GPS can get you to the door, and then VPS can get you to the exact item that you're looking for.

Further out, imagine what this technology could mean to people with impaired vision, for example: VPS and an audio-based interface could transform how they make their way through the world. And it combines so many things that Google is good at: mapping, computer vision, distributed computing. We think precise location will be critical for camera-based interfaces, so VPS will be one of the core capabilities of Google Lens. We're really excited about the possibilities here.

The last thing I wanted to share is something we've been working on that brings many of these capabilities together in a really important area, and that's education. Two years ago we launched Expeditions, which is a tool for teachers to take their classes on virtual reality field trips, and 2 million students have used it. Today we're excited to announce that we're adding a new capability to Expeditions: AR mode, which enables kind of the ultimate show-and-tell, right in the classroom. If we could roll the video, please.

All right, who wants to see a volcano? Three, two, one. Look at that lava that's coming out of that. Pretend you're an airplane and fly over the tornado. What do you
see? We're learning about DNA and genes, things that we can't see, and so the most exciting thing for me with the AR technology was that I could see kids get an aha moment that I couldn't get by just telling them about it. The minute I saw it pop up on the screen, I mean, kids get up and walk to it; you actually get to turn around and look at things from all angles, so it gave us a nice perspective. See if you can figure out what that might be, based on what you know about the respiratory system. I got to see where the alveoli branched off, and I could look inside them and see how everything works, which I never saw before; it was really, really cool.

We're just delighted with the response we're seeing so far, and we'll be rolling this out later in the year. So, VR and AR are two different flavors of what you might call immersive computing, computing that works more like we do. We think that's a big idea, and in time we see VR and AR changing how we work and play, live and learn. All that I talked about here, these are just the first steps, but we can see where all this goes, and we're incredibly excited about what's ahead. Thanks so much; back to Sundar.

We wanted to make our machine learning an open-source project so that everyone outside of Google could use the same system we're using inside. It's incredible, when you open-source a platform, to see what people can do on top of it, and we are really excited about the momentum behind TensorFlow. It's already the most popular ML repository on GitHub, and we're going to push it further. We are also announcing the TensorFlow Research Cloud: we are giving away 1,000 cloud TPUs, which is 180 petaflops of computing, to academics and researchers for free, so that they can do more with it. I'm always amazed by the stories I hear from developers when I meet them, and I want to highlight one young developer today: Abu Qader from Chicago. He has used TensorFlow to help improve health for everyone. Let's take a look.

My name is Abu. I am a high school student, 17 years old.
My freshman year, I remember googling "machine learning" and having no clue what it meant. The really cool thing about the Internet is that someone's already doing it; you can just YouTube it and it's right there. The minute I really saw what machine learning can do, it kind of like hit something within me, this need to build things to help people. My parents are immigrants from Afghanistan. It's not easy coming in, and the only reason we made it through some of the times that we did was because people showed acts of kindness. Seeing that at an early age was enough for me to understand that helping people always comes back to you. And then it kind of hit me, a way where I could actually genuinely help people. Mammograms are the cheapest imaging format there is; it's the most accessible to people all around the world. But one of the biggest problems that we see in breast cancer is misdiagnosis, so I decided I was going to build a system for early detection of breast cancer tumors that's accessible to everyone and that's more accurate. How was I going to do it? Machine learning. The biggest, most extensive resource that I've used is this platform called TensorFlow. I've spent so many hours going really deep into these open-source libraries and just figuring out how it works. Eventually I wrote a whole system that can help radiologists make their decisions. I'm by no means a wizard at machine learning; I'm completely self-taught, I'm in high school, I YouTubed it and just fought my way through. You don't know about that kid in Brazil that might have a groundbreaking idea, or that kid in Somalia; you don't know that they have these ideas. But if you can open-source your tools, you can give them a little bit of hope that they can actually conquer what they're thinking of.

Abu started this as a school project and has continued to build it on his own. We are very, very fortunate to have Abu and his family here with us today. Thank you for joining us; enjoy I/O. We've been talking about
machine learning in terms of how it will power new experiences and research, but it's also important we think about how this technology can have an immediate impact on people's lives by creating opportunities for economic empowerment. 46% of US employers say they face talent shortages and have issues filling open job positions, while job seekers may be looking for openings right next door. There is a big disconnect here. Just like we focused our contributions to teachers and students through Google for Education, we want to better connect employers and job seekers through a new initiative: Google for Jobs.

Google for Jobs is a commitment to use our products to help people find work. It's a complex, multifaceted problem, but we've been investing a lot over the past year, and we've made significant progress. Last November we announced the Cloud Jobs API; think of it as a first fully end-to-end, pre-trained vertical machine learning model, offered through Google Cloud to employers like FedEx, Johnson & Johnson, and CareerBuilder, and we are expanding to many more employers. On Johnson & Johnson's career site, they found that applicants were 18 percent more likely to apply to a job, suggesting the matching is working more efficiently, and so far, over four and a half million people have interacted with this API.

But as we started working on this, we realized the first step for many people when they start looking for a job is searching on Google, so it's like other search challenges we have worked on in the past. We built a new feature in Search with the goal that no matter who you are or what kind of job you are looking for, you can find the job postings that are right for you. As part of this effort, we worked hard to include jobs across experience and wage levels, including jobs that have traditionally been much harder to search for and classify, think retail jobs, hospitality jobs, and so on. To do this well, we have worked with many partners: LinkedIn, Monster, Facebook, CareerBuilder, Glassdoor, and many
more. So let's take a look at how it works. Let's say you come to Google and start searching for retail jobs, and you're from Pittsburgh. We understand that, and you can scroll down and click into this immersive experience, where we immediately start showing the most relevant jobs for you. And you can filter: you can choose full time, and as you can see, you can drill down easily. Say I want to look at jobs posted in the past three days; you can do that, and now you're looking at retail jobs in Pittsburgh posted within the last three days. You can also filter by job titles. It turns out employees and employers use many different terminologies; for example, retail could mean a store clerk, a sales representative, a store manager. We use machine learning to cluster job titles automatically, so that we can bring you all the relevant jobs. As you scroll through, you will notice that we even show commute times; it turns out to be an important criterion for many people, and we will soon add a filter for that as well. And if you find something that's of interest to you, maybe one of the retail positions, you can click on it and go to it right away; you can scroll to find more information if you want, and you're one click away from applying.

It's a powerful tool. We are addressing jobs of every skill level and experience level, and we are committed to making these tools work for everyone; as part of building it, we literally talked to hundreds of people. So whether you're in a community college looking for a barista job, a teacher who's relocating across the country and looking for teaching jobs, or someone who is looking for work in construction, the product should do a great job of bringing that information to you. We are rolling this out in the U.S.

in the coming weeks, and then we are going to expand it to more countries in the future. I'm personally enthusiastic about this initiative because it addresses an important need and taps our core capabilities as a company, from searching and organizing information to AI and machine learning.

It's been a busy morning. We've talked about this important shift from a mobile-first to an AI-first world, and we are driving it forward across all our products and platforms, so that all of you can build powerful experiences for new users everywhere. It will take all of us working together to bring the benefits of technology to everyone. I believe we are on the verge of solving some of the most important problems we face; that's our hope. Let's do it together. Thanks for your time today, and enjoy Google I/O.
