Good morning, and welcome to Google I/O. It's a beautiful day, warmer than last year I think, so I hope you're all enjoying it. Thank you for joining us. We have over 7,000 people here today, as well as many, many people joining via live stream from locations around the world, so thank you all for being here.

We have a lot to cover, but before we get started, there was one important piece of business I wanted to get out of the way. Towards the end of last year, it came to my attention that we had a major bug in one of our core products: it turns out we got the cheese wrong in our burger emoji. So we got hard at work. I never knew so many people cared about where the cheese goes, but we fixed it. The irony of the whole thing is that I'm a vegetarian in the first place. Hopefully we got the cheese right. As we were working on this, another issue came to my attention. I don't even want to tell you the explanation the team gave me as to why the foam was floating above the beer, but we restored the natural laws of physics, so all is well, and we can get back to business.

We can talk about all the progress since last year's I/O. I'm sure all of you would agree it's been an extraordinary year on many fronts. We're at an important inflection point in computing, and it's exciting to be driving technology forward. It's made us even more reflective about our responsibilities. Expectations for technology vary greatly depending on where you are in the world or what opportunities are available to you. For someone like me, who grew up without a phone, I can distinctly remember how gaining access to technology can make a difference in your life, and we see this in the work we do around the world. You see it when someone gets access to a smartphone for the first time, and you can feel it in the huge demand for digital skills. That's why we've been so focused on bringing digital skills to communities around the world. So far we have trained over 25 million people, and we expect that number to rise to over 60 million in the next five years.

It's clear technology can be a positive force, but it's equally clear that we can't just be wide-eyed about the innovations technology creates. There are very real and important questions being raised about the impact of these advances and the role they will play in our lives. We know the path ahead needs to be navigated carefully and deliberately, and we feel a deep sense of responsibility to get this right. That's the spirit with which we are approaching our core mission: to make information more useful, accessible, and beneficial to society. I've always felt we were fortunate as a company to have a timeless mission that feels as relevant today as when we started, and I'm very excited about how we can approach that mission with renewed vigor, thanks to the progress we've seen in AI. AI is enabling us to do this in new ways, solving problems for our users around the world.

Last year at Google I/O we announced Google AI, a collection of our teams and efforts to bring the benefits of AI to everyone. We want this to work globally, so we are opening AI centers around the world. AI is going to impact many, many fields, and I want to give you a couple of examples today. Healthcare is one of the most important fields AI is going to transform. Last year we announced our work on diabetic retinopathy, a leading cause of blindness, where we use deep learning to help doctors diagnose it earlier, and we've been running field trials since then at Aravind and Sankara hospitals in India.
The field trials are going really well; we are bringing expert diagnosis to places where trained doctors are scarce. It turned out that, using the same retinal scans, there were things humans didn't quite know to look for, but our AI systems offered more insight. The same eye scan, it turns out, holds information with which we can predict the five-year risk of an adverse cardiovascular event, a heart attack or stroke. To me, the interesting thing is that the machine learning systems offered newer insights beyond what doctors could find in these eye scans. This could be the basis for a new, non-invasive way to detect cardiovascular risk. We just published the research, and we are going to be working to bring this to field trials with our partners.

Another area where AI can help is in predicting medical events. Doctors have a lot of difficult decisions to make, and getting advance notice, say 24 to 48 hours before a patient is likely to get very sick, makes a tremendous difference in the outcome. So we put machine learning systems to work. We've been working with our partners using de-identified medical records, and it turns out that if you analyze over a hundred thousand data points per patient, more than any single doctor could analyze, we can quantitatively predict the chance of readmission 24 to 48 hours earlier than traditional methods. It gives doctors more time to act. We are publishing our paper on this later today, and we look forward to partnering with hospitals and medical institutions.

Another area where AI can help is accessibility, where we can make day-to-day use cases much easier for people. Take a common one: you come back home at night and turn on your TV, and it's not that uncommon to see two or more people passionately talking over each other. Imagine you are hearing-impaired and relying on closed captioning to understand what's going on. What you see is gibberish; you can't make sense of it. We have machine learning technology called Looking to Listen: it not only looks for audio cues but combines them with visual cues to clearly disambiguate the two voices. You can see how that could work in YouTube, and how we can put technology to work to make an important day-to-day use case profoundly better.

The great thing about technology is that it's constantly evolving. In fact, we can even apply machine learning to a 200-year-old technology, Morse code, and make an impact on someone's quality of life. Let's take a look.

"Hi, I am Tania, and this is my voice. I use Morse code by inputting dots and dashes with switches mounted near my head. As a very young child, I used a communication word board; I used a head stick to point to the words. It was unattractive, to say the least. Once Morse code was incorporated into my life, it was a feeling of pure liberation and freedom. I think that is why I like skydiving so much; it is the same kind of feeling. Through skydiving I met Ken, the love of my life and partner in crime. It has always been very, very difficult just to find Morse code devices to try. This is why I had to create my own, with help from Ken. I have a voice and more independence in my daily life, but most people don't have Ken. It is our hope that we can collaborate with the Gboard team to help people who want to tap into the freedom of using Morse code."
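To get a feel for what a Morse input mode involves at its simplest, here is a toy decoder: a static code table plus a function that turns dot-and-dash sequences into text. This is only an illustration; Gboard's actual implementation layers timing-based input, machine-learned predictions, and suggestions on top.

```kotlin
// Toy Morse decoder: illustrative only, not Gboard's implementation.
val morseTable = mapOf(
    ".-" to "A", "-..." to "B", "-.-." to "C", "-.." to "D", "." to "E",
    "..-." to "F", "--." to "G", "...." to "H", ".." to "I", ".---" to "J",
    "-.-" to "K", ".-.." to "L", "--" to "M", "-." to "N", "---" to "O",
    ".--." to "P", "--.-" to "Q", ".-." to "R", "..." to "S", "-" to "T",
    "..-" to "U", "...-" to "V", ".--" to "W", "-..-" to "X", "-.--" to "Y",
    "--.." to "Z"
)

// Letters are separated by single spaces, words by " / ".
fun decodeMorse(input: String): String =
    input.split(" / ").joinToString(" ") { word ->
        word.split(" ").joinToString("") { morseTable[it] ?: "?" }
    }

fun main() {
    println(decodeMorse(".... ..")) // prints "HI"
}
```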
Gboard is the Google keyboard. What we have discovered working on Gboard is that there are entire pockets of population in the world, and when I say pockets, it's tens of millions of people, who have never had access to a keyboard that works in their own language. With Tania, we've built support in Gboard for Morse code: an input modality that allows you to type in Morse code and get text out, with predictions and suggestions. I think it's a beautiful example of where machine learning can really assist someone in a way a normal keyboard without artificial intelligence couldn't. As Tania put it: "I am very excited to continue on this journey. Many, many people will benefit from this, and that thrills me to no end."

It's a very inspiring story, and we're very excited to have Tania and Ken join us today. Tania and Ken are actually developers; they worked with our team to harness the power of predictive suggestions in Gboard in the context of Morse code. I'm really excited that Gboard with Morse code is available in beta later today.

It's great to reinvent products with AI, and Gboard is a great example of it: every single day, users choose over 8 billion auto-corrections. Another one of our core products we are redesigning with AI is Gmail. We just gave Gmail a fresh new look with a recent redesign; I hope you're all enjoying using it. Now we are bringing another feature to Gmail, called Smart Compose. As the name suggests, we use machine learning to start suggesting phrases for you as you type; all you need to do is hit Tab to keep auto-completing. In this case, it understands the subject is Taco Tuesday, so it suggests chips, salsa, and guacamole. It takes care of mundane things like addresses so you don't need to worry about them and can focus on what you actually want to say. I've been loving using it; I've been sending a lot more emails to the company. Not sure what the company thinks of it, but it's been great. We are rolling out Smart Compose to all our users this month, and I hope you enjoy using it as well.
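As a rough illustration of the user experience only (Smart Compose itself is powered by a neural language model, not a lookup table), here is a toy completer that proposes the rest of a known phrase as you type, which the user would accept by hitting Tab. The phrase list is a hypothetical stand-in.

```kotlin
// Toy phrase completion: a hand-rolled phrase list stands in for the
// neural language model that actually powers Smart Compose.
val phrases = listOf(
    "chips and salsa",
    "chips and guacamole",
    "have a great weekend"
)

// Return the gray "ghost text" completion for the current input, if any.
fun suggestCompletion(typed: String): String? =
    phrases.firstOrNull { it.startsWith(typed, ignoreCase = true) && it.length > typed.length }
        ?.substring(typed.length)

fun main() {
    val typed = "chips and s"
    val suggestion = suggestCompletion(typed)
    println(typed + (suggestion ?: "")) // hitting Tab would accept: "chips and salsa"
}
```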
Another product we built from the ground up using AI is Google Photos. It works amazingly well, and at scale: if you click on one of these photos, into what we call the photo viewer experience, you're looking at one photo at a time, and to give you a sense of the scale, every single day over five billion photos are viewed by our users. We want to use AI to help in those moments, so we are bringing a new feature called Suggested Actions: smart actions suggested right in context for you to act on. Say you went to a wedding and you're looking through the pictures. We understand that your friend Lisa is in the picture, and we offer to share the three photos with her; with one click, those photos are sent. So the anxiety where everyone is trying to get the pictures onto their own phone, I think we can make that better. Or if the photos from the same wedding are underexposed, our AI systems offer a fix for the brightness right there: one tap, and we fix it for you. Or if you took a picture of a document you want to save for later, we can recognize it, convert the document to a PDF, and make it much easier to use later. We want to make all these simple cases delightful.

By the way, AI can also deliver unexpected moments. For example, if you have this cute picture of your kid, we can make it better: we can drop the background to black and white, pop the color, and make the kid even cuter. Or if you happen to have a very special memory, something in black and white, maybe of your mother and grandmother, we can recreate that moment in color and make it even more real and special. All these features will be rolling out to Google Photos users in the next couple of months.

The reason we are able to do this is that for a while we've been investing in the scale of our computational architecture. This is why last year we talked about our Tensor Processing Units, special-purpose machine learning chips. These are driving all the product improvements you're seeing today, and we've made them available to our cloud customers. Since last year we've been hard at work, and today I'm excited to announce our next generation, TPU 3.0. These chips are so powerful that, for the first time, we've had to introduce liquid cooling in our data centers. We put these chips together in the form of giant pods; each of these pods is now 8x more powerful than last year's, well over 100 petaflops. This is what allows us to develop better, larger, more accurate models, and helps us tackle even bigger problems.

One of the biggest problems we are tackling with AI is the Google Assistant. Our vision for the perfect assistant is that it's naturally conversational and there when you need it, so that you can get things done in the real world, and we are working to make it even better. We want the Assistant to be something that's natural and comfortable to talk to. To do that, we need to start with the foundation of the Google Assistant: the voice. Today, that's how most users interact with our Assistant. Our current voice is codenamed Holly. She was a real person; she spent months in our studio, and then we stitched those recordings together to create the voice. But 18 months ago, we announced a breakthrough from our DeepMind team called WaveNet. Unlike current systems, WaveNet models the underlying raw audio to create a more natural voice: it's closer to how humans speak in the pitch, the pace, even the pauses that convey meaning. We want to get all of that right, so we brought WaveNet into the Assistant, and as of today we are adding six new voices to the Google Assistant. Let's have them say hello.

"Good morning, everyone." "I'm your Google Assistant." "Welcome to Shoreline Amphitheatre." "We hope you'll enjoy Google I/O." "Back to you, Sundar."

Our goal is one day to get the right accents, languages, and dialects right globally, and WaveNet can make this much easier. With this technology, we started wondering who we could get into the studio with an amazing voice. Take a look.

"Couscous: a type of North African semolina in granules made from crushed durum wheat." "I want a puppy with sweet eyes and a fluffy tail who likes my haikus." "Don't we all." "Happy birthday to the person whose birthday it is, happy birthday to you."

That's right: John Legend. He would probably tell you he don't want to brag, but he'll be the best assistant you ever had. "Can you tell me where you live?" "You can find me on all kinds of devices: phones, Google Homes, and, if I'm lucky, in your heart." John Legend's voice is coming to the Assistant. Clearly, he didn't spend all his time in the studio answering every possible question you could ask, but WaveNet allowed us to shorten the studio time, and the model can actually capture the richness of his voice. His voice will be coming later this year, in certain contexts, so that you can get responses like this:
"Right now in Mountain View it's 65 with clear skies. Today it's predicted to be 75 degrees and sunny. At 10 a.m. you have an event called Google I/O keynote; then at 1 p.m. you have margaritas. Have a wonderful day."

I'm looking forward to 1 p.m. So, John's voice is coming later this year. I'm really excited we can drive advances like this with AI. We are doing a lot more with the Google Assistant, and to tell you more about it, let me invite Scott onto the stage.

"Call Maddie." "Okay, dialing now." "Hey Google, book a table for four." "Sounds good." "Hey Google, call my brother." "Hey Google, call my brother. Can you text Carol for me too?" "Hey, Kevin, that was great, but we haven't made 'Yo Google' work yet, so you have to say 'Hey Google.'" "Hey Google, play Sofia." "Hey Google, play the next episode of The Crown on Netflix." "A Channing Tatum movie." "Okay Google... that was great. Chris, get one where you say 'Hey Google.'" "Google, find my phone." "Finding now." "Hey Google... no, Google, lock the front door." "Okay, let's just go with 'Yo Google' then; I'm sure the engineers would love to update everything." "Yo." "Hi, what can I do for you?"

Two years ago, we announced the Google Assistant right here at I/O. Today, the Assistant is available on over 500 million devices, including phones, speakers, headphones, TVs, watches, and more. It's available in cars from more than 40 auto brands, and it works with over 5,000 connected home devices, from dishwashers to doorbells. And people around the world are using it every single day. For example, we launched the Assistant in India last year, and the response has been incredible: daily usage there has tripled since the beginning of the year. By the end of this year, the Assistant will support 30 languages and be available in 80 countries. So we've made great progress, but we're just getting started. Today we're going to share some important ways the Assistant is becoming more naturally conversational and visually assistive, in order to help you do more and get time back.

As you heard from Sundar, new voices you can choose from to make the Google Assistant your own are an important aspect of making the conversation with your assistant more natural. But to be a great conversation partner, the Assistant needs to deeply understand the social dynamics of conversation. For example, let's be honest: it gets a little annoying to say "Hey Google" every time I want to get my assistant's attention. This grandma, who you might have seen on YouTube, was definitely feeling that way as she tried to ask for the weather in Flagler Beach: "Tomorrow there will be showers, with a high of 65 and a low of 56." Well, the Assistant eventually worked for her, but it shouldn't be so hard. Now you won't have to say "Hey Google" every time. Check this out.

"Hey Google, did the Warriors win?" "Yes, the Warriors won 118 to 92 last Sunday against the Pelicans." "Nice. When's their next game?" "The Warriors' next game is today at 7:30 p.m.,
where they will be playing the Pelicans." "Great, it's going to be fun to watch tonight. Hey Google, remind me to find my Kevin Durant jersey when I get home." "Sure, I'll remind you when you get home."

You'll notice that I was able to have a natural back-and-forth conversation with my Assistant, without having to repeat "Hey Google" for every follow-up request. Even more helpful, my Assistant could understand when I was talking to it versus when I was talking to all of you. We call this Continued Conversation, and it's been a top feature request. You'll be able to turn it on in the coming weeks.

All right, so let's say it's almost game time. "Hey Google, turn on the Warriors game and start the popcorn maker." "Sure. Here's the Golden State Warriors game on YouTube TV, and turning the popcorn maker on." "Oh, and can you dim the lights in the family room and in the kitchen?" "Sure, dimming the lights in the family room and in the kitchen." You'll notice that in both of these examples I asked for multiple things at once, something that feels really natural for us but is very difficult for computers to understand. It's something we call Multiple Actions, and we're rolling out this capability in the Assistant now. Now, you might think: come on, it's easy, just look for the "and." But it's not always so straightforward. In this example, even though it has the word "and," it's not two requests; it's actually just one. But here, even though it looks very similar, the Assistant has to break it apart to form the two requests. In linguistics, this is called coordination reduction. Multiple actions can get way more complicated, so just for fun: "Hey Google, who was the governor of California when Kevin Durant was drafted, and what team drafted him?" "Arnold Schwarzenegger was the governor in 2007. Kevin Durant was drafted by the Seattle SuperSonics." Just in case you were wondering.

Next, we've also been working on improving the conversation with the Google Assistant for families. Last fall we launched our family experience for the Google Assistant, providing family-friendly games, activities, and stories. We've continued to grow our library, and families have listened to over 130,000 hours of children's stories in the last two months alone. As we continue to improve the experience for families, a concern we've heard from many parents, including people on the team who have children, is: are our kids learning to be bossy and demanding when they can just say "Hey Google" to ask for anything they need? It's not a simple area, but one step we've been working on is something we call Pretty Please. Some of the parents on the team have been testing it out with their families. Take a look. "Hey Google, please..." "Thanks for saying please." "Please..." "What a nice way to ask me." "Thanks for asking so nicely. Once upon a time, there was a wacky walrus..." "Please help..." "You're very polite." So the Assistant understands and responds to positive conversation with polite reinforcement. We've been consulting with families and child-development experts, and we plan to offer Pretty Please as an option for families later this year.

So with new voices for your Assistant, Continued Conversation, Multiple Actions, and Pretty Please, AI is helping us make big strides so everyone can have a more natural conversation with their assistant.
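To see why this is harder than just looking for the word "and," here is a deliberately naive sketch: it splits an utterance on "and" unless the surrounding phrase matches a known single entity. The entity list is a hypothetical stand-in; the real Assistant relies on far richer natural language understanding than anything shown here.

```kotlin
// Naive coordination-reduction sketch: decide whether "and" joins two
// requests or belongs to a single entity. Illustrative only.
val singleEntities = listOf("rock and roll", "mac and cheese") // hypothetical allowlist

fun splitRequests(utterance: String): List<String> {
    val lower = utterance.lowercase()
    // If "and" sits inside a known entity, treat the utterance as one request.
    if (singleEntities.any { lower.contains(it) }) return listOf(utterance.trim())
    return utterance.split(" and ").map { it.trim() }
}

fun main() {
    // Two requests: must be broken apart.
    println(splitRequests("turn on the Warriors game and start the popcorn maker"))
    // One request: the "and" is part of the entity name.
    println(splitRequests("play some rock and roll"))
}
```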
And now I'd like to introduce Lilian, who's going to share some exciting things we're doing to bring voice and visual assistance together.

Thanks, Scott, and good morning, everyone. Over the last couple of years, the Assistant has been focused on the verbal conversation you can have with Google. Today we're going to unveil a new visual canvas for the Google Assistant across screens, bringing the simplicity of voice together with a rich visual experience. Now, I'm going to invite Maggie to come up, because we're going to be switching to a lot of live demos.

We gave you an early look at our new smart displays at CES in January. We're working with some of the best consumer electronics brands, and today I'm excited to announce that the first smart displays will go on sale in July. I'll show you some of the ways this new device can make your day easier by combining the simplicity of voice with the glanceability of a touchscreen. So let's switch over to the live demos.

This is one of the Lenovo smart displays. The ambient screen integrates with Google Photos and greets me with pictures of my kids, Bella and Hudson. Those really are my kids, and it's the best way to start my day every morning. Because the device is controlled by voice, I can watch videos or live TV with a simple command, which makes it easy to enjoy my favorite shows while multitasking around the house. "Hey Google, let's watch Jimmy Kimmel Live." "Okay, playing Jimmy Kimmel Live on YouTube TV." "Here's something from my life: I was driving my daughter to school this morning..." That's right: on YouTube TV, you will be able to watch all of these amazing shows, from local news to live sports and much more, and they will be available on smart displays. Of course, you can also enjoy all the normal content from YouTube, including how-to videos, music, and original shows like the brand-new series Cobra Kai, which we started binge-watching this week because it's so good.

Cooking is another instance where the blend of voice and visuals is incredibly useful. Nick and I are always looking for simple, family-friendly recipes. "Hey Google, show me recipes for pizza bombs." "Sure, here are some recipes." We can choose the first one, from Tasty; that one looks good. You see all the recipe details come right up, and we can just tap to start cooking. "Sure, here's Tasty." Seeing a video demonstration along with the spoken instructions is a total game changer for cooking, especially when you have your hands full. Thanks, Maggie.

So we showed you a couple of ways smart displays can make life at home easier, but there are so many more: from staying in touch with family through Broadcast and Duo video calling, to keeping an eye on your home with all of our smart home partners, to seeing in advance what the morning commute looks like with Google Maps. We're thoughtfully integrating the best of Google, and working with developers and partners all around the world, to bring voice and visuals together in a completely new way for the home.

Inspired by the smart display experiences, we've also been working to reimagine the Assistant experience on the screen that's with us all the time: our mobile phones. I'm going to give you a sneak peek into how the Assistant on the phone is becoming more immersive, interactive, and proactive. So let's switch to another live demo. "Hey Google, tell me about Camila Cabello." "According to Wikipedia, Karla Camila Cabello Estrabao is an American singer and songwriter." As you can see, we're taking full advantage of the screen to give you a rich and immersive response. Here's another: "Turn down the heat." "Sure, cooling the living room down." For a smart home request, you can see we're bringing the controls right to your fingertips. And here's one of my favorites: "Hey Google, order my usual from Starbucks."
"Hello, welcome back to Starbucks. That's one tall nonfat latte with caramel drizzle. Anything else?" "No, thanks." "And are you picking that up at the usual place?" I'm going to tap yes. "Okay, your order's in. See you soon." We're excited to share that we're working with Starbucks, Dunkin' Donuts, DoorDash, Domino's, and many other partners on a new food pickup and delivery experience for the Google Assistant. We have already started rolling some of these out, with many more partners coming soon.

Now, rich and interactive responses to my requests are really helpful, but my ideal assistant should also be able to help in a proactive way. So when I'm in the Assistant and swipe up, I now get a visual snapshot of my day. I see helpful suggestions based on the time, my location, and even my recent interactions with the Assistant. I also have my reminders, packages, and even notes and lists organized and accessible right here. I love the convenience of having all these details helpfully curated and so easy to get to. This new visual experience for the phone is thoughtfully designed with AI at the core; it will launch on Android this summer and on iOS later this year.

Sometimes the Assistant can actually be more helpful by having a lower visual profile, like when you're in the car and should stay focused on driving. Let's say I'm heading home from work, and I have Google Maps showing me the fastest route during rush-hour traffic. "Hey Google, send Nick my ETA and play some hip-hop." "Okay, letting Nick know you're 20 minutes away, and check out this hip-hop music station on YouTube." It's so convenient to share my ETA with my husband with a simple voice command. I'm excited to share that the Assistant will come to navigation in Google Maps this summer. So across smart displays, phones, and Maps, this gives you a sense of how we're making the Google Assistant more visually assistive: sensing when to respond with voice, and when to show a more immersive, interactive experience. And with that, I'll turn it back to Sundar.

Thanks, Lilian. It's great to see the progress with the Assistant. As I said earlier, our vision for the Assistant is to help you get things done. It turns out a big part of getting things done is making a phone call: you may want to get an oil change scheduled, maybe call a plumber in the middle of the week, or even schedule a haircut appointment. We are working hard to help users through those moments; we want to connect users to businesses in a good way. Businesses actually rely a lot on this, but even in the U.S.,
sixty percent of small businesses don't have an online booking system. So we thought: hey, AI can help with this problem. Let's go back to that example. Let's say you want to ask Google to make you a haircut appointment on Tuesday between 10 a.m. and noon. What happens is that the Google Assistant makes the call seamlessly in the background for you. What you're going to hear is the Google Assistant actually calling a real salon to schedule the appointment. Let's listen. "What time are you looking for?" "At 12 p.m."
"We do not have a 12 p.m. available, but the closest we have to that is a 1:15." "Do you have anything between 10 a.m. and 12 p.m.?" "Depending on what service she would like. What service is she looking for?" "Just a woman's haircut for now." "Okay, we have a 10 o'clock." "10 a.m. is fine." "Okay, what's her first name?" "The first name is Lisa." "Okay, perfect. So I will see Lisa at 10 o'clock on May 3rd." "Okay, great." "Great, have a great day. Bye."

That was a real call you just heard. The amazing thing is that the Assistant can actually understand the nuances of conversation. We've been working on this technology for many years; it's called Google Duplex. It brings together all our investments over the years in natural language understanding, deep learning, and text-to-speech. And by the way, when it's done, the Assistant can give you a confirmation notification saying your appointment has been taken care of.

Let me give you another example. Say you want to call a restaurant, but maybe it's a small restaurant that isn't easily available to book online, and the call goes a bit differently than expected. Take a listen. "It's for four people." "Four people?" "Actually, for upwards of five people; for four people, you can come." "How long is the wait usually to be seated?" "For next Wednesday? Oh, no, it's not too busy." "Okay. Oh, I got you. Thanks." That was a real call as well. We have many examples where the calls don't go quite as expected, but the Assistant understands the context and the nuance: it knew to ask about wait times in this case, and it handled the interaction gracefully.

We're still developing this technology, and we want to work hard to get this right: to get the user experience and the expectations right, for both businesses and users. But done correctly, it will save time for people and generate a lot of value for businesses. We really want it to work in cases where, say, you have a busy morning and your kid is sick, and you want to call for a doctor's appointment. So we're going to work hard to get this right. There is a more straightforward case where we can roll this out sooner: every single day we get a lot of queries to Google where people are wondering about the opening and closing hours of businesses, but it gets tricky during holidays, and businesses get a lot of calls. As Google, we can make just that one phone call and then update the information for millions of users, and it will save a small business countless calls. We're going to get moments like this right and make the experience better for users. This will be rolling out as an experiment in the coming weeks, so stay tuned.

A common theme across all of this is that we are working hard to give users back time. We've always been obsessed with that at Google: Search is obsessed with getting users to answers quickly and giving them what they want. Which brings me to another area: digital wellbeing. Based on our research, we know that people feel tethered to their devices; I'm sure this resonates with all of you. There is increasing social pressure to respond to anything you get right away, and people are anxious to stay up to date with all the information out there; they have FOMO, fear of missing out. We think there's a chance for us to do better. We've been talking to people, and some introduced us to the concept of JOMO, the actual joy of missing out. So we think we can really help users with digital wellbeing. This is going to be a deep, ongoing effort across all our products and platforms, and we need all of your help. We think we can help users with their digital wellbeing
in four ways: we want to help you understand your habits, focus on what matters, switch off when you need to, and, above all, find balance with your family. Let me give a couple of examples. You'll hear about this from the Android team a bit later, in their upcoming release, but one of my favorite features is Dashboard. In Android, we are going to give you full visibility into how you're spending your time: the apps where you're spending time, the number of times you unlock your phone on a given day, the number of notifications you got, and we're going to help you deal with all of this better.

Apps can help, too. YouTube is going to take the lead: if you choose, it will actually remind you to take a break. For example, if you've been watching YouTube for a while, a prompt may show up and say, hey, it's time to take a break. YouTube will also let users who want it combine all their notifications into a daily digest, so that if you have, say, four notifications, they come to you once during the day. YouTube is rolling out all these features this week.

We've been doing a lot of work in this area. Family Link is a great example, where we provide parents tools to help manage kids' screen time. I think that's an important part of it, but we want to do more: we want to equip kids to make smart decisions. So we have a new, Google-designed approach called Be Internet Awesome, to help kids become safe explorers of the digital world. We want kids to be secure, kind, and mindful online, and we are pledging to train an additional five million kids this coming year. All the tools you're seeing launch with our digital wellbeing site later today.

Another area where we feel tremendous responsibility is news. At times like this, it's more important than ever to support quality journalism; it's foundational to how democracies work. I've always been fond of news. Growing up in India, I have a distinct memory of waiting for the physical newspaper. My grandfather used to stay right next to us, and there was a clear hierarchy: he got his hands on the newspaper first, then my dad, and then my brother and I would go at it. I was mainly interested in the sports section at the time, but over time I developed a fondness for news that has stayed with me even today.

It is a challenging time for the news industry. Recently we launched the Google News Initiative, and we committed $300 million over the next three years. We want to work with organizations and journalists to help develop innovative products and programs that help the industry. We've also had a product here for a long time: Google News. It was actually built right after 9/11, as a 20% project by one of our engineers who wanted to see news from a variety of sources to better understand what had happened. Since then, the volume and diversity of content has only grown; I think there is more great journalism being produced today than ever before. It's also true that people turn to Google in times of need, and we have a responsibility to provide that information. This is why we have reimagined our news product. We are using AI to bring forward the best of what journalism has to offer. We want to give users quality sources that they trust, and we want to build a product that works for publishers. Above all, we want to make sure we're giving users deeper insight and fuller perspective about any topic they're interested in. I'm really excited to announce the new Google News, and here's Trystan to tell you more.
Thank you, Sundar. With the new Google News, we set out to help you do three things: first, keep up with the news you care about; second, understand the full story; and finally, enjoy and support the sources you love. After all, without news publishers and the quality journalism they produce, we'd have nothing to show you here today.

So let's start with how we make it easy for you to keep up with the news you care about. As soon as I open Google News, right at the top I get a briefing with the top five stories I need to know right now. As I move past my briefing, there are more stories selected just for me. Our AI constantly reads the firehose of the web for you, the millions of articles, videos, podcasts, and comments being published every minute, and assembles the key things you need to know. Google News also pulls in local voices and news about events in my area; it's this kind of information that makes me feel connected to my community. This article from the Chronicle makes me wonder how long it would take to ride across the new Bay Bridge. What's cool is that I didn't have to tell the app that I follow politics, love to bike, or want information about the Bay Area: it works right out of the box. And because we've applied techniques like reinforcement learning throughout the app, the more I use it, the better it gets. At any point I can jump in and say whether I want to see less or more of a given publisher or topic. And whenever I want to see what the rest of the world is reading, I can switch over to Headlines to see the top stories generating the most coverage right now around the world.

So let's keep going. You can see there are lots of big, gorgeous images that make this app super engaging, and a truly great video experience. This brings you all the latest videos from YouTube and around the web. All of our design choices focus on keeping the app light, easy, fast, and fun; our guiding principle is to let the stories speak for themselves. What you're seeing here throughout the app is the new Google Material Theme. The entire app is built using Material Design, our adaptable, unified design system that's been uniquely tailored by Google. Later today you'll hear more about this and how you can use Material Theming in your products.

We're also excited to introduce a new visual format we call newscasts; you're not going to see these in any other news app. Newscasts are kind of like a preview of the story, and they make it easier to get a feel for what's going on. Check out this one on the Star Wars movie: here we're using the latest developments in natural language understanding to bring together everything from the Solo movie trailer, to news articles, to quotes from the cast and more, in a fresh presentation that looks absolutely great on your phone. Newscasts give me an easy way to get the basics and decide where I want to dive in more deeply, and sometimes I even discover things I never would have found otherwise.

For the stories I care about most, or the ones that are really complex, I want to be able to jump in and see many different perspectives. So let's talk about our second goal for Google News: understanding the full story. Today it takes a lot of work to broaden your point of view and understand a news story in depth; with Google News, we set out to make that effortless. Full Coverage is an invitation to learn more: it gives a complete picture of a story in terms of how it's being reported, from a variety of sources and in a variety of formats.
We assemble Full Coverage using a technique we call temporal co-locality. This technique enables us to map relationships between entities and understand the people, places, and things in a story right as it evolves. We apply this to the deluge of information published to the web at any given moment, and then organize it around storylines, all in real time. This is by far the most powerful feature of the app, and it provides a whole new way to dig into the news.
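As a rough sketch of what "temporal co-locality" could mean in practice (our illustrative reading, not Google's published algorithm), imagine grouping articles into a storyline when they share enough entities and were published close together in time. All the names and thresholds below are hypothetical.

```kotlin
import kotlin.math.abs

// Illustrative storyline clustering: articles that mention overlapping
// entities within a time window get grouped together. Toy logic only.
data class Article(val title: String, val entities: Set<String>, val publishedHour: Long)

fun sameStory(a: Article, b: Article, maxGapHours: Long = 48, minShared: Int = 2): Boolean =
    (a.entities intersect b.entities).size >= minShared &&
        abs(a.publishedHour - b.publishedHour) <= maxGapHours

fun storylines(articles: List<Article>): List<List<Article>> {
    val clusters = mutableListOf<MutableList<Article>>()
    for (article in articles) {
        // Greedy assignment: join the first cluster containing a matching article.
        val home = clusters.firstOrNull { cluster -> cluster.any { sameStory(it, article) } }
        home?.add(article) ?: clusters.add(mutableListOf(article))
    }
    return clusters
}
```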
Take a look at how Full Coverage works with the recent power outage in Puerto Rico. There were so many questions I had about this story: how did we get here, could it have been prevented, and are things actually getting better? We built Full Coverage to help make sense of it all, in one place. We start out with a set of top headlines that tell me what happened, and then start to organize around the key story aspects using our real-time event understanding. For news events like this one that have played out over weeks and months, you can understand the origin of developments by looking at our timeline of the key moments; and while the recovery has begun, we can clearly see there's still a long way to go. There are also certain questions we're all asking about a story, and we pull those out so you don't have to hunt for the answers. We know context and perspective come from many places, so we show you tweets from relevant voices, opinion, analysis, and fact checks to help you understand the story one level deeper. In each case, our AI is highlighting why this is an important piece of information and what unique value it brings. When I use Full Coverage, I find I can build a huge amount of knowledge on the topics I care about; it's a true 360-degree view that goes well beyond scanning a few headlines. On top of this, our research shows that having a productive conversation or debate requires everyone to have access to the same information, which is why everyone sees the same content in Full Coverage for a topic: an unfiltered view of events from a range of trusted news sources.

I have to say, I love these new features, and these are just a few of the things that make the new Google News so exciting. But as we mentioned before, none of this would exist without the great journalism newsrooms produce every day, which brings us to our final goal: helping you enjoy and support the news sources you love. We've put publishers front and center throughout the app, and here in the Newsstand section it's easy to find and follow the sources I already love, and to browse and discover new ones, including over 1,000 magazine titles like Wired, National Geographic, and People, which all look great on my phone. I can follow publications like USA Today by directly tapping the star icon, and if there's a publication I want to subscribe to, say The Washington Post, we make it dead simple: no more forms, credit card numbers, or new passwords. Because you're signed in with your Google account, you're set. When you subscribe to a publisher, we think you should have easy access to your content everywhere, and this is why we developed Subscribe with Google. Subscribe with Google enables you to use your Google account to access your paid content everywhere, across all platforms and devices: on Google Search, Google News, and publishers' own sites. We built this in collaboration with over 60 publishers around the world, and it will be rolling out in the coming weeks. This is one of the many steps we're taking to make it easier to access dependable, high-quality information when and where it matters most.

So that's the new Google News. It helps you keep up with the news you care about with your briefing and newscasts, understand the full story using Full Coverage, and enjoy and support the news sources you love by reading, following, and subscribing. And now for the best news of all: we're rolling out on Android, iOS, and the web in 127 countries starting today, and it will be available to everyone next week. At Google, we know that getting accurate and timely information into people's hands, and building and supporting high-quality journalism, is more important than it ever has been, and we are totally committed to doing our part. We can't wait to continue on this journey with you. And now I'm excited to introduce Dave to tell you more about what's going on in Android.

Android started with a simple goal: bringing open standards to the mobile industry. Today it is the most popular mobile operating system in the world. If you believe in openness, if you believe in choice, if you believe in innovation from everyone, then welcome to Android.

Hi, everyone. It's great to be here at Google I/O 2018. It was ten years ago that we launched the first Android phone, the T-Mobile G1, with a simple but bold idea: to build a mobile platform that was free and open to everyone. Today that idea is thriving. Our partners have launched tens of thousands of smartphones, used by billions of people all around the world. Through this journey, we've seen Android become more than just a smartphone operating system, powering new categories of computing including wearables, TV, auto, AR, VR, and IoT. The growth of Android over the last ten years has helped fuel the shift in computing from desktop to mobile, and as Sundar mentioned, the world is now on the precipice of another shift: AI is going to profoundly change industries like healthcare and transport, and it's already starting to change ours. This brings me to the new version of Android we're working on: Android P. Android P is an important first step towards this vision of AI at the core of the operating system. In fact, AI underpins the first of three themes in this release, which are intelligence, simplicity, and digital wellbeing.

Starting with intelligence: we believe smartphones should be smarter. They should learn from you, and they should adapt to you. Technologies such as on-device machine learning can learn your usage patterns and automatically anticipate your next actions, saving you time. And because it runs on device, the data is kept private to your phone. So let's take a look at some examples of how we're applying these technologies to Android to build a smarter operating system.

In pretty much every survey of smartphone users, you'll see battery life as the top concern; I don't know about you, but this is my version of Maslow's hierarchy of needs. We've all been there: your battery is doing okay, but then you have one of those outlier days where it's draining faster than normal, leaving you running for the charger. With Android P, we partnered with DeepMind to work on a new feature we call Adaptive Battery, designed to give you a more consistent battery experience. Adaptive Battery uses on-device machine learning to figure out which apps you'll use in the next few hours and which you won't use until later, if at all, today. With this understanding, the operating system adapts to your usage patterns, so that it spends battery only on the apps and services that you care about.
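Purely as a conceptual sketch (Android's actual Adaptive Battery uses an on-device DeepMind model, not this heuristic), one can imagine scoring apps by how often they have historically been opened in the coming hours, and deferring background work for everything else. All names here are hypothetical.

```kotlin
// Conceptual sketch only: rank apps by historical launches in the coming
// hours; apps outside the top set get their background work deferred.
data class Launch(val app: String, val hourOfDay: Int)

fun likelySoon(history: List<Launch>, nowHour: Int, windowHours: Int = 3, topN: Int = 5): Set<String> =
    history
        .filter { (it.hourOfDay - nowHour + 24) % 24 < windowHours }
        .groupingBy { it.app }
        .eachCount()
        .entries
        .sortedByDescending { it.value }
        .take(topN)
        .map { it.key }
        .toSet()

fun main() {
    val history = listOf(
        Launch("maps", 8), Launch("maps", 9), Launch("email", 9),
        Launch("music", 22)
    )
    // At 8 a.m., maps and email are worth keeping warm; music is not.
    println(likelySoon(history, nowHour = 8))
}
```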
And the results are really promising: we're seeing a 30% reduction in CPU wakeups for apps in general, and this, combined with other performance improvements, including running background processes on the small CPU cores, is resulting in a battery-life increase for many users. Pretty cool.

Another example of how the OS is adapting to the user is auto-brightness. Most modern smartphones will automatically adjust the brightness given the current lighting conditions, but it's one-size-fits-all: they don't take into account your personal preferences and environment. So often what happens is you then need to manually adjust the brightness slider, with the screen later becoming too bright or too dim. With Android P, we're introducing a new on-device machine learning feature we call Adaptive Brightness. Adaptive Brightness learns how you like to set the brightness slider given the ambient lighting, and then does it for you, in a power-efficient way. You literally see the brightness slider move as the phone adapts to your preferences, and it's extremely effective: in fact, we're seeing almost half of our test users make fewer manual brightness adjustments compared to any previous version of Android.

We're also making the UI more intelligent. Last year we introduced the concept of predicted apps, a feature that places the next apps the OS anticipates you'll need on the path you'd normally follow to launch them, and it's very effective, with an almost 60% prediction rate. With Android P, we're going beyond simply predicting the next app to launch, to predicting the next action you want to take. We call this feature App Actions. So let's take a look at how it works. At the top of the launcher, you can see two actions: one to call my sister Fiona, and another to start a workout on Strava for my evening run. What's happening here is that the actions are being predicted based on my usage patterns; the phone is adapting to me and trying to help me get to my next task more quickly. As another example, if I connect my headphones, Android will surface an action to resume the album I was listening to. To support App Actions, developers just need to add an actions XML file to their app, and then actions surface not just in the launcher but in smart text selection, the Play Store, Google Search, and the Assistant. Take Google Search: we're experimenting with different ways to surface actions for apps you've installed and use a lot. For example, I'm a big Fandango user, so when I search for the new Avengers movie, Infinity War, in addition to the regular suggestions I get an action into the Fandango app to buy tickets. Pretty cool.

Actions are a simple but powerful idea for providing deep links into an app, given your context. But even more powerful is bringing part of the app UI to the user right where they are, and we call this feature Slices. Slices are a new API for developers to define interactive snippets of their app's UI that can be surfaced in different places in the OS. In Android P, we're laying the groundwork by showing Slices first in Search. So let's take a look. Let's say I'm out and about and I need to get a ride to work. If I type "Lyft" into the Google Search app, I now see a slice from the Lyft app installed on my phone. Lyft is using the Slice API's rich array of UI templates to render a slice of their app in the context of Search, so Lyft is able to give me the price for my trip to work, and the slice is interactive, so I can order the ride directly from it. Pretty nice.
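For a feel of the developer surface, here is a minimal sketch of a SliceProvider against the androidx.slice preview APIs. The builder signatures below approximate the early Jetpack previews, which were still changing at the time, and RideActivity, the icon resource, and the fare text are all hypothetical; this is not Lyft's actual code.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

// Minimal Slice sketch (androidx.slice preview APIs; signatures approximate).
class RideSliceProvider : SliceProvider() {
    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        val pendingIntent = PendingIntent.getActivity(
            context, 0, Intent(context, RideActivity::class.java), 0) // hypothetical activity
        val primaryAction = SliceAction.create(
            pendingIntent,
            IconCompat.createWithResource(context, R.drawable.ic_car), // hypothetical icon
            ListBuilder.ICON_IMAGE,
            "Order ride")
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Work: $18 estimated") // hypothetical fare
                    .setSubtitle("Pickup in 4 min")
                    .setPrimaryAction(primaryAction))
            .build()
    }
}
```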
The Slice templates are versatile, so developers can offer everything from playing a video to checking into a hotel. As another example, if I search for Hawaii, I'll see a slice from Google Photos with my vacation pictures. We're working with some amazing partners on App Actions and Slices, and we'll be opening an early access program to developers more broadly next month. We're excited to see how Actions, and in particular Slices, will enable a dynamic, two-way experience, where an app's UI can intelligently show up in context.

So that's some of the ways we're making Android more intelligent, by teaching the operating system to adapt to the user. Machine learning is a powerful tool, but it can also be intimidating and costly for developers to learn and apply, and we want to make these tools accessible and easy to use for those who have little or no expertise in machine learning. So today I'm really excited to announce ML Kit, a new set of APIs available through Firebase. With ML Kit, you get on-device APIs for text recognition, face detection, image labeling, and a lot more. ML Kit also supports the ability to tap into Google's cloud-based ML technologies. Architecturally, you can think of ML Kit as providing ready-to-use models built on TensorFlow Lite and optimized for mobile, and best of all, ML Kit is cross-platform: it runs on both Android and iOS. We're working with an early set of partners on ML Kit, with some really great results. For example, the popular calorie-counting app Lose It! is using our text recognition model to scan nutritional information, and ML Kit's custom model APIs to automatically classify 200 different foods through the camera. You'll hear more about ML Kit at the developer keynote later today.
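Here is roughly what the on-device text recognition path looks like with the ML Kit APIs announced here, as of the 2018 Firebase preview (exact names may differ across releases; the function wrapper and log tags are ours).

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run ML Kit's on-device text recognizer over a bitmap, e.g. a
// photo of a nutrition label, and log each recognized text block.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            for (block in result.textBlocks) {
                Log.d("MLKit", "Recognized: ${block.text}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```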
So we're excited about making your smartphone more intelligent, but it's also important to us that the technology fades into the background. One of our key goals over the last few years has been to evolve Android's UI to be simpler and more approachable, both for the current set of users and for the next billion Android users. With Android P, we put a special emphasis on simplicity by addressing many pain points where we thought, and you told us, the experience was more complicated than it ought to be. You'll find these improvements on any device that adopts Google's version of the Android UI, such as Google Pixel and Android One devices. So let me walk you through a few live demos on my phone; what could possibly go wrong, in front of 7,000 people in an amphitheater?

As part of Android P, we're introducing a new system navigation that we've been working on for more than a year now. The new design makes Android's multitasking more approachable and easier to understand. The first striking thing you'll notice is the single, clean home button. The design recognizes the trend towards smaller screen bezels and places an emphasis on gestures over multiple buttons at the edge of the screen. So when I swipe up, I'm immediately brought to the Overview, where I can resume apps I've recently used; I also get five predicted apps at the bottom of the screen to save me time. If I continue to swipe up, or swipe up a second time, I get to All Apps. Architecturally, what we've done is combine the All Apps and Overview spaces into one. The swipe-up gesture works from anywhere, no matter what app I'm in, so I can quickly get back to All Apps and Overview without losing context. And if you prefer, you can also use the Quick Scrub gesture, sliding the home button sideways to scroll through your recent apps, like so.

One of the nice things about the larger, horizontal Overview is that the app content is now glanceable, so you can easily refer back to information in a previous app. Even more, we've extended smart text selection to work in Overview: for example, if I tap anywhere on the phrase "The Killers," the whole phrase is selected for me, and I get an action to listen to them on Spotify, like so. We've extended smart text selection's neural network to recognize more entities, like sports teams, music artists, flight codes, and more. I've been using this new navigation system for the last month and I absolutely love it; it's a much faster, more powerful way to multitask on the go.

Changing how our navigation works is a pretty big deal, but sometimes small changes can make a big difference too. Take volume control; we've all been there: you try to turn down the volume before a video starts, but instead you turn down the ringer volume, and then the video blasts everyone around you. So how are we fixing it? You can see the new simplified volume controls here: they're vertical and located next to the hardware buttons, so they're intuitive. The key difference is that the slider now adjusts the media volume by default, because that's the thing you want to change most often; for the ringer volume, all you really care about is on or silent, like so.

We've also greatly simplified rotation. If you're like me and hate your device rotating at the wrong time, you'll love this feature. Right now I'm in locked rotation mode, and if I launch an app, you'll notice that when I rotate the device, a new rotation button appears on the nav bar; I can just tap on it and rotate under my own control.

All right, so that's a quick tour of some of the ways we've simplified the user experience in Android P, and there's lots more: everything from a redesigned work profile, to better screenshots, to improved notifications management. Speaking of notifications management, we want to give you more control over demands on your attention, and this highlights a concept Sundar alluded to earlier: making it easier to move between your digital life and your real life. To tell you more about this important area, and our third theme, let me hand over to Sameer.

Thanks, Dave. Hi, everyone. On a recent family vacation, my partner asked if she could see my phone right after we got to our hotel room. She took it from me, walked over to the hotel safe, locked it inside, turned, looked me right in the eye, and said, "You get this back in seven days, when we leave." Whoa. I was shocked; I was kind of angry. But after a few hours, something pretty cool happened: without all the distractions from my phone, I was actually able to disconnect, be fully present, and I ended up having a wonderful family vacation. But it's not just me. Our team has heard so many stories from people who are trying to find the right balance with technology. As you heard from Sundar, helping people with their digital wellbeing is more important to us than ever. People tell us a lot of the time they spend on their phone is really useful, but some of it they wish they'd spent on other things. In fact, we found over 70% of people want more help striking this balance. So we've been working hard to add key capabilities right into Android to help people find the balance with technology that they're looking for. One of the first things we focused on was helping you understand your habits.
Android P will show you a dashboard of how you're spending time on your device. As you saw earlier, you can see how much time you've spent in apps, how many times you've unlocked your device today, and how many notifications you've received, and you can drill down on any of these. For example, here's my Gmail data from Saturday, and when I saw this, it did make me wonder whether I should have been on my email all weekend. But that's kind of the point of the dashboard.

Now, how much you're engaging is one part of understanding, but what you're engaging with in apps is equally important. It's like watching TV: catching up on your favorite shows at the end of a long day can feel pretty good, but watching an infomercial might leave you wondering why you didn't do something else instead. Many developers call this concept meaningful engagement, and we've been working closely with many of our developer partners who share the goal of helping people use technology in healthy ways. So in Android P, developers can link to more detailed breakdowns of how you're spending time in their app from this new dashboard. For example, YouTube will be adding a deep link where you can see total watch time across mobile and desktop, and access many of the helpful tools that Sundar shared earlier.

Understanding is a good start, but Android P also gives you controls to help you manage how and when you spend time on your phone. Maybe you have an app that you love, but you're spending more time in it than you realized. Android P lets you set time limits on apps, and it will nudge you when you're close to your limit that it's time to do something else. And for the rest of the day, that app icon is grayed out, to remind you of your goal.

People have also told us they struggle to be fully present for the dinner they're at, or the meeting they're attending, because the notifications they get on their device can be distracting and too tempting to resist; come on, we've all been there. So we're making improvements to Do Not Disturb mode to silence not just the phone calls and texts, but also the visual interruptions that pop up on your screen. To make Do Not Disturb even easier to use, we've created a new gesture that we've affectionately codenamed Shush: if you turn your phone over on the table, it automatically enters Do Not Disturb, so you can focus on being present. No pings, vibrations, or other distractions. Of course, in an emergency, we all want to make sure we're still reachable by the key people in our lives, like your partner or your child's school, so Android P will help you set up a list of contacts who can always get through to you with a phone call, even when Do Not Disturb is turned on.

Finally, we heard from people that they often check their phone right before going to bed, and before you know it, an hour or two has slipped by; honestly, this happens to me at least once a week. Getting a good night's sleep is critical, and technology should help you with this, not prevent it from happening. So we created Wind Down mode. You can tell the Google Assistant what time you aim to go to bed, and when that time arrives, it will switch on Do Not Disturb and fade the screen to grayscale, which is far less stimulating for the brain and can help you set the phone down. It's such a simple idea, but I've found it's amazing how quickly I put my phone away when all my apps go back to the days before color TV. Don't worry: all the colors return in the morning when you wake up.

Okay, that was a quick tour of some of the digital wellbeing features we're bringing to Android P this fall, starting with Google Pixel.
digital wellbeing is gonna be a long-term theme for us so look for much more to come in the future beyond the three themes of intelligence simplicity and digital wellbeing that Dave and I talked about there are literally hundreds of other improvements coming in Android P I'm especially excited about the security advancements we've added to the platform and you can learn more about them at the Android security session on Thursday but your big question is that's all great how do I try some of this stuff well today we're announcing Android P beta and thanks to our efforts in Android Oreo to make OS upgrades easier Android P beta is available today on Google Pixel and on flagship devices from seven more manufacturers you can head over to this link to find out how to receive the beta on your device and please do let us know what you think okay that's a wrap on what's new in Android and now I'd like to introduce Jen to talk about Maps thank you it has changed I'm sure so much and you can actually be part of it being armed with the knowledge of where you're going you can get around like anybody else can two consecutive earthquakes hit Mexico City and Google Maps helped us respond to emergency crises like this the hurricane had turned Houston into islands and the roads were changing constantly we kept saying thank God for Google like what would we have done it's really cool that this is helping people to keep doing what they love doing and keep doing what they need to do building technology to help people in the real world every day has been core to who we are and what we've focused on at Google from the very start recent advancements in AI and computer vision have allowed us to dramatically improve long-standing products like Google Maps and have also made possible brand-new products like Google Lens let's start with Google Maps maps was built to assist everyone wherever they are in the world we've mapped over 220 countries and territories and put hundreds of millions of businesses and places on the map and in doing so we've given more than a billion people the ability to travel the world with the confidence that they won't get lost along the way but we're far from done we've been making maps smarter and more detailed as advancements in AI have accelerated we're now able to automatically add new addresses businesses and buildings that we extract from Street View and satellite imagery directly to the map this is critical in rural areas in places without formal addresses and in fast-changing cities like Lagos here where we've literally changed the face of the map in the last few years hello Nigeria we can also tell you if the business you're looking for is open how busy it is what the wait time is and even how long people usually spend there we can tell you before you leave whether parking is going to be easy or difficult and we can help you find it and we can now give you different routes based on your mode of transportation whether you're riding a motorbike or driving a car and by understanding how different types of vehicles move at different speeds we can make more accurate traffic predictions for everyone
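To make the vehicle-speed point concrete, here is a toy Kotlin sketch that turns the same road segments into different travel-time estimates per mode; every mode, speed, and factor below is invented for illustration, and the production predictions are learned from real traffic:

```kotlin
// Toy mode-aware ETA: the same segments yield different estimates
// depending on how that vehicle class typically moves. All numbers
// here are invented for the example.
enum class Mode(val speedFactor: Double) {
    CAR(1.0),        // baseline
    MOTORBIKE(1.15), // filters through congestion on narrow streets
    TRUCK(0.8)       // slower acceleration, lower limits
}

data class Segment(val lengthMeters: Double, val typicalCarSpeedMps: Double)

fun etaSeconds(route: List<Segment>, mode: Mode): Double =
    route.sumOf { it.lengthMeters / (it.typicalCarSpeedMps * mode.speedFactor) }

fun main() {
    val route = listOf(Segment(1200.0, 8.0), Segment(600.0, 4.5))
    Mode.values().forEach { m ->
        println("$m: ${"%.0f".format(etaSeconds(route, m))} s")
    }
}
```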
but we've only scratched the surface of what maps can do we originally designed maps to help you understand where you are and to help you get from here to there but over the past few years we've seen our users demand more and more of maps they're bringing us harder and more complex questions about the world around them and they're trying to get more done today users aren't just asking for the fastest route to a place they also want to know what's happening around them what the new places to try are and what locals love in their neighborhood the world is filled with amazing experiences like cheering for your favorite team at a sports bar or a night out with friends or family at a cozy neighborhood bistro we want to make it easy for you to explore and experience more of what the world has to offer we've been working hard on an updated version of Google Maps that keeps you in the know on what's new and trending in the areas you care about and helps you find the best place for you based on your context and interests let me give you a few examples of what this is gonna look like with some help from Sophia first we're adding a new tab to maps called for you it's designed to tell you what you need to know about the neighborhoods you care about new places that are opening what's trending now and personal recommendations here I'm being told about a cafe that just opened in my area if we scroll down I see a list of the restaurants that are trending this week this is super useful because with zero work maps is giving me ideas to kick me out of my rut and inspire me to try something new but how do I know if a place is really right for me have you ever had the experience of looking at lots of places all with four star ratings and you're pretty sure there's some you're gonna like a lot and others that maybe aren't quite so great but you're not sure how to tell which ones we've created a score called your match to help you find more places that you'll love your match uses machine learning to combine what Google knows about hundreds of millions of places with the information that I've added restaurants I've rated cuisines I've liked and places that I've been to if you click into the match number you'll see reasons explaining why it's recommended just for you it's your personal score for places and our early testers are telling us that they love it now you can confidently pick the places that are best for you whether you're planning ahead or are on the go and need to make a quick decision right now thanks so much Sophia the for you tab and the your match score are great examples of how we can help you stay in the know and choose places with confidence
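Google has not published how the your match score is computed, but the description above suggests blending crowd signals with personal ones. A toy Kotlin sketch of such a blend, with every weight and feature invented for illustration:

```kotlin
// Toy "match score": blend a place's overall rating with personal
// signals (cuisines you've liked, how you've rated similar places).
// Weights and features are made up; the real score is a learned model
// over far richer signals.
data class Place(val name: String, val rating: Double, val cuisine: String)
data class Profile(
    val likedCuisines: Set<String>,
    val ratingsByCuisine: Map<String, Double>
)

fun matchScore(place: Place, user: Profile): Int {
    val base = place.rating / 5.0                                   // crowd signal
    val affinity = if (place.cuisine in user.likedCuisines) 1.0 else 0.3
    val history = (user.ratingsByCuisine[place.cuisine] ?: 3.0) / 5.0
    val score = 0.4 * base + 0.3 * affinity + 0.3 * history
    return (score * 100).toInt() // displayed as a percentage match
}
```

In a sketch like this, the reasons you would see when clicking into the number correspond to whichever signals contributed most to the blend.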
now another pain point we often hear from our users is that planning with others can be a real challenge so we wanted to make it easier to pick a place together here's how long press on any place to add it to a shortlist now I'm always up for ramen but I know my friends have lots of opinions of their own so I can add some more options to give them some choices when you've collected enough places that you like share the list with your friends to get their input too you can easily share with just a couple of taps on any platform that you prefer then my friends can add more places if they want to or just vote with one simple click so we can quickly choose a group favorite so now instead of copying and pasting a bunch of links and sending texts back and forth decisions can be quick easy and fun this is just a glimpse of some of what's coming to maps on both Android and iOS later this summer and we see this as just the beginning of what maps can do to help you make better decisions on the go and to experience the world in new ways from your local neighborhood to the far-flung corners of the world this discovery experience wouldn't be possible without small businesses because when we help people discover new places we're also helping local businesses be discovered by new customers these are businesses like the bakery in your neighborhood or the barber shop around the corner these businesses are the fabric of our communities and we're deeply committed to helping them succeed with Google every month we connect users to businesses nearby more than 9 billion times including over a billion phone calls and three billion direction requests to their stores in the last few months we've been adding even more tools for local businesses to communicate and engage with their customers in meaningful ways you can now see daily posts on events or offers from many of your favorite businesses and soon you'll be able to get updates from them in the new for you stream too and when you're ready you can easily book an appointment or place an order with just one click we're always inspired to see how technology brings opportunities to everyone the reason we've invested over the last 13 years in mapping every road every building and every business is because it matters when we map the world communities come alive and opportunities arise in places we never would have thought possible and as computing evolves we're gonna keep challenging ourselves to think about new ways that we can help you get things done in the real world I'd like to invite Aparna to the stage to share how we're doing this both in Google Maps and beyond the cameras in our smartphones they connect us to the world around us in a very immediate way they help us save a moment capture memories and communicate but with advances in AI and computer vision that you heard Sundar talk about we said what if the camera can do more what if the camera can help us answer questions questions like where am I going or what's that in front of me let me paint a familiar picture you exit the subway you're already running late for an appointment or a tech company conference that happens and then your phone says head south on Market Street so what do you do one problem you have no idea which way is south so you look down at the phone you're looking at that blue dot on the map and you're starting to walk to see if it's moving in the same direction if it's not you're turning around we've all been there so we asked ourselves well what if the camera can help us here our teams have been working really hard to combine the power of the camera and computer vision with Street View and maps to reimagine walking navigation so here's how it could look in Google Maps let's take a look you open the camera and instantly you know where you are no futzing with the phone all the information on the map the street names the directions right there in front of you notice that you also see the map so that way you stay oriented you can start to see nearby places so you see what's around you and just for fun our team's been playing with an idea of adding a helpful guide like that there so that it can show you the way oh there she goes pretty cool now enabling these kinds of experiences GPS alone doesn't cut it so that's why we've been working on what we call VPS visual positioning system that can estimate precise positioning and orientation one way to think about the key insight here is that just like you and I when we are in an unfamiliar place we look for visual landmarks the storefronts the building facades and so on and it's the same idea VPS uses the visual features in the environment to do the same so that way we help you figure out exactly where you are and get you exactly where you need to go
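As a rough picture of what VPS-style localization involves: treat what the camera sees as a set of visual features and pick the stored location whose landmark features it shares the most. The Kotlin sketch below is deliberately minimal; a real system would use learned descriptors, approximate nearest-neighbor search over a Street View scale index, and geometric verification of the match:

```kotlin
// Bare-bones illustration of visual positioning: describe the camera
// view as a bag of feature ids, then return the mapped location with
// the largest overlap. Feature ids and locations here are invented.
data class MappedLocation(val name: String, val features: Set<Int>)

fun localize(query: Set<Int>, index: List<MappedLocation>): MappedLocation? =
    index.maxByOrNull { loc -> (loc.features intersect query).size }

fun main() {
    val index = listOf(
        MappedLocation("Market St & 4th", setOf(11, 42, 97, 130)),
        MappedLocation("Market St & 5th", setOf(42, 55, 201, 300))
    )
    println(localize(setOf(42, 97, 130, 999), index)?.name) // Market St & 4th
}
```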
pretty cool so that's an example of how we're using the camera to help you in maps but we think the camera can also help you do more with what you see that's why we started working on Google Lens now people are already using it for all sorts of answers especially when the questions are difficult to describe in words answers like oh that cute dog in the park is a labradoodle or this building in Chicago is the Wrigley building and it's 425 feet tall or as my nine-year-old son says these days that's more than 60 Kevin Durants now today Lens is a capability in Google products like Photos and the Assistant but we're very excited that starting next week Lens will be integrated right inside the camera app on the Pixel the new LG G7 and a lot more devices this makes it super easy for you to use Lens on things right in front of you already in the camera very excited to see this now like voice vision is a fundamental shift in computing for us and it's a multi-year journey but we're already making a lot of progress so today I thought I'd show you three new features in Google Lens that can give you more answers to more types of questions more quickly shall we take a look all right okay first Lens can now recognize and understand words words are everywhere if you think about it traffic signs posters restaurant menus business cards but now with smart text selection you can connect the words you see with the answers and actions you need so you can do things like copy and paste from the real world directly into your phone just like that or you can turn a page of words into a page of answers so for example you're looking at a restaurant menu you can quickly tap around and figure out every dish what it looks like what all the ingredients are etcetera by the way as a vegetarian good to know ratatouille is just zucchini and tomatoes really cool now in these examples Lens is not just understanding the shape of characters and letters visually it's actually trying to get at the meaning and the context behind these words and that's where all the language understanding that you heard Scott talk about really comes in handy okay the next feature I want to talk about is called style match and the idea is this sometimes your question is not what's that exact thing instead your question is what are things like it you're at your friend's place you check out this trendy looking lamp and you want to know things that match that style and now Lens can help you or if you see an outfit that catches your eye you can simply open the camera tap on any item and find of course specific information like reviews for any specific item but you can also see and browse around all the things that match that style now there's two parts to this of course Lens has to search through millions and millions of items but we kind of know how to do that search the other part actually complicates things which is that there can be different textures shapes sizes angles lighting conditions etcetera so it's a tough technical problem but we're making a lot of progress here and really excited about it so the last thing I want to tell you about today is how we're making Lens work in real time as you saw in the style match example you open the camera and you start to see Lens proactively surface all the information instantly and it even anchors that information to the things that you see now this kind of thing where it's sifting through billions of words phrases places and things in real time to give you what you need is not possible without machine learning
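For style match, one plausible way to think about the retrieval half is nearest-neighbor search over image embeddings. A hedged Kotlin sketch follows; the embedding model that would absorb the hard parts (the textures, angles, and lighting mentioned above) is assumed to exist and is not shown:

```kotlin
import kotlin.math.sqrt

// Sketch of embedding-based style retrieval: assume a vision model has
// already mapped the tapped item and every catalog item to a vector so
// that similar styles land near each other, then rank by cosine
// similarity. The embeddings themselves are assumed, not computed here.
fun cosine(a: FloatArray, b: FloatArray): Double {
    var dot = 0.0; var na = 0.0; var nb = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]
    }
    return dot / (sqrt(na) * sqrt(nb))
}

fun styleMatches(
    query: FloatArray,
    catalog: Map<String, FloatArray>,
    topK: Int = 5
): List<String> =
    catalog.entries
        .sortedByDescending { cosine(query, it.value) }
        .take(topK)
        .map { it.key }
```

At catalog scale the exhaustive sort above would be replaced by an approximate nearest-neighbor index, but the ranking idea is the same.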
so we are using both on-device intelligence but also tapping into the power of Cloud TPUs which we announced last year at i/o to get this done really excited and over time what we want to do is actually overlay the live results directly on top of things like storefronts street signs or a concert poster so you can simply point your phone at a concert poster of Charlie Puth and the music video just starts to play just like that this is an example of how the camera is not just answering questions but it is putting the answers right where the questions are and it's very exciting so smart text selection style match real-time results all coming to Lens in the next few weeks please check them out so those are some examples of how Google is applying AI and the camera to get things done in the world around you when it comes to applying AI mapping and computer vision to solving problems in the real world well it doesn't get more real than self-driving cars so to tell you all about it please join me in welcoming the CEO of Waymo John Krafcik thank you hello everyone we're so delighted to join our friends at Google onstage here today and while this is my first time at Shoreline it actually isn't the first time for our self-driving cars you see back in 2009 in the parking lot just outside this theater some of the very first tests of self-driving technology took place it was right here where a group of Google engineers roboticists and researchers set out on a crazy mission to prove that cars could actually drive themselves now back then most people thought self-driving cars were nothing more than science fiction but this dedicated team of dreamers believed that self-driving vehicles could make transportation safer easier and more accessible for everyone and so the Google self-driving car project was born now fast forward to 2018 and the Google self-driving car project is now its own independent Alphabet company called Waymo and we've moved well beyond tinkering and research today Waymo is the only company in the world with a fleet of fully self-driving cars with no one in the driver's seat on public roads now members of the public in Phoenix Arizona have already started to experience some of these fully self-driving rides let's have a look hey Denny one of self-driving are you ready the future is it going driving back home it's pretty cool all of these people are part of what we call the Waymo early rider program where members of the public use our self-driving cars in their daily lives over the last year I've had a chance to talk to some of these early riders and their stories are actually pretty inspiring one of our early riders Neha witnessed a tragic accident when she was just a young teen which scared her into never getting her driver's license but now she takes a Waymo to work every day and there's Jim and Barbara who no longer have to worry about losing their ability to get around as they grow older then there's the Jackson family Waymo helps them all navigate their jam-packed schedules taking Kyla and Joseph to and from school practices and meetups with friends so it's not about science fiction when we talk about building self-driving technology these are the people we are building it for in 2018 self-driving cars are already transforming the way they live and move so Phoenix will be the first stop for Waymo's driverless transportation service which is launching later this year
soon everyone will be able to call a Waymo using our app and a fully self-driving car will pull up with no one in the driver's seat to whisk them away to their destination and that's just the beginning because at Waymo we're not just building a better car we're building a better driver and that driver can be used in all kinds of applications ride-hailing logistics personal cars connecting people to public transportation and we see our technology as an enabler for all of these different industries and we intend to partner with lots of different companies to make this self-driving future a reality for everyone now we can enable this future because of the breakthroughs and investments we've made in AI back in those early days Google was perhaps the only company in the world investing in both AI and self-driving technology at the same time so when Google started making major advances in machine learning with speech recognition computer vision image search and more Waymo was in a unique position to benefit for example back in 2013 we were looking for a breakthrough technology to help us with pedestrian detection luckily for us Google was already deploying a new technique called deep learning a type of machine learning that allows you to create neural networks with multiple layers to solve more complex problems so our self-driving engineers teamed up with researchers from the Google Brain team and within a matter of months we reduced the error rate for detecting pedestrians by 100x that's right not a hundred percent but a hundred times and today AI plays an even greater role in our self-driving system unlocking our ability to go truly self-driving now to tell you more about how machine learning makes Waymo the safe and skilled driver that you see on the road today I'd like to introduce you to Dmitri good morning everyone it's great to be here now at Waymo AI touches every part of our system from perception to prediction to decision-making to mapping and so much more now to be a capable and safe driver our cars need a deep semantic understanding of the world around them our vehicles need to understand and classify objects interpret their movements reason about intent and predict what they will do in the future they need to understand how each object interacts with everything else and finally our cars need to use all that information to act in a safe and predictable manner so needless to say there's a lot that goes into building a self-driving car and today I want to tell you about two areas where AI has made a huge impact perception and prediction so first perception detecting and classifying objects is a key part of driving pedestrians in particular pose a unique challenge because they come in all kinds of shapes postures and sizes so for example here's a construction worker peeking out of a manhole with most of his body obscured here's a pedestrian crossing the street concealed by a plank of wood and here we have pedestrians who are dressed in inflatable dinosaur costumes and now we haven't taught our cars about the Jurassic period but we can still classify them correctly we can detect and classify these pedestrians because we apply deep nets to a combination of sensor data traditionally in computer vision neural networks are used just on camera images and video but our cars have a lot more than just cameras we also have lasers to measure the distance and shapes of objects and radars to measure their speed and by applying machine learning to this combination of sensor data we can accurately detect pedestrians in all forms in real time
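To make the sensor-combination point concrete, here is a toy late-fusion sketch in Kotlin. In a real stack each confidence would come from a deep net over camera pixels, lidar points, and radar returns, and the weights would be learned rather than hand-picked:

```kotlin
// Toy late fusion of per-sensor evidence for "is this a pedestrian":
// combine three confidences so no single noisy sensor decides alone.
// All scores, weights, and the threshold are illustrative.
data class SensorScores(val camera: Double, val lidar: Double, val radar: Double)

fun isPedestrian(s: SensorScores, threshold: Double = 0.6): Boolean {
    val fused = 0.5 * s.camera + 0.3 * s.lidar + 0.2 * s.radar
    return fused > threshold
}

fun main() {
    // Camera unsure (say, a pedestrian behind a plank), lidar confident.
    println(isPedestrian(SensorScores(camera = 0.5, lidar = 0.9, radar = 0.6))) // true
}
```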
a second area where machine learning has been incredibly powerful for Waymo is predicting how people will behave on the road now sometimes people do exactly what you expect them to and sometimes they don't take this example of a car running a red light unfortunately we see this kind of thing more often than we'd like but let me break this down from the car's point of view our car is about to proceed straight through an intersection we have a clear green light and cross traffic is stopped with a red light but just as we enter the intersection all the way in the right corner we see a vehicle coming in fast our models understand that this is unusual behavior for a vehicle that should be decelerating so we predict the car will run the red light and we preemptively slow down which you can see here with this red fence and this gives the red light runner room to pass in front of us while it barely avoids hitting another vehicle now we can detect this kind of anomaly because we've trained our ML models using lots of examples today our fleet has self-driven more than 6 million miles on public roads which means we've seen hundreds of millions of real-world interactions to put that in perspective we drive more miles each day than the average American drives in a year
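The red-light-runner reasoning boils down to a kinematic sanity check: if the cross-traffic vehicle can no longer brake to the stop line at a plausible deceleration, it is likely to enter the intersection. A minimal Kotlin sketch with illustrative numbers; the production models are learned from those millions of miles of interactions, not hand-coded like this:

```kotlin
// Sketch of the anomaly logic in the red-light example: a vehicle
// approaching a red light should be decelerating; if its speed can't
// be braked away in the distance remaining, predict it will run the
// light and yield preemptively. Numbers are illustrative only.
data class Track(val speedMps: Double, val distanceToStopLineM: Double)

fun willRunRedLight(t: Track, comfortableDecelMps2: Double = 3.0): Boolean {
    val stoppingDistance = (t.speedMps * t.speedMps) / (2 * comfortableDecelMps2)
    return stoppingDistance > t.distanceToStopLineM
}

fun main() {
    val crossTraffic = Track(speedMps = 15.0, distanceToStopLineM = 20.0)
    if (willRunRedLight(crossTraffic)) {
        println("yield: predicted red-light runner") // slow down, leave room
    }
}
```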
now it takes more than good algorithms to build a self-driving car you also need really powerful infrastructure at Waymo we use the TensorFlow ecosystem and Google's data centers including TPUs to train our neural networks and with TPUs we can now train our nets up to 15 times more efficiently we also use this powerful infrastructure to validate our models in simulation and in this virtual world we're driving the equivalent of twenty five thousand cars all day every day all told we've driven more than five billion miles in simulation and with this kind of scale both in training and validation of our models we can quickly and efficiently teach our cars new skills and one key skill we've started to tackle is self-driving in difficult weather such as snow as you see here and today for the first time I want to show you a behind-the-scenes look at what it's like for our cars to self-drive in snow this is what our car sees before we apply any filtering now driving in a snowstorm can be tough because snowflakes can create a lot of noise for our sensors but when we apply machine learning to this data this is what our car sees we can clearly identify each of these vehicles even through all of the sensor noise and the quicker we can unlock these types of advanced capabilities the quicker we can bring our self-driving cars to more cities around the world and to a city near you I can't wait to make our self-driving cars available to more people moving us closer to a future where roads are safer easier and more accessible for everyone thanks everyone now please join me in welcoming back Jen to close out the morning session thanks Dmitri it's a great reminder of how AI can play a role in helping people in new ways all the time I started at Google as an engineering intern almost 19 years ago and what struck me from almost the very first day I walked in the door was the commitment to push the boundaries of what was possible with technology combined with a deep focus on building products that had a real impact on people's lives and as the years have passed I've seen time and again how technology can play a really transformative role from the earliest days of things like search and maps to new experiences like the Google Assistant as I look at the Google of today I see those same early values alive and well we continue to work hard together with all of you to build products for everyone and products that matter we constantly aspire to raise the bar for ourselves even higher and to contribute to the world and to society in a responsible way now we know that to truly build for everyone we need lots of perspectives in the mix and that's why we've broadened i/o this year to include an even wider range of voices we've invited additional speakers over the next three days to talk to you all about the broader role that technology can play in everything from promoting digital wellbeing to empowering NGOs to achieve their missions along with of course the hundreds of technical talks that you've come to expect from us at i/o and that we hope you can enjoy and learn from as well welcome to i/o 2018 please enjoy and I hope you all find some inspiration in the next few days to keep building good things for everyone thank you