Good morning! Welcome to Google I/O. It's a wonderful day. I think it's warmer this year than last year. I hope you all enjoy it, and thank you for coming. I believe we have more than 7,000 people here today, and we are broadcasting this keynote to many places around the world, so thank you all for coming. We have a lot to tell you today. But before we get started, there is an important issue I need to address first. At the end of last year, a major mistake in one of our core products came to my attention. It turns out that…
…we had put the cheese in the burger emoji in the wrong position. Anyway, we worked hard on it. I never knew so many people would care about where the cheese goes. The irony of the whole thing is that I'm a vegetarian in the first place. We fixed it, and the cheese is now in the right place. While we were looking into this, another thing came to my attention. I don't even want to know how the team explained to me why the foam floats above the beer, but we have restored the laws of physics. Everything is fine, and we can get back to business. We can talk about all the progress since last year's I/O. I think you will all agree that this has been an extraordinary year on many fronts. I believe you can all feel it: we are at an important inflection point in computing. It's exciting to push technology forward, and it makes us even more reflective about our responsibilities. People's expectations of technology differ greatly depending on where you are in the world,
or what opportunities are available to you. For people like me, who grew up without a phone, I can remember clearly how technology can change your life, and we see this in our work around the world. You see people using a smartphone for the first time, and you can feel the strong demand for digital skills. That's why we have focused on bringing digital skills to communities around the world. So far we have trained more than 25 million people, and we expect that number to grow to more than 60 million over the next five years. We know technology can be a force for good, but we are equally clear that we cannot simply be wide-eyed about innovation. The impact of these technological advances, and the role technology plays in our lives, have raised very real and important questions. We all know the road ahead needs to be navigated carefully and deliberately, and we feel a deep sense of responsibility to get this right. This is the spirit at the core of our mission:
to make information more useful, more accessible, and more beneficial to society. I have always felt that we are very fortunate, as a company, to have a timeless mission that is as meaningful today as when we started. And we are excited about how we can rededicate ourselves to that mission, thanks to the progress we are seeing in AI. AI lets us solve problems for our users around the world in entirely new ways. At Google I/O last year, we announced Google AI, the result of our teams' efforts to bring the benefits of AI to everyone. We want AI to work globally, so we have opened AI centers around the world. AI is going to affect many different fields, and today I want to give you a few examples.
Healthcare is one of the most important fields AI will transform. Last year we announced our work on diabetic retinopathy, a leading cause of blindness in people with diabetes. We used deep learning to help doctors diagnose it earlier, and since then we have been running field trials at Aravind Eye Hospital and Sankara hospitals in India. The trials are progressing well. In places like these, well-trained diagnostic experts are in very short supply. It turns out that, looking at the same retinal scan, there are things humans don't know to look for, and our AI systems can offer more insight. In fact, the same eye scan holds information with which we can predict your five-year risk of an adverse cardiovascular event, a heart attack or stroke. To me, what is interesting is that doctors can find more clues in these eye scans, and machine learning systems can provide newer insights. This could be a new, non-invasive way to detect cardiovascular risk. We are working hard on this, and we have just published a research paper.
We will keep working with our partners to conduct field trials. Another area where AI can help is in assisting doctors to predict medical events. It turns out that doctors have to make many difficult decisions, and advance warning matters: for a patient who is about to get much worse, 24 to 48 hours of notice can make an enormous difference in the outcome. Using machine learning systems, we have been working with partners on de-identified medical records. It turns out that if you analyze more than 100,000 data points per patient, far more than any doctor could, we can actually make quantitative predictions of patient readmission 24 to 48 hours earlier than traditional methods. This gives doctors more time to act. We will publish a related paper later today.
We look forward to working with hospitals and medical institutions on this. Another area where AI can help is accessibility, where we can make everyday use cases easier. Take a common scenario: you come home at night and turn on the TV, and you often see two or more people having an animated conversation with each other. Imagine you are hearing-impaired and have to rely on closed captions to understand what is going on. This is what it looks like. (On screen, the caption reads only: "Two men talk to each other.") As you can see, the captions are gibberish, and you can't tell what is happening. So we have a machine learning technology called Looking to Listen. It uses not only audio cues but combines them with visual cues to cleanly separate two voices. Let's see how it works on a YouTube clip. "He is not at Danny Ainge's level…"
(In the clip, two sports commentators talk over each other: "…but he is above that level. In other words, he understands enough…" "You're saying it's fine to lose on purpose…" "…and to fire up the fans…" "It's okay, you're okay." "We have nothing else to say." "We have a lot to say." With Looking to Listen, each speaker's voice is isolated and captioned separately.) You can see how we are making the technology work
to make important everyday use cases better. The great thing about technology is that it keeps evolving. In fact, we can even apply machine learning to a 200-year-old technology, Morse code, and improve the quality of life for some people. Let's take a look. (Video) "Hi, my name is Tania. This is my voice. I enter dots and dashes in Morse code using switches mounted near my head. When I was a young child, I used a communication board: I used a head stick to point at different words. To say the least, this was very limiting. When Morse code came into my life, I felt a sense of liberation and freedom." "See you later." "Love you."
"I think that is why I love skydiving; it is the same feeling. And through skydiving I met the love of my life, and my partner in crime. It has always been very hard to find Morse code devices, and trying to use Morse code is why I had to create my own device. With Ken's help, I have my own voice and more independence in my daily life, but most people aren't as lucky. We hope we can work with the Gboard team to help those who want to explore the freedom that Morse code offers." Gboard is Google's keyboard. We found that among Gboard users there is a whole group of people around the world, and when I say "a whole group of people," I mean tens of millions,
who had never had a keyboard in their own language. Working with Tania, we have built Gboard support for Morse code: an input mode that lets you enter Morse code and outputs text based on predictions and suggestions. I think this is a good example of how machine learning can genuinely help people, doing what an ordinary keyboard without AI could not. "I am very excited to continue this journey. Many people will benefit, and that makes me very excited." This is a very inspiring story, and we are delighted that Tania and Ken are here with us today.
In fact, Tania and Ken are both developers. They worked directly with our team on Gboard's predictive text in the context of Morse code, and I am really happy to say that a beta of Morse code for Gboard will be released later today. It's great to reimagine our products with AI, and Gboard is a great everyday example: we serve users more than 8 billion auto-corrections every day. Another core product we have redesigned with AI is Gmail. We just gave Gmail a fresh new look with its redesign, and I hope you all enjoy using it. Now we are adding another feature to Gmail, called Smart Compose. As the name suggests, we use machine learning to suggest phrases for you as you type; all you have to do is hit Tab to keep auto-completing. In this case, once it understands the subject is "Taco Tuesday," it suggests chips, salsa, and guacamole. It takes care of mundane things like addresses so you don't have to worry about them and can really focus on what you want to say. I have always loved using this feature.
I have been sending a lot more emails to colleagues at the company… though I'm not sure what they think about that. It's really great. This month we will roll out Smart Compose to all users; I hope you enjoy it. Another product we built from scratch using AI is Google Photos. It works very well, and it scales. If you tap on one of your photos, you get what we call the photo browsing experience, where you see one photo at a time. To give you a sense of scale, our users view more than 5 billion photos this way every single day. Those are the moments where we want AI to help, so we are about to launch a new feature called Suggested Actions. It basically suggests small actions, the things you might want to do, based on the context. Say you went to a wedding and you are browsing the wedding photos. We know your friend Lisa is also in the photos, so we suggest sharing these three photos with Lisa,
and with one tap you can send them to her. Everyone who was there worries about getting those photos onto their own phones; I think we can do better. Another example from the same wedding: if a photo is underexposed, our AI system will offer a suggestion to fix the brightness, and with one tap we fix it for you. Or, if you took a photo of a document you want to save for later, we can recognize it and convert the document to PDF, making it much easier for you to use later. We want to make these simple cases delightful. By the way, AI can also deliver unexpected moments. For example, if you have a lovely photo of your child, we can make it even more beautiful: we can drop the background to black and white so the color makes your child pop. Or, if you have a very special memory, some black-and-white photo of your mother or grandmother, we can recreate that moment in color and make it feel more real and special. These features will be rolling out in the coming months
to Google Photos users. The reason we can build these features is that we have made long-term investments in our computing infrastructure. That's why we talked about Tensor Processing Units last year: these are special-purpose machine learning chips, and they are driving all the product improvements you've seen today. We also make them available to all our Cloud customers. We have been hard at work since last year, and today I am happy to announce our next generation, TPU 3.0. These chips are so powerful that, for the first time, we have had to introduce liquid cooling into our data centers.
We arrange these chips in giant pods. Each pod is now eight times more powerful than last year's, well over 100 petaflops of compute. This allows us to build better models, larger models, more accurate models, and helps us solve bigger problems. One of the biggest problems we apply AI to is the Google Assistant. Our vision for the perfect assistant is that it converses naturally, it is there when you need it, and it helps you get things done in the real world. We are working hard to make it better, and we want the Assistant to be someone you can have a natural, comfortable conversation with. To do that, we need to start with the foundation of the Google Assistant: its voice. Today this is how most users interact with their Assistant. Our current voice is codenamed "Holly." She is a real person, and she spent months in our studio;
then we stitched those recordings together to create the voice. Eighteen months ago, our DeepMind team announced a breakthrough called WaveNet. Unlike the previous system, WaveNet actually models the underlying raw audio to create a more natural voice, one closer to how humans speak: pitch, pacing, even the pauses that convey meaning. We want to get all of that exactly right, and we have been working hard on WaveNet. As of today, we are adding six new voices to the Google Assistant. Let's have them say good morning to everyone. (Assistant voices) "Good morning, everyone." "I'm your Google Assistant." "Welcome to Shoreline Amphitheatre." "We hope you enjoy Google I/O." "Back to you, Sundar." Our goal is that one day the Assistant will speak with the right accent, language, and dialect everywhere in the world, and WaveNet brings that goal closer. With this technology, we began to wonder which amazing voices we could ask into the studio. Take a look. (Video) "Couscous: a North African dish of granules made from semolina of hard wheat." "I want a puppy with sweet eyes and a fluffy tail, and I like my haikus."
"Don't we all?" "♪ Happy birthday to you… happy birthday ♪" That was John Legend. He doesn't want to boast, but he will be the best Assistant ever. "Can you tell me where you live?" "You can find me on all your devices: your phone, Google Home, and, if I'm lucky… in your heart." That's right: John Legend's voice is coming to the Assistant. Obviously he didn't sit in the studio answering every question you might ever ask. WaveNet lets us dramatically shorten the studio time needed, and the model actually captures the rich layers of his voice. His voice will be released later this year, in certain contexts, and you will get replies like this: "Good morning, Sundar. It's currently 65 degrees and clear in Mountain View. The forecast is a high of 75 and sunny. At 10 am you have an event: the Google I/O keynote. At 1 pm, you have margaritas. Have a wonderful day." I am very much looking forward to 1 pm. John's voice arrives later this year.
I am really excited that AI is driving this progress, and we are doing much more with the Google Assistant. To tell you about it, let me invite Scott Huffman to the stage. (A montage of real commands plays.) "Hey Google, call McGrady." "OK, dialing now." "Hey Google, book a table for four." "That sounds pretty good." "Hey Google, call my brother." "Hey Google, call my brother." "Texting Carol." "Can you text Carol for me?" "Hey Google, who sent me a text?" "Yo Google…" "Stop!" Kevin is great, but we don't have a "Yo Google" command yet; you have to say "Hey." "Hey Google…" "Hey Google, play some Greek music." "Hey Google, play the next episode." "Play The Crown on Netflix." "All of Channing Tatum's movies." "OK." "Yo Google…" "Stop!" Awesome. Can we stick to "Hey Google" for now? "Hey Google, find my phone." "Looking for it now." Wow! "Hey Google." "Hey Google." "Hey Google." "Yo Google, lock the front door." Stop! OK, let's accept "Yo Google." I am sure the engineers will want to push an update…
so it works with "Yo" too. "Hey, what can I do for you?" Two years ago, right here at I/O, we introduced the Google Assistant. Today the Assistant is available on more than 500 million devices, including phones, speakers, headphones, TVs, watches, and more. It works in more than 40 auto brands and with more than 5,000 connected home devices, from dishwashers to doorbells. People all over the world are using it every day. For example, since we launched the Assistant in India last year, the user response has been incredible: daily usage has tripled since the beginning of the year. By the end of this year, the Assistant will support 30 languages and be available in 80 countries. We have made great progress, but we are just getting started. Today we want to share some important new things with you.
The Assistant is going to become more natural in conversation, more visually assistive, and better at helping you get things done and saving you time. Now, as I mentioned, there are new voices you can choose to make the Google Assistant your own. Making its conversation more natural matters, but to be a great conversational partner, the Assistant needs a deep understanding of the social dynamics of conversation. For example, to tell the truth, it's a bit annoying to have to say "Hey Google" every time I want the Assistant's attention. This grandmother, whom you may have seen on YouTube, definitely feels the same way. (Video) "Googoo! Hey Googoo! Googoo! Hey Googoo! Hey Googoo. Okay Googoo." "She wants to know what the weather will be…" "Tomorrow! Tomorrow!" "Tomorrow in Flagler Beach there will be showers, with a high of 65 and a low of 56." The Assistant got there for her eventually, but it shouldn't be that hard. Starting now, you won't have to say "Hey Google" every time.
Watch this. "Hey Google, did the Warriors win?" "Yes, the Warriors won their game last Sunday, 118 to 92 over the Pelicans." Not bad! "When is the next game?" "The Warriors' next game is tonight at 7:30, when they will play the Pelicans." Great, it will be fun to watch the game tonight. "Hey, remind me to find my Kevin Durant jersey when I get home." "Sure, I'll remind you when you get home." Now, notice that I was able to have a natural back-and-forth conversation with the Assistant, without repeating "Hey Google" for each follow-up request. Even more helpful, when I talk to the Assistant, it understands what I am talking about:
when I said "the next game," it knew I meant the Warriors. We call this Continued Conversation. It has been a top feature request, and you will be able to enable it in the coming weeks. Now suppose the game is about to start. "Hey Google, turn on the TV to the Warriors game and start the popcorn maker." "OK, playing the Golden State Warriors game on YouTube TV, and starting the popcorn maker." "Oh, and can you dim the lights in the living room and in the kitchen?" "Sure, dimming the lights in the living room and in the kitchen." You will notice that in these examples I am asking for more than one thing at a time. That is completely natural for us, but very hard for computers to understand. We call this feature Multiple Actions, and we are rolling it out in the Assistant now.
You might think, "Come on, that's easy! Just look for the word 'and.'" (Laughter) It's not always that simple. Take this sentence: even though it contains the word "and," it isn't two requests, it's actually only one. Whereas here, even though it looks very similar, the Assistant has to break it apart into two requests. In linguistics this is called coordination reduction. Multiple Actions can get even more complicated, so just for fun: "Hey Google, who was the governor of California when Kevin Durant was drafted, and what team drafted him?" "In 2007, the governor was Arnold Schwarzenegger. Kevin Durant was drafted by the Seattle SuperSonics." OK, just in case you were wondering. Next: we have also been working hard on how families talk to the Google Assistant at home. Last fall we launched a family experience for the Google Assistant, offering games, activities, and stories for the whole family, and we keep adding to the library. Families have listened to more than 130,000 hours of children's stories
in the last two months alone. As we continue to improve the family experience, we have heard concerns from many parents, including members of the team with kids: are children learning to be bossy and demanding when they can just say "Hey Google" to ask for whatever they need? This is not a simple thing to address, but we have been working on a feature called Pretty Please. Some parents on the team have been testing it with their families. Have a look. (Video) "Hey Google, talk to me." "Dad, you forgot to say please!" "Okay Google, please tell me a story."
"Thanks for saying please." "Please play Freeze Dance." "What nice manners!" "Please tell me a story." "Thanks for asking so nicely. Once upon a time, there was a quirky walrus…" "Please help me with my homework." "Please…" "Please…" "Please…" "You're very polite." "I know." The Assistant can understand and respond to positive, polite conversation. We have been consulting with families and child development experts, and we plan to offer Pretty Please as an option for families later this year. New voices for your Assistant, Continued Conversation, Multiple Actions, and Pretty Please: AI is helping us make significant progress so everyone can have a more natural conversation with their Assistant. Now I'd like to introduce Lilian, who will share some of the exciting work we are doing to bring voice and visual assistance together. (Lilian Rincon) Thank you, Scott. Good morning, everyone! For the past few years, the Assistant has focused on your spoken conversations with Google. Today we are introducing a new visual canvas for the Google Assistant on screens, combining the simplicity of voice with a rich visual experience. Let me invite Maggie to the stage;
we are going to be switching between a lot of live demos. Let me quickly recap the smart displays we announced at CES in January. We are working with some of the best consumer electronics brands, and today I am happy to announce that the first smart displays will go on sale in July. Today I will show you a few of the ways this new device can make your day easier, by putting the simplicity of voice together with the glanceability of a touchscreen. Let's switch to the live demo. This is Lenovo's smart display. The ambient screen integrates with Google Photos and greets me with photos of my kids, Bella and Hudson; these really are my children. This is the best way to start the day every morning.
Since the device is voice-controlled, a simple command is all it takes to watch video or live TV, which makes it easy to get things done around the house while watching my favorite shows. "Hey Google, let's watch Jimmy Kimmel." "OK, playing Jimmy Kimmel Live on YouTube TV." (Clip) "A funny thing happened. This is my life: I drove my daughter to school this morning…" That's right: with YouTube TV you will be able to watch lots of great shows, from local news to live sports and more, right on the smart display. And of course you can watch all the regular YouTube content,
including how-to videos, music, and original shows like the new series Cobra Kai, which we have been shamelessly binge-watching this week because it's really great. Cooking shows are a great example of combining voice and visuals. Here's another one: Nick and I are always looking for simple recipes for the family. "Hey Google, show me recipes for pizza bombs." "OK, here are some recipes." We can pick that first Tasty recipe; it looks good. You can see the recipe comes right up, and you just tap to start cooking. "OK, here is the Tasty recipe." Watching the video demonstration alongside the spoken instructions can really change how you cook, especially when your hands are full. Thank you, Maggie. We have shown you several ways smart displays can make life at home easier, and there are more: staying in touch with your family through Broadcast and Duo video calling, keeping an eye on your home with smart home devices, checking the morning traffic ahead of time with Google Maps. We have thoughtfully integrated the best of Google, and we are working with developers and partners around the world to bring voice and visuals together for your home devices in new ways. Now, inspired by the smart display experience, we have also been redesigning the Assistant
experience on the screen we carry with us all day: our phones. Let me show you how the Assistant on your phone is becoming more immersive, interactive, and proactive. Let's switch to another live demo. "Hey Google, tell me about Camila Cabello." "According to Wikipedia, Karla Camila Cabello Estrabao is a Cuban-American singer and songwriter." As you can see, we take full advantage of the screen to give you a rich, immersive response. Here's another example. "Turn down the heat." "OK, cooling down the living room." For smart home requests, you can see we bring the controls right to your fingertips. And here is one of my favorites. "Hey Google, order my usual from Starbucks." "Hello, welcome to Starbucks. That's a medium nonfat latte with caramel drizzle. Anything else?" "No, thanks." "Pickup at the usual location?" I just tap yes. "OK, it's on its way. See you soon." That's right! We are excited to announce that we are working with Starbucks, Dunkin' Donuts, Doordash, Domino's, and other partners to bring new food pickup and delivery experiences to the Google Assistant. We have already started rolling out some of these,
with more partners coming soon. Rich, interactive responses to my requests are really helpful, but my ideal assistant should also help proactively. Now, when I launch the Assistant and swipe up, I get a visual snapshot of my day. I see useful suggestions based on the time of day, my location, and even my recent interactions with the Assistant. I also see my reminders and packages, and even my notes and lists, all organized and accessible in one place. I love how these details help me plan, and how easy they are to get to. This new visual experience for the phone is designed with AI at its core. It is coming to Android phones this summer and to iOS later this year.
Sometimes the Assistant can actually be more helpful by showing less. When you are in the car, you should be focused on driving. Suppose I am heading home from work, and Google Maps is showing me the fastest route home in rush-hour traffic. "Hey Google, send Nick my ETA and play some hip hop." "OK, letting Nick know you're 20 minutes away, and check out this hip hop radio on YouTube." It's so convenient to share my ETA with my husband
with a simple voice command. I am thrilled to tell you that this summer, the Assistant will be coming to navigation in Google Maps. Smart displays, phones, and Maps: these give you a sense of how we are making the Google Assistant more visually assistive, knowing when to respond with voice and when to show a rich, interactive experience. And with that, I'll hand it back to Sundar. Thank you all. (Sundar) Thank you, Lilian. It's great to see the progress with our Assistant. As I said earlier, our vision for the Assistant is to help you get things done. It turns out that a big part of getting things done is making a phone call. You might want to book an oil change, call a plumber during the week, or even schedule a haircut appointment. We are working to help users with those tasks, and we want to connect users and businesses in a good way. Businesses actually rely heavily on the phone: even in the United States, 60 percent of small businesses do not have an online booking system. We think AI can help with this. Let's go back to that example: suppose you want to ask Google to schedule a haircut appointment for you
on Tuesday between 10 am and noon. What happens is that the Google Assistant makes the call seamlessly in the background. What you are about to hear is the Google Assistant actually calling a real salon to schedule the appointment for you. Let's listen. (Recording) "Hello, how can I help you?" "Hi, I'm calling to book a women's haircut for a client. I'm looking for something on May 3rd." "Sure, give me one second." "Mm-hmm." "Sure, what time are you looking for?" "At 12 pm." "We do not have a 12 pm available; the closest we have is a 1:15." "Do you have anything between 10 am and 12 pm?" "Depending on what service she would like. What service is she looking for?" "Just a women's haircut for now." "OK, we have a 10 o'clock." "10 am is fine." "OK, what's her first name?" "The first name is Lisa." "OK, perfect. We'll see Lisa at 10 o'clock on May 3rd." "OK, great. Thanks." "Great, have a great day, bye." What you just heard was a real phone conversation.
The amazing thing is that the Assistant can actually understand the nuances of conversation. We have been working on this technology for many years; it's called Google Duplex. It brings together all our investments over the years in natural language understanding, deep learning, and text-to-speech. By the way, when the call is done, the Assistant can send you a confirmation notification saying the appointment has been booked for you. Let me give you another example. Suppose you want to call a restaurant, maybe a small restaurant where online booking isn't easy. This call goes a bit differently than expected.
Let's listen. (Recording) "Hello, how can I help you?" "Hi, I'd like to reserve a table for Wednesday the 7th." "For seven people?" "Um, it's for four people." "Four people? When? Today? Tonight?" "Um, next Wednesday at 6 pm." "Actually, we only take reservations for more than five people. For four people, you can just come in." "How long is the wait usually to be seated?" "For when? Tomorrow? Or the weekend, or…?" "For next Wednesday, um, the 7th." "Oh, we're not too busy then. Four people can just come in, OK?" "I see, thank you." "OK, bye." Again, that was a real phone conversation.
We have many examples of calls like this where the conversation doesn't go as expected, but the Assistant understands the context and the nuance. It knew, in this case, to ask about wait times, and it handled the interaction gracefully. We are still developing this technology, and we are working hard to get it right, for both businesses and users. If we get the experience and expectations right, it will save people time and be valuable to businesses as well. We really want it to work in situations like this: say you are a parent with a busy morning, your child is sick, and you need to call to book a doctor's appointment. There is also a more immediate case where we can launch this sooner. Every day, many people ask Google for a business's opening hours, but things get tricky around holidays, and businesses receive a lot of calls. So we can make the call as Google, update the hours for millions of users, and small businesses don't have to field countless calls. We want it to work well in situations like these and give users a better experience. It will launch as an experiment in the coming weeks, so stay tuned. There is a common theme here:
we are always striving to save our users time. At Google we have always been obsessive about this goal: Search should get users answers quickly and give them what they are looking for. That brings me to another area: digital wellbeing. Based on our research, we know that people feel tethered to their devices; I'm sure this resonates with all of you. There is growing social pressure to respond to everything right away, people are anxious to stay on top of all the information, and everyone suffers from FOMO: the fear of missing out. We think there is an opportunity to do better. We have been talking to users, and some of them introduced us to the idea of JOMO: the joy of missing out. We think we can really help users with digital wellbeing, and we will be working on it continuously, across all our products and platforms, and we need your help. We think we can help in four ways: helping you understand your habits, focus on what matters, switch off when you need to, and, above all, find balance for your family.
Let me give you a few examples. You will hear about these features later, in the context of the upcoming Android release. One of my favorites is the dashboard: on Android, we give you full visibility into how you are spending your time, how much time you spend in each app, how many times you unlocked your phone on a given day, and how many notifications you received. We will really help you deal with this better, and apps can help too. We will launch this first on YouTube: if you opt in to the new feature, it will actually remind you to take a break. For example, if you have been watching YouTube for a while, it will say, "Hey, time to take a break." YouTube will also combine…
If users choose, all their notifications can be combined and delivered as a daily digest. So if you have four notifications, they arrive together once a day. These YouTube features are rolling out this week. We've done a lot of work in this area. Family Link is a great example: we offer tools that help parents manage their children's screen time. I think that's an important piece, and we want to do more. We want children to be equipped to make smart decisions online. We're taking a new approach with a Google-designed program called Be Internet Awesome, which helps kids become safe explorers of the digital world. We want children to be safe, kind, and mindful when they're online, and we're committed to training five million children in the coming year. All the tools you've seen will launch later today on our digital wellbeing site. Another area where we feel a deep responsibility is news. News is core to our mission, and at this moment it's more important than ever to support quality journalism. It's foundational to how democracies work.
I've always loved the news. I grew up in India, and I have vivid memories of waiting for the physical newspaper to arrive. My grandfather used to stand by, and there was a clear pecking order: he got the newspaper first, then my dad, then my brother and me. At the time I was mostly interested in the sports section, but over time I came to love reading the news, and that's still true today. This is a challenging moment for the news industry. We recently launched the Google News Initiative, committing 300 million US dollars over the next three years. We want to work with organizations and journalists to develop innovative products and programs that help the whole news industry. We also have a long-standing product here: Google News. It was actually launched after 9/11. It began as a 20-percent project: one of our engineers wanted to see news from different sources to better understand what was happening. Since then, the volume and diversity of content has only grown. I think today, more than ever, there's more great journalism being published. It's also true that people turn to Google in times of need.
We have a responsibility to deliver that information well. That's why we've rethought our news product. We're using artificial intelligence to bring forward the best of what journalism has to offer. We want to give users quality news sources they trust, and we want to build a product that works for news publishers too. Above all, we want to help people keep up with the topics that interest them, with deeper insight and a fuller perspective. I'm happy to announce that we're launching a new Google News, and Trystan will tell you more about it. Thanks, Sundar. With the new Google News, we set out to help you do three things: first, keep up with the news you care about; second, understand the full story; and finally, enjoy and support the sources you love. After all, without news publishers and the quality journalism they produce, we'd have nothing to show you today. Let's start with how we make it easier to keep up with the news you care about. As soon as I open Google News, right at the top I see a briefing with the top five stories I need to know right now.
As I browse past the briefing, there are more stories selected for me. Our AI is constantly reading the web for you — the millions of articles, videos, podcasts, and comments published every minute — and assembling the key things you need to know. Google News also surfaces local voices and stories from my area. This is the kind of coverage that makes me feel connected to my community. This article from the Chronicle made me want to find out how long it would take to bike across the brand-new Bay Bridge. The best part is that I never had to tell the app that I follow politics, love cycling, or want news about the Bay Area. It works right out of the box. And because we apply techniques such as reinforcement learning throughout the app…
…the more you use it, the better it gets. At any time I can jump in and indicate whether I want to see more or less of a particular publisher or topic. And when I want to see what everyone else is reading, I can switch over to Headlines and check out the latest news, the most widely covered stories around the world. Let's keep going. You can see lots of big, beautiful images that make the app feel really engaging, and it has become a great video experience too. Let's take a look: it brings you all the latest videos from YouTube and across the web. All our design choices focused on keeping the app simple, light, fast, and fun. Our guiding principle was to let the stories speak for themselves.
Pretty cool, right? What you're seeing throughout the app is the new Google Material Theme. We built the whole application with Material Design, an adaptable, unified design system that's uniquely Google. You'll hear more later today about the app, and about how you can use the Material Theme in your own products. We're also excited to introduce a new visual format called Newscasts; you won't see it in any other news app. A newscast is a bit like a preview of a story, making it easier to get a feel for what's going on. Check out this one about a Star Wars movie. We're using the latest developments in natural language understanding to bring everything together — a trailer from Solo, news articles, quotes from the cast, and more — presented in a fresh format that looks great on your phone. Newscasts give me an easy way to get the basics and decide whether I want to go deeper. Sometimes I even discover things I never would have found otherwise. But for the topics I care about most, or for really complex stories, I want to be able to dig in deeply and see different perspectives.
That brings us to the second goal of Google News: understanding the full story. Today it takes real effort to broaden your perspective and go deep on a news story. With Google News, we make that easy. Full Coverage is an invitation to learn more. It presents the complete picture of a story: how it's being reported by a variety of sources and in a variety of formats. We build Full Coverage using a technique called temporal co-locality. It lets us map the relationships between entities — people, places, and things — and understand them correctly as a story evolves. We apply it to the enormous amount of information being published to the web at any given moment, and organize that information around storylines, all in real time. This is by far the most powerful feature of the app, and it offers a whole new way to dig into the news. Let's see how Full Coverage works on the recent blackouts in Puerto Rico. I have a lot of questions about this story.
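Google hasn't published the details of temporal co-locality, but the core idea described here — linking coverage into a storyline when articles mention the same entities close together in time — can be sketched in a few lines. Everything below (the two-entity threshold, the 72-hour window, the toy articles) is an illustrative assumption, not the production algorithm:

```python
from itertools import combinations

def storylines(articles, min_shared=2, window_hours=72):
    """Greedy illustration: link articles whose entity mentions overlap
    within a time window, then take the connected components."""
    parent = list(range(len(articles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(len(articles)), 2):
        a, b = articles[i], articles[j]
        close_in_time = abs(a["hour"] - b["hour"]) <= window_hours
        shared = len(set(a["entities"]) & set(b["entities"]))
        if close_in_time and shared >= min_shared:
            union(i, j)

    groups = {}
    for i in range(len(articles)):
        groups.setdefault(find(i), []).append(articles[i]["id"])
    return sorted(groups.values())

articles = [
    {"id": "a1", "hour": 0,  "entities": {"Puerto Rico", "power grid", "PREPA"}},
    {"id": "a2", "hour": 24, "entities": {"Puerto Rico", "power grid", "FEMA"}},
    {"id": "a3", "hour": 30, "entities": {"Solo", "Star Wars", "box office"}},
]
print(storylines(articles))  # a1 and a2 share two entities within the window
```

A real system would of course extract entities with NLP and weight them, but the grouping step has the same shape.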
"How did we get here?" "Could it have been prevented?" "Is it getting better?" We built Full Coverage to help you understand all of this in one place. We start with a set of headlines that tell me what's happening right now. Then, using our real-time understanding of events, we organize the key stories. For a story like this one, which has been unfolding for weeks and months, you can trace how the event began just by looking at our timeline of key moments. At this early stage of the recovery, it's clear there's still a long way to go. We all have questions about a story like this, so we surface them, along with the answers, so you don't have to go hunting. We know that context and opinion come from many different places, so we show you relevant tweets, voices and opinions, analysis, and fact checks to help you understand the story more deeply. In each case, our AI highlights why a given piece of information is important…
…and what unique value it adds. When I use Full Coverage, I find I can build real knowledge about the topics I care about. It's a true 360-degree view, far beyond what I'd get from scanning headlines. Most important of all, our research shows that a productive conversation or debate requires everyone to have access to the same information. That's why everyone sees the same content in Full Coverage for a given topic. It's an unfiltered view of events, drawn from a range of trusted news sources. Thank you all. I have to say, I really love these new features, and they're just a small part of what makes the new Google News so exciting. As we mentioned before, none of this would exist without the great news publishers reporting the news every day. That brings us to our final goal: helping you enjoy and support the sources you love. We put publishers front and center throughout the app…
…including in the Newsstand section here. It's easy to find and follow my favorite sources, and to browse and discover new ones, including more than 1,000 magazine titles such as Wired, National Geographic, and People, which look great on my phone. With a tap on the star icon, I can follow USA Today. And if I want to subscribe to a publication, say The Washington Post, we've made it really simple: no forms to fill out, no credit card numbers, no new passwords. Because you're signed in with your Google account, you're already set up. And when you subscribe to a publisher, we think you should be able to read that content anytime, anywhere.
That's why we built Subscribe with Google. It lets you use your Google account to read your paid content anytime, anywhere, across platforms and devices, whether in Google Search, Google News, or on the publisher's own website. We're working with more than 60 news publishers around the world, and this will roll out in the next few weeks. Thank you. It's one of the steps we're taking to make reliable, high-quality information easier for everyone to access in the places that matter most. So that's the brand-new Google News. It helps you keep up with the news you care about with briefings and newscasts, understand the full story with Full Coverage, and enjoy and support your favorite news sources by reading, following, and subscribing. Now, the best news of all: starting today, we're rolling out in 127 countries around the world…
…officially launching on Android, iOS, and the web. Pretty awesome, I think. It will be available to everyone by next week. At Google, we believe that getting accurate, timely information into people's hands, and building and supporting high-quality journalism, matters now more than ever. We're committed to that mission, and we can't wait to continue this journey with you. Now I'm delighted to introduce Dave to tell you more about Android. Android started with a simple goal: bring open standards to the mobile industry. Today it's the most popular mobile operating system in the world. If you believe in openness, if you believe in choice, if you believe in ideas from everyone — welcome to Android. Hi, everyone. I'm thrilled to be here at Google I/O 2018.
Ten years ago, we launched the first Android phone, the T-Mobile G1, with a simple but bold idea: to build a free and open mobile platform. Today that idea is thriving, with our partners having launched tens of thousands of smartphones used by billions of people around the world. Along the way, we've watched Android grow beyond a smartphone operating system to power new categories of computing, including wearables, TVs, cars, AR and VR, and the Internet of Things. Over the past decade, Android has helped drive computing's shift from desktop to mobile. And as Sundar said, the world is now on the cusp of another shift: AI is going to profoundly change industries like healthcare and transportation, and it has already begun changing ours. Which brings us to the new version of Android we've been working on: Android P. Android P is an important first step toward our vision of putting AI at the core of the operating system. In fact, AI underpins the first of the three themes in this release.
Those themes are intelligence, simplicity, and digital wellbeing. Let's start with intelligence. We believe smartphones should be smarter: they should learn from you and adapt to you. Machine learning on the device can learn your usage patterns, automatically anticipate your next action, and save you time. And because it runs on the device, the data stays on your phone. Let's look at some examples of how we've applied these techniques to Android to build a smarter operating system. In every survey of smartphone users, battery life comes out as the top concern.
I don't know about you, but this is my version of Maslow's hierarchy of needs. We've all been there: your battery has been fine, but then there's the occasional day when it drains faster than normal and you're scrambling for a charger. In Android P, we worked with DeepMind to build a new feature we call Adaptive Battery, designed to give you a more consistent battery experience. Adaptive Battery uses on-device machine learning to figure out which apps you'll use in the next few hours and which you won't use until later today, if at all. With that understanding, the operating system adapts to your usage patterns so that battery is spent only on the apps and services you care about. The results are really promising: we're seeing a 30% reduction in CPU wakeups for apps overall.
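Google and DeepMind haven't disclosed the actual model behind Adaptive Battery, but the behavior described — estimating how likely each app is to be opened in the next few hours and restricting background work for the unlikely ones — can be illustrated with a simple frequency count over past usage. The bucket names loosely echo Android P's app standby buckets; the scoring rule, threshold, and sample data below are toy assumptions:

```python
from collections import defaultdict

def usage_profile(events):
    """Count, for each app, how often it was opened in each hour of the day."""
    counts = defaultdict(lambda: defaultdict(int))
    for app, hour in events:
        counts[app][hour] += 1
    return counts

def standby_buckets(profile, now_hour, horizon=3, active_threshold=0.5):
    """Mark apps 'active' (likely used soon, left unrestricted) or
    'rare' (background work restricted) from past usage frequency."""
    buckets = {}
    for app, by_hour in profile.items():
        total = sum(by_hour.values())
        soon = sum(by_hour[(now_hour + h) % 24] for h in range(horizon))
        buckets[app] = "active" if soon / total >= active_threshold else "rare"
    return buckets

# Hypothetical open events as (app, hour-of-day) pairs
events = [("maps", 8), ("maps", 9), ("email", 8), ("email", 14),
          ("game", 21), ("game", 22), ("game", 23)]
profile = usage_profile(events)
print(standby_buckets(profile, now_hour=8))
```

At 8 a.m., the morning apps stay unrestricted while the evening game is deprioritized; the real feature learns far richer patterns, but the adaptation loop has this shape.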
These savings, combined with other performance improvements — including running background processes on the small CPU cores — add up to longer battery life for many users. Really great. Another example of the OS adapting to your habits is automatic brightness. Most smartphones today adjust brightness automatically based on the current lighting conditions, but that's a one-size-fits-all solution that doesn't account for your personal preferences or your environment. So you often end up moving the brightness slider yourself, only for the screen to later become too bright or too dark. In Android P we're introducing a new on-device machine-learning feature called Adaptive Brightness. It learns how you like to set the brightness slider in particular ambient lighting, and then sets it for you, in a power-efficient way. As the phone adapts to your preferences, you'll actually see the brightness slider move on its own. It's really effective: almost half of our test users now make fewer manual brightness adjustments than on any previous version of Android. We're also making the UI smarter. Last year we introduced predicted apps.
With predicted apps, the operating system anticipates what you'll need next and places the likely next app right in your path. It's really effective, with a prediction success rate of nearly 60%. In Android P, we go beyond predicting the next app you'll launch to predicting your next action. We call this feature App Actions. Let's see how it works. At the top of the launcher you can see two actions: first, to call my sister Fiona, and second, to start my evening run in Strava. What's happening here is that the predicted actions are based on my usage patterns. The phone is adapting to my habits and trying to help me get to my next task faster. Another example: when I connect my headphones, Android surfaces an action to resume the album I was listening to.
To support App Actions, developers just need to add an actions.xml file to their app. Actions then show up not only in the launcher, but also in Smart Text Selection, the Play Store, Google Search, and the Assistant. Take Google Search as an example: we're experimenting with different ways to surface actions for the apps you've installed and use. I'm a Fandango fan, so when I searched for the new Avengers movie, Infinity War, I received, in addition to the normal suggestions, an action from the Fandango app to buy tickets. Really great. Actions are a simple but powerful idea: providing deep links into your app based on context. Going further still, we can bring part of an app's user interface right to the user. We call this feature Slices. Slices is a brand-new API that lets developers define interactive snippets of their app's UI, which can be surfaced in different places in the operating system. In Android P, we're laying the groundwork by showing Slices first in Search.
Let's take a look. Suppose I'm out and need a ride to work. If I type "Lyft" into the Google Search app, I see a slice of the Lyft app installed on my phone. Lyft is using the Slices API's rich UI templates to render part of its app right in the context of Search, so it can show me the fare estimate for my ride to work. And Slices are interactive: I can order my ride directly from here. Very cool. Slice templates are versatile; developers can use them for everything from playing a video to booking a hotel. Another example: if I search for "Hawaii", I'll see a slice with my vacation photos from Google Photos. We're working with some great partners on App Actions and Slices, and next month we'll open to developers…
…a broader early-access program. We're really excited to see how Actions, and especially Slices, enable a dynamic two-way experience in which an app's UI can intelligently surface wherever you need it. These are some of the ways we're making Android smarter, teaching the operating system to adapt to the user. Machine learning is a powerful tool, but it can be daunting, and the cost for developers to learn and apply it is high. We want to make these tools accessible and easy to use for developers with little or no machine-learning expertise. So today I'm excited to announce ML Kit, a new set of APIs available to everyone through Firebase. With ML Kit, you get on-device APIs for text recognition, face detection, image labeling, and more. ML Kit also supports tapping into Google's cloud ML technology. Architecturally, you can think of ML Kit as ready-to-use models built on TensorFlow Lite and optimized for mobile. Best of all, ML Kit is cross-platform.
It runs on both Android and iOS. We've been working with early partners on ML Kit and seeing some great results. For example, the very popular calorie-counting app Lose It! uses our text-recognition model to scan nutrition labels, and ML Kit's custom-model API to automatically classify over 200 different foods through the camera. You'll hear much more about ML Kit in the developer keynote later today. So we're really excited about making your smartphone smarter, but it's just as important to us that the technology fades into the background. One of our main goals over the past few years has been to evolve Android's UI to be simpler and more approachable, not just for current users but for the next billion Android users. In Android P, we put a special emphasis on simplicity, addressing many pain points where we thought — or you told us — the experience was more complicated than it needed to be.
You'll find these improvements on all devices that adopt Google's version of the Android UI, such as Google Pixel and Android One devices. Let me show you a few live demos on my phone, in front of 7,000 people in an amphitheatre. What could possibly go wrong? Okay. As part of Android P, we're introducing a brand-new system navigation that we've been working on for more than a year.
The new design makes Android's multitasking more approachable and easier to understand. The first thing that catches your eye is the single, clean home button. The design stems from our recognition that screen bezels are getting smaller, so it emphasizes gestures over buttons and screen edges. When I swipe up, I'm taken straight to the Overview, where I can resume apps I've recently used. At the bottom of the screen I also see five predicted apps, to save me time. Now, if I keep swiping, or swipe up a second time, I get to All Apps. Architecturally, we've merged the All Apps and Overview spaces. The swipe-up gesture works anywhere, in any app, so I can quickly get to All Apps and the Overview without losing the context I'm in. And if you like, you can also use the Quick Scrub gesture…
…sliding the home button left and right to scrub through recently opened apps. One benefit of the larger, horizontal Overview is that app content is now easy to read, so you can glance at information in a previous app. Even better, we've extended Smart Text Selection to the Overview. For example, when I tap anywhere on the phrase "The Killers", the whole name is selected, and I see an action to listen to the band on Spotify. We've extended Smart Text Selection's neural network to recognize more entities: sports teams, musicians, flight numbers, and more. I've been using this new navigation system for the past month, and I really like it. It's a faster, more powerful way to handle multiple tasks. Changing how navigation works is a big deal, but sometimes subtle changes make a big difference too. Take volume controls. Has this ever happened to you: before a video starts, you try to turn the volume down, but you turn down the ringer volume instead, and the video blasts out at full volume for everyone around you?
So how did we fix this? You can see the new, simplified volume controls here. They're vertical, right next to the hardware buttons, which is intuitive. The key difference is that the slider now adjusts media volume by default, because that's the volume you change most often. For the ringer, what you really care about is just on, vibrate, or off. Okay. We've also greatly simplified rotation. If you're like me and hate it when your screen rotates at the wrong moment, you'll love this new feature. I have rotation lock on, and I'll open an app. Notice that when I rotate the device, a new rotation button appears on the navigation bar. With a single tap, I can rotate the screen, and it stays under my control.
Really cool. Okay, that was a quick look at some of the ways we're simplifying the user experience in Android P. There are many more, including a redesigned work profile, better screenshots, improved notification management, and so on. Speaking of notification management: we want to give you more control over what demands your attention. That brings up a concept Sundar hinted at earlier: making it easier to move between your digital life and your real life. To tell you more about this important area, and our third theme, let me hand over to Sameer. Thank you. Hi, everybody. On a recent family vacation, my partner asked if she could see my phone. Right after we got to the hotel room, she took it, walked over to the room safe, locked it inside, turned around, and said, "You'll get your phone back when we leave, seven days from now." Wow! I was shocked, and a little angry. But a few hours later, something wonderful happened.
Without my phone buzzing, I was truly able to disconnect, to have fun and be present in the moment. And I ended up having a wonderful family holiday. It's not just me. Our team has heard stories from so many people who are trying to find the right balance between life and technology. As you heard from what I said just now, helping users with their digital wellbeing matters to us more than ever. Users tell us that much of the time they spend on their phones is genuinely useful, but some of it they wish they'd spent on other things. In fact, we found that over 70% of people want more help striking this balance. So we've been working continuously to add key capabilities directly into Android, to help people find the balance with technology that they're looking for. The first thing we focused on is helping you understand your habits. Android P will show you a dashboard of how you're spending time on your device. As you saw earlier, you can see how much time you spent in apps, how many times you unlocked your device today, and how many notifications you received. And you can drill into any of these. For example, here's my Gmail data from a Saturday.
When I saw that, it really made me question whether I should be doing email on the weekend. That's the point of the dashboard. But the time you spend is only part of the picture; what you're doing in an app matters just as much. It's like watching TV: catching your favorite show at the end of a long day can feel really good, but sitting through a shopping channel might leave you wondering why you weren't doing something else. Many developers call this concept "meaningful engagement". We've been working closely with developers who share our goal of helping users engage with technology in a healthy way. In Android P, developers can deep-link to a more detailed breakdown from this new dashboard, showing how you spend time in their app. For example, YouTube will add a deep link where you can see your total watch time across mobile and desktop, and get to the helpful tools Sundar talked about earlier. Understanding is a good start, but Android P also gives you controls, to help you manage how and when you spend time on your phone. Maybe you have a favorite app, but you're spending more time in it than you realized.
Android P lets you set a time limit on an app. When you're getting close to the limit, it reminds you that it's about time to do something else. And for the rest of the day, the app's icon is grayed out, as a nudge toward your goal. People also tell us they try hard to stay focused at dinner or in a meeting, but notifications on their device distract them, and the temptation is too strong. We've all been there. So we've made improvements to Do Not Disturb mode: it doesn't just silence calls and texts, it also suppresses the visual interruptions that pop up on your screen.
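The app-timer behavior just described boils down to a small policy function. This is only a sketch of the described behavior; the five-minute nudge margin is an assumption for illustration, not Android's actual threshold:

```python
def app_timer_state(used_min, limit_min, nudge_margin=5):
    """Toy policy for the app timer described above:
    fine -> nudge near the limit -> paused (gray icon) once it's hit."""
    if used_min >= limit_min:
        return "paused"   # icon grays out for the rest of the day
    if used_min >= limit_min - nudge_margin:
        return "nudge"    # "about time to do something else"
    return "ok"

for used in (10, 26, 30):
    print(used, app_timer_state(used, limit_min=30))
```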
To make Do Not Disturb even easier to use, we created a new gesture that we affectionately call Shush. If you turn your phone face down on the table, it automatically enters Do Not Disturb, so you can focus on being present: no ringing, vibrating, or other distractions. Of course, we all want to make sure the key people in our lives, like your partner or your child's school, can still reach us in an emergency. Android P lets you set up a list of contacts who can always get a call through to you, even when Do Not Disturb is on. Finally, we've heard from people that they often check their phone before bed, and before they know it, an hour or two has slipped away. Honestly, this happens to me at least once a week. Good sleep matters, and technology should help you get it, not stand in the way. So we created Wind Down mode. You tell the Google Assistant when you'd like to go to bed, and when that time arrives, it switches on Do Not Disturb and fades the screen to grayscale, which is much less stimulating to the brain and can help you put your phone down. It's a very simple idea, but I've found it gets me to put my phone down much faster.
It really works: all your apps go back to the era of black-and-white TV. Don't worry, the color comes back when you wake up in the morning. Okay, that was a quick scan through some of the digital wellbeing features. We'll launch them in Android P this fall, starting with Google Pixel. Digital wellbeing is going to be a long-term theme for us, with more features on the way. Beyond the three themes Dave and I have talked about — intelligence, simplicity, and digital wellbeing — Android P includes hundreds of other improvements. I'm particularly excited about the security measures added to the platform.
You can go to the Android security session on Thursday to learn more about those. But your biggest question is probably: great, how do I try all this? Today we're releasing the Android P Beta. And thanks to the work we did in Android Oreo to make operating system upgrades easier, the Android P Beta is available today on Google Pixel and on flagship devices from seven phone manufacturers. You can go to this link to find out how to get the beta on your device. Please tell us what you think. Okay, that's a wrap on what's new in Android. Now I'd like to introduce Jen to tell you about Maps.
Thank you all. "It has changed Nigeria so much. You can actually be prepared and know where you're going; you'll be able to get there, just like anyone else." "Two consecutive earthquakes hit Mexico City, and Google Maps helped everyone cope with the emergency." "The hurricane turned Houston into an island; the roads kept changing. We kept saying, 'Thank God for Google!'" What do we do? We help people keep doing the things they love, and keep doing the things they need to do. It's really wonderful. Building technology that helps people in the real world, every day, has always been a core value for us. It's who we are.
It's what Google has focused on from the start. Recent advances in AI and computer vision let us dramatically improve long-standing products like Google Maps, and make entirely new products possible, like Google Lens. Let's start with Google Maps. Maps was created to help everyone, no matter where they are in the world. We've mapped more than 220 countries and territories and put hundreds of millions of businesses and places on the map. In doing so, we've given more than a billion people the ability to get around with confidence, without getting lost. And we're not done yet. As AI advances, we keep making Google Maps smarter and more detailed. We can now automatically add new addresses, businesses, and buildings, extracted from Street View and satellite imagery, directly to the map. That matters in rural areas with no formal addresses, and in fast-changing cities like Lagos, here. Over the past few years, we've literally changed the face of the map. Hello, Nigeria! We can also tell you whether the business you're looking for is open, how busy it is, what the wait time is, and even how long people typically spend there. And before you leave, we can tell you…
…whether parking will be easy or hard, and help you plan for it. We can now give you different routes depending on your mode of transportation, whether you're riding a motorbike or driving a car. By understanding how different kinds of vehicles move at different speeds, we can make more accurate traffic predictions for everyone. And we've only scratched the surface of what Google Maps can do. Maps was originally designed to help you understand where you are and get from one place to another. But in the past few years, we've seen users ask more and more of it. People bring us harder, more complex questions about the world around them, and they're trying to get more done. Today our users don't just ask for the fastest route; they also want to know what's happening around them…
…what new places there are to try, and what the locals love in their neighborhood. The world is full of amazing experiences, like cheering on your favorite team at a sports bar, or spending an evening with friends or family at a cozy neighborhood bistro. We want to make it easier for you to explore and experience everything the world has to offer. So we've been working on a new version of Google Maps that keeps you up to date on what's new and trending in the areas you care about, and helps you find the best places based on your context and interests. Let me show you a few examples of what that looks like, with Sophia's help. First, we've added a new tab to the map called For You. It's designed to tell you what you need to know about the neighborhoods you care about: new places opening, what's trending right now, and personal recommendations. Here, I can see a cafe that just opened in my neighborhood.
If we scroll down, I see a list of the restaurants trending this week. This is really useful because, without any work on my part, the map gives me ideas that save me from a rut and inspire me to try something new. But how do I know if a place is really right for me? Have you had this experience? Lots of places all have four-star ratings. You're pretty sure you'd love some of them, and others maybe not so much, but how can you tell them apart? So we created a score called Your Match to help you find more places you'll love. Your Match uses machine learning to combine what Google knows about hundreds of millions of places with the information I've added: the restaurants I've rated, the cuisines I love, and the places I've been. If you tap on the match number, you'll see the reasons it's recommended for you.
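Google hasn't said how Your Match is actually computed beyond "machine learning over place data plus your own ratings and history", so here's a deliberately simplified blend to make the idea concrete. The cuisine-matching heuristic, the 50/50 weighting, and the sample data are all assumptions:

```python
def your_match(place, user_ratings, base_weight=0.5):
    """Toy 'match' score: blend a place's overall rating with how the user
    has rated similar (same-cuisine) places in the past. Returns 0-100."""
    similar = [r for cuisine, r in user_ratings if cuisine == place["cuisine"]]
    personal = sum(similar) / len(similar) if similar else place["rating"]
    blended = base_weight * place["rating"] + (1 - base_weight) * personal
    return round(blended / 5 * 100)

# Hypothetical history: (cuisine, star rating) pairs the user has given
user_ratings = [("ramen", 5), ("ramen", 4.5), ("steakhouse", 2)]
ramen_spot  = {"cuisine": "ramen",      "rating": 4.0}
steakhouse  = {"cuisine": "steakhouse", "rating": 4.0}
print(your_match(ramen_spot, user_ratings))   # boosted by my love of ramen
print(your_match(steakhouse, user_ratings))   # dragged down by past ratings
```

Two places with identical four-star ratings end up with very different match numbers, which is exactly the distinction the feature is meant to surface.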
The Match number is your personal score for these places. Early testers have told us they love it. Now you can confidently pick the places that are best for you, whether you're planning ahead or need to make a quick decision on the go. Thank you, Sofia. The For You tab and the Your Match score are good examples of how we can help you stay in the know and choose places with confidence. Another pain point we hear about all the time is that planning with other people can be a real challenge. We want to make choosing a place together much easier, and here's how it works: long-press on any place to add it to a shortlist. Now, I'd always pick ramen, but I know my friends have plenty of opinions of their own, so I can add a few more options to give them some choices. Once you've collected enough places you like, share the list with your friends and get their input. Sharing takes just a couple of taps, on whatever platform you like. Then my friends can add more places, or vote with a single tap so we can quickly settle on a favorite. No more copying and pasting a bunch of links or texting back and forth; deciding becomes fast, simple, and fun. And this is just a glimpse of what's coming later this summer.
These and other new features will roll out on Android and iOS, and what you've seen is just the beginning of what Google Maps can do to help you make better decisions on the go and experience the world in new ways, from your local neighborhood to the farthest corners of the globe. None of this discovery experience would be possible without small businesses. When we help people discover new places, we're also helping local small businesses get discovered by new customers.
These are businesses like the bakery near your home or the hairdresser on the corner. Small businesses are the fabric of our communities, and we're deeply committed to helping them succeed with Google. Every month we connect users with nearby small businesses more than nine billion times, including over one billion phone calls and three billion requests for directions to their stores. Over the past few months we've added more tools that let local businesses communicate with their customers in meaningful ways. Now you can see many favorite businesses' daily events or offers, and soon you'll be able to see their updates in the new For You stream. When you're ready, you can easily book an appointment or place an order with a single tap. We're always inspired to see how technology creates opportunity for everyone. Over the past 13 years we've invested in mapping every road, every building, and every business, because they matter: communities thrive when we map the entire world, and new opportunities emerge in places we never expected. As computing evolves, we'll keep challenging ourselves to come up with new ways
we can help you get things done in the real world. To tell you how we're doing that in Google Maps and beyond, I'd like to invite Aparna to the stage. The cameras in our smartphones connect us to the world around us in a very immediate way. They help us capture a moment, save a memory, and communicate. With the advances in AI and computer vision, we started asking ourselves: can the camera do more? Can the camera help us answer questions? Questions like "Where am I going?" or "What's in front of me?" Let me give you an example. You step out of the subway, already running late for a date, or a meeting at a certain technology company, and your phone says, "Head south on Market Street." What do you do? One problem: you have no idea which way south is. So you look down at your phone, stare at the blue dot on the map, and start walking to see if it moves in the same direction; if not, you turn around. We've all been there. So we asked ourselves: can the camera help with this? Our team has been working hard to combine the power of the camera and computer vision with Street View and Maps to reimagine walking navigation. Here's what that looks like in Google Maps.
Let's take a look. You open the camera, and you instantly know where you are. No fumbling with the phone; all the map information, the street names, the directions, are right there in front of you. Notice that you also still see the map, so you stay oriented. You can start looking around to see what's nearby, just for fun. And our team has been playing with the idea of adding a helpful guide, like that one there, that can show you the way. Oh, there it is! Very cool. Now, to enable these kinds of experiences, GPS alone isn't enough. So we've been working on VPS, a visual positioning system that can precisely estimate your position and orientation. The key insight is this: just like you and me when we're in an unfamiliar place, it looks for visual landmarks: storefronts, building facades, and so on. VPS uses visual features in the environment to do exactly the same thing. That's how we can figure out precisely where you are and point you exactly where you need to go, which is really cool. That's one example of how the camera can help you with maps. We think the camera can help you do even more with what you see, and that's why we built Google Lens.
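Stepping back to VPS for a moment: the landmark-matching idea can be sketched as a toy descriptor-voting scheme. This is only a caricature; real VPS matches local image features against Street View geometry to solve for full camera pose, whereas this sketch just votes, per query descriptor, for the nearest indexed place. All descriptors and place names here are invented.

```python
from collections import Counter

def nearest_place(query_descs, index, threshold=0.3):
    """Toy visual-positioning step: for each descriptor extracted from the
    camera frame, find the closest indexed landmark descriptor (Euclidean
    distance) and vote for the place it came from. Weak matches are ignored.
    (Real VPS estimates position AND orientation; this only names a place.)"""
    votes = Counter()
    for q in query_descs:
        best_place, best_d = None, float("inf")
        for place, desc in index:
            d = sum((a - b) ** 2 for a, b in zip(q, desc)) ** 0.5
            if d < best_d:
                best_place, best_d = place, d
        if best_d <= threshold:  # discard unreliable matches
            votes[best_place] += 1
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical 3-D descriptors for two storefronts.
index = [("cafe", (0.1, 0.9, 0.2)), ("bakery", (0.8, 0.1, 0.7)),
         ("cafe", (0.2, 0.8, 0.1))]
print(nearest_place([(0.15, 0.85, 0.15), (0.2, 0.9, 0.2)], index))  # → cafe
```

The `threshold` is what keeps the system honest in a visually ambiguous scene: with no confident matches it returns nothing rather than a wrong fix.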
People are already using Lens to get all kinds of answers, especially to questions that are hard to put into words. Like: hey, what's that cute dog in the park? That's a Labradoodle. Or: that building in Chicago is the Wrigley Building, 425 feet tall. Or, as my nine-year-old son would put it, "That's more than 60 Kevin Durants."
Today, Lens is available in Google products like Photos and the Assistant. And we're excited that, starting next week, Lens will be integrated right into the camera app on Pixel, the new LG G7, and more devices. That way you can very easily point Lens at the things right in front of you, already in the camera. We're really excited to see this. At the same time, the underlying computer vision has made a fundamental shift. It's been a multi-year journey, and we've made great progress. Today I want to show you three new features of Google Lens.
They tackle more kinds of questions and get you answers faster. Shall we take a look? OK. First, Lens can now recognize and understand words. Words are everywhere; think of traffic signs, posters, restaurant menus, business cards. Now, with smart text selection, you can connect the words you see with the answers and actions you need. You can do things like copy and paste straight from the real world into your phone. Just like that. Or you can turn a page of words into a page of answers. Say you're looking at a restaurant menu: you can quickly find out about each dish, what it looks like, all of its ingredients, and so on. And by the way, as a vegetarian, I'm happy to learn that this Provençal dish is made of zucchini and tomatoes. Really cool. In these examples, Lens isn't just visually recognizing the shapes of characters and letters; it's actually trying to understand the meaning and context behind the words. This is where the language understanding Scott talked about earlier really comes in handy. The next feature I want to talk about is something we call Style Match. The idea is this: sometimes your question isn't "What exactly is that?" but "What's like this?" You're at a friend's place and spot a stylish lamp, and you want to know what else matches that look. Now Lens can help you. Or if a piece of clothing catches your eye, you can simply open the camera, tap on any item, and get specific details, like reviews, and also browse around to see everything that matches that style. There are, of course, two parts to this. One is searching through millions of items; we know a thing or two about search.
But the other part is that things get complicated in practice: the same item can appear with different textures, shapes, sizes, angles, lighting conditions, and so on. It's a tricky technical problem, but we've made great progress and we're really excited about it. The last thing I want to tell you about today is how we're making Lens work in real time. As you saw in the Style Match example, you open the camera and Lens instantly starts proactively surfacing all the information it can; it can even anchor that information to the things you're looking at. Behind the scenes, it's sifting through billions of words, phrases, places, and things to give you what you need in real time, and none of that would be possible without machine learning.
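One common way to pose "what's like this?" is to map each image to an embedding vector with a neural network and then rank a catalog by similarity to the query embedding; invariance to lighting, angle, and texture is the embedding network's job. The transcript doesn't say this is how Style Match works, so treat the sketch below, with its made-up 3-D "style embeddings," as one plausible shape of the retrieval step only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def style_matches(query, catalog, k=2):
    """Toy Style Match retrieval: rank catalog items by cosine similarity
    of their (hypothetical) style embeddings to the query embedding."""
    ranked = sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

catalog = {
    "mid-century lamp":  (0.9, 0.1, 0.3),
    "industrial lamp":   (0.2, 0.9, 0.4),
    "mid-century chair": (0.8, 0.2, 0.35),
}
print(style_matches((0.85, 0.15, 0.3), catalog))
```

Note that the top matches share a *style* dimension rather than an object class: a query near the "mid-century" region retrieves both a lamp and a chair.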
We're using on-device intelligence and also tapping the power of the Cloud TPUs we announced at I/O last year to make this happen, and we're very excited about it. Over time, what we want to do is overlay the live results directly on the things themselves: the storefront, the street sign, or a concert poster. You could simply point your phone at a poster for Charlie's concert, and the music video starts playing. Just like that. It's an example of the camera not just answering questions, but putting the answers right where the questions are, which is very exciting. Smart text selection, Style Match, and real-time results will roll out in Lens over the coming weeks, so please check them out. These are all examples of how Google is applying AI to the camera to help you get things done in the world around you. And when it comes to applying AI, maps, and computer vision to solving problems in the real world, nothing is more real than driving a car.
To tell you more about that, please welcome the CEO of Waymo, John Krafcik. Thank you, everyone. Hi, everybody. We're thrilled to be on stage today with our friends at Google. Actually, this isn't the first time our self-driving cars have come to Shoreline Amphitheatre. Back in 2009, the parking lot outside this very theater hosted some of the first tests of our self-driving technology. Right here, a group of Google engineers, roboticists, and researchers set out on a crazy mission: to prove that cars really could drive themselves. At the time, most people thought self-driving cars belonged only in science fiction. But this dreamy, dedicated team believed self-driving could make transportation safer, simpler, and more convenient. And so Google's self-driving car project was born. Fast-forward to 2018: that project is now its own independent Alphabet company called Waymo. We've gone well beyond research. Today, Waymo is the only company in the world with a fleet of fully self-driving cars, with no human in the driver's seat,
out driving on public roads. And now members of the public in Phoenix, Arizona are starting to experience these fully self-driving vehicles. Let's take a look. OK, are you ready for your first driverless ride? Here we go. Oh, this is so weird. This is the future. Yeah, she's like, "Nobody's driving the car?" I knew it! I've been waiting for this. You'd never guess there's no one driving this car. Yo! Car! Selfie! Thank you, car. Thank you, car. So cool. These folks are part of what we call Waymo's early rider program: members of the public who have been using our self-driving cars in their daily lives over the past year. I've had the chance to talk with some of our early riders, and honestly, their stories are inspiring. One of them, Nisha, witnessed a terrible accident when she was young that left her never daring to get a driver's license. Now she takes a Waymo to work every day. There are Jim and Barbara, who won't have to worry about losing their mobility as they age. And there's the Jackson family: Waymo helps them navigate a packed schedule, getting Kayla and Joseph to and from school, practice, and time with friends. This isn't science fiction. When we talk about building self-driving technology, these are the people we're building it for.
In 2018, self-driving cars are already changing how they live and get around. Phoenix will be the first stop for Waymo's driverless transportation service, which launches later this year. Soon, everyone will be able to call a Waymo with our app, and a fully self-driving car will pull up, with no one in the driver's seat, to take them to their destination. And this is just the beginning, because here at Waymo we're not just building a better car; we're building a better driver. And that driver can be put to work in all kinds of applications: ride-hailing, logistics, personal cars, connecting people to public transit. We see our technology as an enabler for many different industries, and we intend to partner with many different companies
to make the self-driving future a reality for everyone. We can enable that future now because of breakthroughs in artificial intelligence and our investments in the field. In those early days, Google may have been the only company in the world investing in both artificial intelligence and self-driving technology at the same time. So when Google began making significant progress in machine learning, in speech recognition, computer vision, image search, and more, Waymo was in a unique position to benefit. For example, back in 2013 we were looking for a breakthrough to help us detect pedestrians. We were fortunate: Google had just deployed a new technique called deep learning, a kind of machine learning that builds neural networks with many layers to solve more complex problems. Our self-driving engineers worked with researchers from the Google Brain team, and within a few months we had reduced our pedestrian-detection error rate by a factor of 100. Yes, not 100 percent: 100 times. Today…
Thank you, everyone. Today, AI plays an even bigger role across our self-driving system, powering our ability to build a truly self-driving car. To tell you more about how machine learning makes Waymo the safe and skilled driver you see on the road today, I'd like to introduce Dmitry. Thank you. Hello, everyone, and good morning; it's great to be here. At Waymo, artificial intelligence touches every part of our system, from perception, prediction, and decision-making to mapping and more. To be a capable and safe driver, our cars need a deep semantic understanding of the world around them. Our vehicles need to detect and classify objects, interpret their behavior, reason about their intent, and predict what they'll do in the future. They need to understand how each object interacts with everything else. And finally, our cars need to use all of this information
to drive safely and predictably. Needless to say, there's a lot that goes into building a self-driving car. Today I want to tell you about two areas where artificial intelligence has had a huge impact: perception and prediction. Let's start with perception. Detecting and classifying objects is a critical part of driving, and pedestrians pose a unique challenge because they come in all shapes, postures, and heights. For example, here's a construction worker peeking out of a manhole, with most of his body hidden. And this pedestrian crossing the road is partially hidden by the plywood he's carrying.
And here, a pedestrian is wearing an inflatable dinosaur costume. We never taught our cars about the Jurassic period, but we can still classify it correctly. We can detect and classify these pedestrians because we apply deep networks to a combination of sensor data. Traditionally, in computer vision, neural networks have been applied only to camera images and video. But our cars don't just have cameras: we also have lasers that measure the distance and shape of objects, and radar that measures their speed. By applying machine learning to this combination of sensor data, we can accurately detect pedestrians in all their forms in real time. The second area where machine learning is enormously powerful for Waymo is predicting how people will behave on the road. Sometimes, road users act just as you'd expect.
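Before the prediction example, the perception piece just described, one learned model over combined camera, lidar, and radar evidence, can be caricatured with a toy confidence blend. The features, weights, and threshold below are all hypothetical stand-ins for what is really a deep network trained end to end on fused sensor data.

```python
def fused_pedestrian_score(camera, lidar, radar, weights=(0.5, 0.3, 0.2)):
    """Toy sensor fusion: blend per-sensor pedestrian confidences into one
    score. (A hand-built stand-in for a learned deep network over combined
    sensor data; sensors, features, and weights are all hypothetical.)
    camera: image-classifier confidence in [0, 1]
    lidar:  shape/size plausibility from the point cloud, in [0, 1]
    radar:  1.0 if the measured speed is pedestrian-like, else lower
    """
    w_cam, w_lidar, w_radar = weights
    return w_cam * camera + w_lidar * lidar + w_radar * radar

# A person in a dinosaur costume: the camera is unsure, but lidar shape
# and radar speed still point toward "pedestrian".
score = fused_pedestrian_score(camera=0.4, lidar=0.9, radar=1.0)
print(score >= 0.6)  # classify as pedestrian above a 0.6 threshold
```

The point of the example is the failure mode it avoids: a camera-only classifier is confused by the costume, but the other modalities still carry the signal.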
And sometimes, they don't. Take red-light runners. Unfortunately, we see more of these than we'd like. Let me slow it down and break it down from the car's perspective. Our car is about to proceed straight through the intersection. We clearly have a green light, which means the cross traffic has a red light and should be stopped. Just as we enter the intersection, from the right we see a car approaching fast. Our model understands that this is unusual behavior for a vehicle that should be slowing down, so we predict that this car is going to run the red light.
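A toy version of that anomaly check is easy to write down: a car that isn't decelerating hard enough to stop before the line probably isn't going to stop. This is purely illustrative kinematics with made-up numbers; as the talk says, the real system is a learned model trained on millions of miles of interactions, not a rule like this.

```python
def will_run_red_light(speeds_mps, dist_to_line_m, dt=0.5, comfortable_decel=3.0):
    """Toy anomaly check: given recent speed samples of a cross-traffic car
    (m/s, sampled every dt seconds) and its distance to the stop line, flag
    it as a likely red-light runner if it isn't braking hard enough to stop
    in time. (Illustrative only; not Waymo's actual method.)"""
    v = speeds_mps[-1]
    # Observed deceleration over the last two samples (positive = slowing).
    decel = (speeds_mps[-2] - speeds_mps[-1]) / dt
    # Deceleration needed to stop before the line: v^2 = 2 * a * d.
    needed = v * v / (2 * dist_to_line_m) if dist_to_line_m > 0 else float("inf")
    return needed > max(decel, comfortable_decel)

# 15 m/s and barely slowing, 10 m from the line: would need ~11.25 m/s^2.
print(will_run_red_light([15.2, 15.0], dist_to_line_m=10))  # → True
```

The useful property, mirrored in the demo, is that the flag fires *before* the violation happens, leaving time to react.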
So we slow down preemptively; here you can see the red fence in the visualization. That gave the red-light runner space to pass in front of us, and it barely avoided hitting another car. We can detect anomalies like this because we train our ML models on a huge number of examples. Our fleet has now self-driven more than six million miles on public roads, which means we've seen millions of real-world interactions. To put that in perspective, we drive more miles each day than the average American drives in a year. Now, building a self-driving car takes more than good algorithms: you also need very powerful infrastructure. At Waymo we use the TensorFlow ecosystem and Google's data centers, including TPUs, to train our neural networks. With TPUs, our training is 15 times more efficient. We also use this powerful infrastructure to validate our models in simulation. In that virtual world, we drive the equivalent of 25,000 cars, all day, every day. All told, we've driven more than five billion miles in simulation. At this scale, for both training and validating our models, we can teach our cars new skills quickly and efficiently. One of the skills we've started tackling is driving in bad weather, like the snow you see here. Today, for the first time, I want to show you a behind-the-scenes look at
what our car sees in the snow. This is what our car perceives before we apply any filtering. Driving in a snowstorm is hard because snowflakes create a lot of noise in our sensor data. But when we apply machine learning to that data, this is what our car sees: we can clearly identify each vehicle, despite all the sensor noise. The sooner we unlock advanced capabilities like these, the sooner we can bring our cars to more cities around the world, maybe even a city near you. We can't wait to offer our self-driving cars to more people, and to bring us all closer to a future where the roads are safer, easier, and better for everyone. Thank you all. Now please join me in welcoming Jen
to wrap up the morning for everyone. Thank you, Dmitry. That was a great reminder of how AI can help people in new ways. I started at Google as an engineering intern almost 19 years ago, and from my very first day I was struck by its commitment to pushing the boundaries of technology and the promise of what was possible, combined with a deep focus on building products that have a real impact on people's lives. Over the years, I've seen again and again how technology can drive real change, from our earliest products like Search and Maps to brand-new experiences like the Google Assistant. When I look at Google today, I see those same early values alive and well. We keep working hard, together with all of you, to build products for everyone, and to build products that truly matter. And we're always pushing ourselves to a higher standard: to contribute responsibly to the world and to society. We know that to truly build for everyone,
we need many different perspectives. That's why we've expanded the scale of I/O this year to include a broader range of voices. Over the next three days we've invited more speakers to talk about the broader role technology can play, from promoting digital wellbeing to empowering NGOs to achieve their missions. And at the same time, there are hundreds of talks on the technologies you came to I/O expecting from us; we hope you'll enjoy them and learn a lot. Welcome to I/O 2018. Enjoy it, and I hope you leave the next few days inspired to keep creating great things for everyone. Thank you all.