Google Cloud Next ’18: Day 1 Next Live Show

>> The show will begin in 10 minutes. >> Please make your way into the auditorium. The show will begin shortly. >> Take a moment and silence your mobile phone. >> Please take your seats. The show is about to begin. >> Please put away your digital distractions and turn your mobile phones to silent. Thank you. >> Please take your seats. The show is about to begin.

>> DJ DREN: Hey all! Thank you! I’ve had a great time spinning for you today. Before we get going, say hi to my sister Angel! >> ANGEL: Hi from NYC! We’ve actually been collaborating and jamming all morning with G Suite! How cool is that? San Francisco, give it up for your DJ this morning — my sister Dren! >> DJ DREN: Thanks, sis. (Applause.) Should we get this show on the road? Are you all ready for Next? Please welcome the CEO of Google Cloud, Diane Greene! (Applause.)

>> DIANE GREENE: Hi everybody! Welcome to Google Cloud Next 2018. This is Google’s biggest event ever, and Google Cloud’s too. We’ve got over 25,000 people registered and hundreds of thousands streaming from the cloud. We know how important engineering is to everything we do, so I want to let you know we have over 3,000 Google engineers here, and we all want to talk to you.

You are our motivation. We live in an information age, and information is powering our economy. Information is starting to power every single business. You know, IT has always been exciting, but its scope has mushroomed. I was talking to a CEO yesterday, and she said, yeah, my job has gotten a lot more interesting.

Tech encompasses all business and all society, and IT has gone from being a cost center to a driver for the business. We’re reengineering how we do business, and that goes hand-in-hand with the journey to the cloud. Tech is now core to every product, and computer scientists are making discoveries in every single discipline. Talking to CIOs, they tell me they realize they’re going to be shutting down data centers, and the other thing they say, looking at their workloads, is that just a tiny fraction of them are in the cloud.

We must be very, very early. So no wonder Google is seeing amazing growth. But why Google? Why choose Google as your cloud? Well, here at Google we have been working incredibly hard to build things for you, and if you think about it, Google is an enterprise company, just a very modern enterprise company. Google is about information. We have a cloud that is built to efficiently take in information, organize it, and put it back out with a lot of intelligence. This is what every company needs today. Information is driving the business, information that is supercharged, as only Google can do. A global cloud is an unbelievably complex set of technologies and capabilities, and innovation is continually required as you scale it. There have been 20 years of scaling and optimizing Google’s rather huge cloud, and it’s been done by an elite and very experienced engineering organization.

In fact, since I joined two years ago, I’ve viewed my job as surfacing all of these extraordinary technologies and services, then bringing excellence to how we make them available, how we support them, and how we make them easy to use for companies all over the world. The foundation inside Google is TI — that’s our technical infrastructure. People acknowledge Google has the most advanced cloud, and it is pretty amazing.

It’s many football-stadium-sized data centers, all running carbon neutral. Hundreds of thousands of miles of fiber optic and submarine cable laid under the ocean, hyper-fast interconnects, and specialized processors, TPUs, for machine learning. Big Data tools like BigQuery, and Spanner for distributed global transaction consistency. Something else that makes GCP intuitive for developers is consistency across services, with a common core set of primitives. It’s harder to build that way, but our users appreciate it. On top of the infrastructure are many unique assets. We have Gmail, Calendar, Chromebooks, and Ads, whose data can be combined with other data, and last but not least a phenomenal body of AI technology. AI and security: these are the areas where Google is heavily investing, among others. But why? Security is the number one worry and AI is the number one opportunity, and I believe that is the case for every company. Looking at security first: in large part due to all the horrendous attacks on Google over the years, we’re a leader in security.

It’s built into every single layer; it’s not bolted on. Google Cloud security starts with the Titan chip that is in every server and Chromebook and checks the boot by default. Everything stored and in transit is encrypted, and you can use Google keys or your own to do the encryption. Forrester named Google a leader in its security Wave. We made 20 security announcements in March, and there will be another 10 at this conference. For productivity apps, where cloud workers put their data, it matters. There is no more secure setup than combining a Chromebook, G Suite, and a two-factor authentication key that you plug in. Companies seem to be catching on: Chromebook sales were up 75% last year, and G Suite grew year over year. G Suite stops 99.9% of spam and phishing attacks. We’re deploying in large, large enterprises. One of France’s largest companies is deploying Chromebooks and G Suite to their 140,000 workers, many of them mobile. Reed Hastings of Netflix is a user of Chromebook and G Suite, and I heard this big “wow” from him last week; he said G Suite is getting phenomenal.

So, AI. Google’s founders saw the power of AI. They knew it would be important, and as I said, it’s key to reengineering a business. Today it’s built into everything that Google does. At Google Cloud that means data center energy usage, BigQuery, Gmail, and we are now working to make it easy for you. We’re incorporating the power of AI into everything that you do. AutoML lets you build a model without having to have an expert in ML, and judging by the usage of machine learning, I think we are headed for massive growth. Of course we have major efforts in many, many areas, and I want to call out in particular DevOps and Open Source. We want Google Cloud to be the best place for software development and deployment, so we’re bringing Google’s much-wanted automation, deployment, and monitoring tools to Google Cloud Platform. Google has long supported Open Source. Open Source lets you avoid lock-in with a given vendor and lets you avoid expensive operating system licenses. Google is the leader in Open Source: we have over 2,000 Open Source projects and have released millions of lines of code under Open Source licenses, an example being Kubernetes, one of the fastest-growing projects in the history of Open Source.

The reason why is that we had been running everything inside containers at Google for years, and we took that learning, rewrote it for the cloud, and then Open Sourced it, so it’s not surprising it took off so quickly. And Google Cloud is a forward-looking company. Did you know we have a quantum computing project, and several customers have early access via Google Cloud? Or IoT: with TPUs at the edge and on the server side, we can take advantage of a full software stack from the device to the server. We’re proud of being cutting edge. But we’re also proud of having the table stakes that an enterprise needs. We’ve been doing what the regulators, the industry, and the analysts are telling us to do. In fact, two years ago at Next I had a meeting with the industry analysts, and they were giving me a lot of hard feedback that we were not enterprise ready, and judging from other companies they’d seen trying to get enterprise ready, it might take ten years. So we buckled down, took the challenge, and two years of very hard work later, on those table stakes, we are psyched to be named a leader in three of Gartner’s Magic Quadrants.

(Applause.) Yeah! Infrastructure as a Service, API management, and content and collaboration, which is Drive, which is on track to cross one billion users this week. We hope for more Magic Quadrants, and we were recognized by Forrester as a leader in its platform-native security Wave. We were also named a leader in the one for data analytics; I’m forgetting the name. When we talk about enterprise readiness, it means bringing excellence to how we work with customers. As a result we developed something that’s been working extremely well: a vertical strategy, where we’re combining customer-facing domain expertise with engineering domain expertise. We did health first. We had the opportunity to take Google Brain’s considerable body of research in healthcare to the market, and we also saw the need for, and developed, an API for health record interoperability, which is now in alpha, and we have great customers. I have an exciting announcement, which is that we are partnering with the National Institutes of Health to make access and compute available for large datasets. And it’s working, because last week we won the award for best cloud for healthcare at HIMSS, the biggest healthcare conference of the year. A year ago we launched the other verticals: financial services, manufacturing and energy, transportation, and gaming and media. I just want to mention that Unity is moving to Google Cloud. (Applause.) And by the way, thousands of start-ups are building on Google Cloud Platform, and we see a disproportionate amount of use of our Tensor Processing Units among those start-ups.

Retail has been a great vertical for us. There is unlimited technology we can apply there, and today I am excited to announce that the giant retailer Target is running on Google Cloud. (Applause.) Mike McNamara joined Target, and I am pleased to welcome him to the stage. >> MIKE McNAMARA: Good morning. >> DIANE GREENE: Welcome. Mike is an impressive person. He doesn’t have a lot of time, but tell me what you can about your reengineering effort. >> MIKE McNAMARA: I would say I joined a company that understood the value of software and technology but hadn’t figured out how to unearth and create that value. So in three years we have transformed just about everything, and I would say that technology has gone from being an anchor on the business to being an engine for growth. It was largely an outsourced operation. I insourced it; I’m a firm believer that technology is so critical to industries these days that to give it to somebody else to do is probably the wrong idea. So we insourced it, reorganized the teams around product, Agile, and DevOps, and it’s made a difference.

>> DIANE GREENE: It’s unbelievable. Why did you choose Google Cloud? >> MIKE McNAMARA: To be honest, if you look back a few years, three, four, five, infrastructure was driving around in a horse and buggy, and today you have three high-performance cars parked in the driveway. So you choose a partner who you can get on best with, who suits your organization, and I think that Target and Google have a lot of shared values. My guys love working with your team. We’ve learned an awful lot from Google about site reliability engineering, and more than that, it’s the commitment to Open Source. That made all the difference. At the end of the day it wasn’t me who chose Google, it was my engineers. Google was important to the engineering team, so it mattered to me. >> DIANE GREENE: I’m glad you did. I’m proud of how we got you through Black Friday and Cyber Monday last fall. >> MIKE McNAMARA: Thank you for that. >> DIANE GREENE: Thank you for joining us. Cheers. So the engineers at Target chose Google, and we focus on engineer-to-engineer collaboration. In fact, last year we tripled the number of engineers working with our customers.

We have more engineers than we have sales reps: customer engineers, professional services, technical account managers. We have something we call OCTO, the Office of the CTO, a lot of former CTOs, super talented, and it has been in such incredible demand that we 10x’d it last year. And also, we do not want to do this alone. Partners are completely key to us and to our customers. We have several different categories of partners, and they’re quite different. We have some very deep integration and technology partnerships with some of the big enterprise companies that we all trust. Most recently we announced a great project we’ve been doing with Cisco based on Kubernetes. It’s hybrid; it lets you run seamlessly between on-prem and the cloud. Last month we announced the integration with G Suite and the ability to take CRM data and Google Analytics data and put it in BigQuery. People like that. Customers win when the tech companies partner. We announced SAP at Next 2017 and have made great progress since then. Here to tell you about it is Bernd Leukert. Here is his video.

>> BERND LEUKERT: Good morning and greetings. I’m excited about the amazing things we are doing with Google Cloud and this community, and about the opportunity to talk to you about our great partnership. Diane, thank you for inviting me to your amazing data center in the Midwest last fall. This was a privilege and a proof of the depth of the partnership and the trust we have developed. It made us realize that Google is operating at a totally different scale. I also came to appreciate how much security was involved at every single point.

Security to get into the complex, security for each server, and security to destroy drives that are no longer being used. Last year, we announced the availability of SAP HANA on GCP. We are ready to run core SAP applications on GCP. SAP Cloud Platform, the core of the enterprise software, is generally available on GCP, allowing customers and partners to build and extend business applications using machine learning and IoT.

We also announced early access to the SAP Cloud Platform SDK for Android. Furthermore, the first phase of the SAP Data Custodian solution is also GA on GCP. Customers like Colgate and other brands are running SAP on Google Cloud and are starting to take advantage of all the great innovation. We showcased a retail industry proof of concept, enriched with Google Cloud AI, at our conference last month. Our customers absolutely loved it! Diane, you and I remain very aligned on the future. We both keep pushing for openness, for more containerization, and for easier-to-use tools, as we work together to build the best apps for the enterprise.

It’s never been a more exciting time. Have a great time at Google Next. >> DIANE GREENE: Thank you, Bernd. (Applause.) You know, as we do these engagements with large customers, the big system integrators are important. Accenture and Deloitte are examples of our highly trusted partners, and we’re doubling down on the practices we are building with them. And we love our small boutiques, and we will help them with our engineers. Our channel is now over 12,000 partners, and we’re seeing great engagement. The last critical thing that we hear from all our customers is a need for talent. For that, we’re doubling down on training, and we also help with hiring. What we hear over and over again is that when our customers are working with Google, it makes it easier to recruit, because engineers like to work with us.

Also, if you’re hiring, you should check out the Hire app. It’s from Bebop, filled with AI, a cool app. We hope thousands of you will be trained in our boot camp here at Next and certified in Google Cloud. In closing my keynote, I will say it has been an astonishing year of understanding how you want to use technology to drive your businesses, and we are working so hard at building what you need, and also building what you didn’t know was possible but will soon need, things like AutoML. Google’s mission is to organize the world’s information. That’s Google’s mission. Google Cloud’s mission is to organize your information and supercharge it for you! Who better to talk about Google’s mission than Google’s CEO. Everybody please welcome Sundar. (Applause.) >> SUNDAR PICHAI: Thank you. All right. Thank you, Diane. It’s great to be here at Cloud Next with all of you. And all these enormous cubes! Diane told me Google’s containers were getting bigger; I had no idea how big.

It’s probably the only place I can tell a container joke. (Laughter.) Excited to be here. Diane mentioned Google’s mission: our mission is to organize the world’s information and make it universally accessible and useful. I’ve always thought we were fortunate as a company to have a timeless mission, one that feels as important today as it did 20 years ago. What’s changed in that time is how we think about our users, and a big part of the change is all of you. In the beginning we focused mainly on helping our users get things done in the context of their personal lives. But then we launched Gmail, and we realized we could deeply impact the way people work. Over the past decade we have internalized this as a part of our mission. So now when we talk about our users, we mean businesses large and small, developers, partners, creators, and everyone in between. We want to help companies of every size, from the coffee shop on the corner to complex, large enterprises with thousands of offices around the world.

To all of you here. So today, realizing our mission means helping all of you realize yours. To do that, we are bringing the best of Google’s technology and infrastructure and opening it up to the widest possible community of partners, customers, and developers. And when we say “opening it up,” we really mean opening it up. We create open platforms and share our technology because it gets us to better ideas faster. We have seen this play out many times over the past two decades. Take the example of Android. It started ten years ago. We wanted to bring everyone together to build phones. So we launched with one phone and one carrier in 2008, and we put out an Open Source operating system. Today we have an incredible ecosystem of phone operators and app developers, leading to 24,000 different devices from 1,300 different brands.

Kubernetes is another great example. It’s built on Google’s years of experience running production workloads at scale. We Open Sourced it in 2014, and it got even better as we began to incorporate best practices from the greater community. It is now the industry standard for running and managing containers. To go from release to number one in just a few years is amazing. That’s the power of Open Source. Today we are at a similar inflection point with AI, and we are going to take a similar approach. This is why we created TensorFlow, to make it possible for anyone to use AI. Just a few years in, we are seeing incredible use cases, especially in healthcare. AI is helping doctors detect diseases and decreasing patients’ risk of heart attack or stroke. We know AI holds promise for the world. We are using it to reimagine our own products, from Search to Maps to Android, and we want to bring the power of that approach to all of you. We want to make it possible for you to apply Google’s years of investment to problems that matter to you and to your customers.

It’s hard, but that’s what a cloud journey is all about. The cloud is going to be a huge opportunity for Google and for all of you. We are deeply investing for the long run, and the best is yet to come. I am very, very optimistic about what we can accomplish thanks to the community here and the journey we are on together. I want to turn it over to Urs Hölzle to elaborate. He was employee No. 8 at Google and was working on the cloud long before anyone was calling it the cloud. Let’s give a warm welcome to Urs. (Applause.) >> URS HÖLZLE: Good morning. As Diane said, cloud computing is a fundamental shift in computing, with many great things ahead. Today I want to talk about what Google Cloud is doing to bring the cloud to you, even for the workloads that will remain on-premise the longest. The cloud gives you unprecedented agility, performance, reliability, and flexibility. Using Google Cloud Platform, you get access to almost unlimited computing on demand, over the world’s biggest and most powerful network — a combination that enables you to get new insights, serve more customers, and build more things.

On GCP, you get services like Spanner, our globally consistent relational database. You get machine learning APIs to add vision, speech, or natural language to your applications, or BigQuery, our high-performance cloud data warehouse that helps you break down data silos. And with our new Velostrata service, you can migrate workloads to GCP in minutes. But cloud computing is still missing something: a simple way to combine the cloud with your existing on-premise infrastructure, or with other clouds. In fact, the industry has provided the opposite, and that is a problem. Cloud providers do things their own way, often differentiating in areas that aren’t differentiated – they are just different. It might be about specifying permissions, setting up a virtual machine or network, or any number of ordinary tasks. All of these are different in different clouds.

And that creates unnecessary complexity everywhere, and it’s getting worse. Today, eight out of 10 enterprises have a multi-cloud strategy, and that’s on top of their own on-prem infrastructure, which isn’t going away overnight. Thus, as an IT owner, you are faced with too much to do. You define security policies and roles, manage access to services and applications, monitor the health of your setups, debug operational problems, and so on. All of this is challenging enough in a single cloud environment. But in a hybrid environment, the way most enterprises work, it gets even harder: you have to do everything several times, once for each environment, each with its own rules. Right? That’s a lot of unnecessary work. So it’s not surprising that administration has become one of the key expenses.

According to IDC, between 2005 and 2015, spending on servers fell 15% but spending on administration rose by 83%. That’s not a good trend, and we want to change it. The move to software containers has provided some help in simplifying and speeding up how we package and deploy software. Containers wrap an application and all the things it needs, allowing it to run across lots of different environments. Google put containers into Linux over a decade ago. As Sundar mentioned, four years ago we released Kubernetes, a better way to organize this universe of containers and microservices, and today it’s by far the most popular way to run containers.

With Kubernetes, you don’t have to worry about operating system patches. It comes with many tools for managing releases, load balancing, networking, etc., and supports continuous integration and deployment. That’s why Kubernetes adoption has been really strong over the past four years. In a survey last year, nearly three-quarters of enterprises said they were using Kubernetes; I’m sure it’s higher today. So it’s the industry standard for managing applications. The best way to experience Kubernetes is on GCP as Google Kubernetes Engine, also known as GKE, the only container service that’s been generally available for over three years, built and maintained by the same team that first developed Kubernetes.
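As a sketch of the declarative model described above, here is a minimal Kubernetes manifest; the service name, image path, and replica count are hypothetical, not from the keynote:

```yaml
# A minimal Deployment: Kubernetes keeps three replicas of the
# container running and replaces any replica that fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend                # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: gcr.io/example-project/frontend:v1   # placeholder image
        ports:
        - containerPort: 8080
---
# A Service gives the Deployment a stable name and load-balances
# requests across its pods.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```

Applied with `kubectl apply -f`, this is the kind of release, load-balancing, and rollout machinery the passage refers to.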

But Kubernetes solves only part of the problem. A lot of complexity remains at higher levels, especially for service management: how containerized services talk to each other, how they’re authenticated, how they can discover and use each other to create an application, and so on. And that’s even more important today, as many companies are moving from monolithic software to microservices to make software more maintainable. So let me tell you how we are extending Kubernetes to address your fastest-growing costs. I am very excited to introduce Istio, another Google-developed open-source product.

Istio extends Kubernetes into these higher-level services and makes service-to-service communication secure and reliable in a way that is easy on developers. You can discover, connect, and monitor services holistically, across multiple locations, and still manage and monitor everything from a single place. The automation doesn’t just lower costs; you get more information about how your services are performing, which means you can develop and manage them better. We started working on Istio last year with IBM, Cisco, Pivotal, Lyft, and Red Hat. I’m excited to announce that as of this week, Istio is at version 1.0, ready for production use. Many customers are already using Istio in production, including eBay and AutoTrader. (Applause.) Thank you. Let’s take a look at how Istio service management will work. At its center are certified Kubernetes environments. Kubernetes certification was introduced last year as a further guarantee of compatibility. Today you can choose from many certified environments. On GCP, GKE is the best way to use Kubernetes: it’s very close to open-source Kubernetes but with much lower operational hassle. The same will be true for Istio.
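The secure service-to-service communication described here is configured declaratively rather than in application code. As a hedged sketch using Istio’s current security API (Istio 1.0, announced above, expressed the same idea through its older MeshPolicy resource), one mesh-wide policy can require mutual TLS between all services:

```yaml
# Require mutual TLS for every service-to-service call in the mesh.
# No application code changes are needed; the Istio sidecars handle
# certificate exchange and encryption.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system     # applying here makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT              # reject plaintext traffic between services
```

This is what the transcript means by unforgeable identities and encrypted calls with no code changes: the policy lives in the platform, not in each service.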

Today, I’m very happy to announce that service: Cloud Services Platform, a fully managed service platform that puts all your cloud management of Kubernetes and Istio in one place, created and supported by the same team that wrote Kubernetes and Istio. (Applause.) With the Cloud Services Platform you can better control traffic with dynamic route configuration, conduct A/B tests, release canaries, and gradually upgrade versions using incremental deployments. And if an API is business critical, CSP integrates natively with Apigee’s API platform on GCP, so you can make your services available to developers inside and outside your organization, with Apigee’s industry-leading API management. Perhaps most importantly, the Cloud Services Platform can transform a critical element of your computing strategy, namely security. Of course, you get the automated patch management of Kubernetes. You can use a consistent set of security services that are independent of the application logic, so developers don’t need to be experts on security. These policies work on all services regardless of where they’re located; they provide unforgeable identities that authenticate encrypted calls to other services, and all this requires no code changes. So you can have one security model and one set of policies, versus many.
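The canary releases and A/B tests mentioned above map onto Istio’s routing resources. A minimal sketch, with a hypothetical "frontend" service and version labels (not taken from the keynote), that sends 5% of traffic to a new version:

```yaml
# Split traffic between two versions of a hypothetical "frontend"
# service: 95% to the stable v1, 5% to the v2 canary.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - frontend
  http:
  - route:
    - destination:
        host: frontend
        subset: v1
      weight: 95
    - destination:
        host: frontend
        subset: v2
      weight: 5
---
# Subsets map the route targets above onto pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Gradually raising the v2 weight is the "incremental deployment" pattern the transcript describes.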

For visibility, we’re also expanding Stackdriver, our monitoring and management solution, so that service monitoring works out of the box. You can understand the performance of an application with built-in request tracing and profiling. Last but not least, CSP also supports serverless, making your developers even more productive. That’s a lot of functionality, and this is only the start of building a stronger, more customer-friendly, fully hybrid way to use the cloud. Let’s look at a demo. Please welcome Weston Hutchins. >> WESTON: To see how Cloud Services Platform helps me as a developer or operator, let’s take a look at a typical retail web app – the kind of multi-tier app that just about every company, regardless of size, has.

For this demo, we’ve built a Kubernetes app called the ‘Hipster Shop’ – the one-stop shop for everything you need for the hipster lifestyle. Let’s look at how this app is implemented under the covers. To do that, we’ll jump over to the Cloud Console. We deployed this app on GKE, our managed Kubernetes service. Here you can see all the k8s deployments that make up this app.

Let’s zoom into the cart service. You’ll notice that it’s deployed across two different clusters for redundancy — one on the East Coast and one in the West. But as a developer, I don’t want to think about deployments; I want to think about how these deployments are related to the ‘services’ that make up my application. And Istio helps me with that. I deployed Istio into both of these clusters, and immediately I started to see interesting data from the services. To see that, we’ll turn to Stackdriver and use the new service management features that light up automatically once you’ve installed Istio. What you’re seeing here is the new Service Topology graph for Stackdriver. Right away, instead of just a list of services, I get a graph that shows my microservices and the relationships between them. The circles represent the services, and the lines are the dependencies between services. If I hover over a line, I can see which services are communicating with each other. Oh, and by the way, this all works out of the box — no scripts, no code changes.

Istio automatically sends the appropriate metrics, which then build the graph displayed here. Now let’s drill in a bit. If I click on the ‘frontend’ service, I get a pop-out with several relevant dashboards for my service’s health. I can see metrics like the number of requests per second, errors that my end users are seeing, and end-to-end latency across my service. Again, there was no extra instrumentation required; Istio automatically collects this data. But if I notice a problem, I’ll want to investigate even further. Let’s take a look at the new metrics view for the frontend service. The bar at the top of my screen is a timeline over the last hour of interesting events that Istio captured automatically — things like errors or performance anomalies.

The graphs off to the left show me critical traffic metrics. The top graph gives me a count of requests grouped by error code, so I can easily see if users are hitting 500 errors. And if I scroll down, I can see a heatmap of my service’s response time. The service-level intelligence that Istio provides helps ensure higher reliability and availability for my application. That’s a look at some of the individual metrics, but what really matters is whether we’re meeting our business objectives. And that’s where Service Level Objectives come in. Here you can see the new SLO dashboard.

This gives me a detailed look at business-critical services — things like whether my Buy button is throwing errors or has high latency. In this case, we’ve established two Service Level Objectives: one around availability, which we’ve set at 99.25%, and another around latency, which will send us alerts if we notice latency increases at the 90th percentile. From here I can jump into a number of new features, like application traces and error aggregation for Istio-enabled services, which we’ll get into in more depth at the breakout sessions. These are just a sample of the features we’ve added to Cloud Services Platform. Back to you, Urs. >> URS HÖLZLE: Thank you, Wes. (Applause.) The Cloud Services Platform brings many advantages to containers.
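To make the 99.25% availability objective above concrete, here is a short sketch of the arithmetic behind an SLO error budget. The 30-day window is an assumption for illustration; the demo doesn’t state one:

```python
def error_budget_minutes(slo: float, window_days: int) -> float:
    """Minutes of allowed downtime for a given availability SLO
    over a measurement window of the given length."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.25% SLO over an assumed 30-day window leaves 0.75% of
# 43,200 minutes, i.e. 324 minutes (about 5.4 hours) of downtime
# before the objective is violated.
budget = error_budget_minutes(0.9925, 30)
print(f"{budget:.0f} minutes")  # prints "324 minutes"
```

This is why an SLO dashboard alerts well before availability actually drops to the target: the budget is consumed minute by minute.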

With Istio and Kubernetes, you manage not just the implementation and deployment of a service; you manage the service itself as it’s running. Today, for some people, a cloud strategy is simply a combination of lift-and-shift for existing systems plus writing new single-cloud code for new applications. That misses out on so many benefits of the cloud. Using Istio, you get a common platform to train your people on, a common service and security model, reduced operational complexity, and faster innovation. You get more of what the cloud has to offer — better security, lower costs, simplified operations, all based on open source. And as you just saw in the demo, on Google Cloud you get the best way to experience Istio, with a fully managed, hassle-free service that’s still fully compatible with the open source version.

But there’s something else we want to show you today, something that makes the Cloud Services Platform a true breakthrough. Back to you, Weston. >> WESTON: Everything you’ve seen here is running on Kubernetes, and Kubernetes runs everywhere. But the one thing we consistently hear from customers is that they love the automatic management of GKE, but they also need those management capabilities wherever they have computers — in their own datacenter, in a grocery store, or on a factory floor.

Everywhere. Let’s go back to the console. When we started the demo I showed that the app was deployed “across two GKE clusters” — one in the east and one in the west. But the one thing I didn’t tell you was that the second GKE cluster isn’t running on GCP. In fact, if we zoom into the location field, we can see ‘Moscone’ listed as the location. This cluster is running on a vSphere rack here on stage. (Applause.) I’m excited to announce GKE On-Prem – a managed Kubernetes offering that runs inside your datacenter.

(Applause.) With GKE On-Prem, you get the GKE experience in your environment. And the best part is that GKE On-Prem looks and feels the same as GKE in the cloud. I can continue to do things like edit configuration, set policy, and even deploy new apps to my on-prem cluster as well as my GCP clusters, all from the same tools. Think of GKE On-Prem as adding an additional zone to your app, one that’s in your own datacenter. Back to you, Urs. (Applause.) >> URS HÖLZLE: Thank you. So there you have it: you can deploy and manage the same application using a single tool — on-premise if you want, on your own servers, managed with a browser.

So whether you are modernizing existing code by wrapping it in containers, or writing modern microservices, you have one way to configure, one way to deploy, one way to secure, one way to audit. Either way, managing or monitoring the application is the same as in GCP: a single, consistent, unified way to do your job. No multiple configs, multiple policies, multiple trainings, separate teams. Just one place to do it right, deploy, and go, wherever you are. And with our Cloud Marketplace, which we announced last week, you can find, purchase, and deploy container-based third-party applications into either GCP or on-premise environments, and manage them exactly the same way. That’s what we mean by bringing the cloud to you. And that brings real, tangible benefits to our customers, because a consistent hybrid platform across all environments is a natural choice for many businesses, for many years to come. So if you’re running your services in branch offices or factories or on ships, you can use the same consistent way to deploy, manage, and secure these services. I’m pleased to see how quickly the industry is adopting a hybrid, open approach. Cisco is a key partner. We started working together about a year ago, and last fall we announced our plans to develop hybrid cloud products together.

I’m excited that we have made so much progress. Here to talk about that is David Goeckeler, Executive Vice President of Cisco’s Networking and Security business. Please welcome David on stage. (Applause.) >> DAVID GOECKELER: Urs, how are you? >> URS HÖLZLE: Thank you for joining us, David. >> DAVID GOECKELER: Great to be here. >> URS HÖLZLE: How does the partnership fit with your vision for the cloud? >> DAVID GOECKELER: For the past 30 years we have been connecting the world. As you talked about, the world started going down the path of the cloud, and it has changed into a multicloud era. Cisco was deeply invested in how we bring this to the world, how we orchestrate it, and you and I, Urs, found out that we are aligned in our view of how we are going to bring this technology to the world. Can we build an environment where users can build an application once and bring it to the world? Can we do that in an open, extensible way, with the leading architecture today, based on Kubernetes and Istio, and bring in Cisco’s world-class networking and security? That’s what we have been focused on.

I’m happy to say we have made a ton of progress between our two teams. We now have the Cisco Hybrid Cloud Platform for Google Cloud, the first GKE-certified platform. >> URS HÖLZLE: And I think you have an announcement coming up? >> DAVID GOECKELER: As you said, we have been in trials with customers, getting terrific feedback. This solution will go to general availability next month, in August. So this vision that you painted here, and all this tremendous work of how we take an application, write it once, and run it in the cloud or on-prem — customers can get started down that journey today.

>> URS HÖLZLE: I’m excited that we are aligning things for our customers and that we have a joint road map together. >> DAVID GOECKELER: We’re super excited about what we’re going to bring to the industry together. The collaboration has been great over the last year, and we look forward to driving this forward. >> URS HÖLZLE: Thank you, David. >> DAVID GOECKELER: Appreciate it. >> URS HÖLZLE: Appreciate it, too. (Applause.) So back to the demo: the Cloud Services Platform will be available this fall in alpha, and it will bring you security services and enforcement, and we will share best practices at that time. And just like Kubernetes greatly simplified containers and triggered an explosive ecosystem over the past four years, Istio simplifies services, and you will see a robust ecosystem of third-party services and tools emerge for it very, very soon. So we are bringing the cloud to you and ending the false dichotomy between on-premise and cloud. We help you modernize with lower management costs and built-in security policies, all in one place, consistent from on-premise to the cloud, with a rich selection of tools and one goal in mind.

From the network to the applications to the management, we bring the cloud to you. Thank you very much. (Applause.) And now I’d like to welcome our next speaker, Prabhakar Raghavan, who will talk about some very exciting developments in collaboration and the future of work. Welcome, Prabhakar. >> PRABHAKAR RAGHAVAN: I have spent a decade of my career competing against Google. Then it dawned on me: the secret to Google’s many successful products — Search, Maps, portals. I was inspired to use this to transform the way we work through G Suite, because we spend a third of our lives at work. So what is G Suite? Virtually all of you know Gmail, reimagined email. Google Docs, Sheets, Drive and Calendar, Hangouts Meet for video conferencing, and Hangouts Chat for team communications. Together these applications have over 1.4 billion monthly active users, and that number includes 80 million students — your future employees. Over time we put all of these applications together to create G Suite. Today we have over 4 million businesses relying on G Suite. These include everything from small businesses to the likes of PwC and Salesforce.

Companies like Airbus have chosen G Suite as an investment in their people, to remove the mundane from their work lives and let them focus on their true creative potential. What does this mean? To get a glimpse of this, let’s roll the video. (Music Playing.) >> Hey Google, join the meeting. (Music Playing.) (Applause.) >> PRABHAKAR RAGHAVAN: What you just saw was how one team reacted to a high-stakes challenge using G Suite — no more Russian roulette and anxiety. I know you’re thinking this guy is playing a vision video. No, that wasn’t a vision; everything in that video is either in place or will be announced at this conference.

My team and I talked to customers to see what it would take to truly reimagine work, and we came back with three inalienable things: G Suite had to be secure, smart, and simple. Let’s take these in turn and see what each means. First, secure. A cloud-native model of security eliminates attack vectors that most others have resigned themselves to. For example, you cannot lose a thumb drive containing a Google Doc, because a Google Doc cannot sit on a thumb drive — it sits in the cloud. So the security officer doesn’t have to run to the local laundry where employees forgot thumb drives in their pockets. Full-stack security is self-administering: no versions, no patches; everybody is working on the latest, greatest, and most secure version at all times. For two-factor authentication, as you heard from Diane, we have security keys that plug into your phone or laptop. They complement passwords and cannot be broken. But let’s look at the outcome: for G Suite customers that have deployed our security keys, to date we have a grand total of zero account hijackings. That is security Nirvana! (Applause.) But it’s about more than just protecting end users.

We need to make it super simple for admins to protect users. Earlier this year we introduced the G Suite security center, a one-stop shop that gives you security analytics and recommendations — for instance, here are the settings that make your enterprise more secure. Today I’m announcing a critical new dimension: the G Suite investigation tool. What does it do? It lets an admin investigate, for instance, an unusual number of files being shared outside the domain. No more gathering data from multiple silos and running scripts, where hours later you have the answer — because hours later is too late.

That is security reimagined. The investigation tool goes into beta today. Next, we heard from customers that they would like to have select users’ data residing in Europe or in the United States. G Suite data regions will allow the primary data for key G Suite apps to be resident, at the customer’s demand, in Europe or the United States. And best of all, this is dynamic, meaning that if a user moves or a file’s ownership changes, the file moves seamlessly under the covers with no impact on the user. It is available to all G Suite customers today. (Applause.) Now, you’ve heard Diane and Sundar talk about the many years of investment that Google has had in artificial intelligence, and it’s the reason for leading experiences and magical moments in Search, YouTube, Maps and so on. Many of you know that millions of people in the world use Google Assistant to get more done every day. And we are committed to bringing that magical assistance to the workplace. We recognized that building an assistant for the workplace wasn’t a matter of building a consumer appliance and tossing it over to the enterprise.

So my team went off to study what it is that users do over and over again every day in the workplace. The results, perhaps not surprisingly, are that we do three things repeatedly: first, reading, or content consumption; second, writing; and third, meetings. So let’s take each of these in turn and look at what AI means for them. One of the things we looked at was tabular data in a spreadsheet. If you don’t know how to write a spreadsheet formula, you would have to run off to someone who could do that for you. With Google Sheets you can ask in natural language, for instance, “give me the top stores by sales for the last week,” and out pops the answer. That is content consumption reimagined. (Applause.) Let’s talk about writing. How many of you set aside time every single day to respond to messages, chat, and so on? I see a few hands go up. You don’t see my hand go up, and the reason is I use some magical tools from G Suite.

It’s been over two years since we introduced Smart Reply in Gmail, an industry first. Gmail users process hundreds of messages every single day, and for each incoming message Smart Reply proposes three responses — for instance, “yes, I can make it,” or, “sorry, I’m unable to make it.” What we’re measuring is that over 10% of Gmail replies are these machine-written smart replies. Think of that. We are at a point in AI where 10% of your replies are machine written and human accepted. I’m happy to tell you that we’re bringing Smart Reply to Hangouts Chat in the coming weeks. (Applause.) Thank you. The responses are casual enough for Chat but still good enough for the enterprise. This is an approach you will see us taking repeatedly: we will build a feature on one G Suite product and bring it to others. You may have seen our announcement of Smart Compose. I want you to watch carefully. You see somebody writing an email, and there are several things it’s picking up. It’s picking up the context of the email you’re composing, so it learns that you’re writing about office supplies and suggests ink, cartridges, and computers — not chips and salsa.

Something you cannot see here is that Smart Compose figures out how you address your correspondents. For instance, with Diane Greene I might start my email with a chipper “hey, Diane,” but for Urs I’m respectful and begin all my emails with something like, “Herr Professor Dr. Urs Hölzle!” We will see this ramping up over the coming weeks. (Applause.) One of the things that makes working on this product great is the feedback we get, and with Smart Compose it’s been gratifying to get feedback saying, you know what? I’m prone to writing errors, and it’s not just a matter of saving time with Smart Compose catching errors — it’s building my confidence as a professional writer. So we studied grammar correction. How do you correct grammar? We decided to adopt a novel approach.

Many of you are familiar with Google Translate, which takes text in a language like French and translates it to English. We adapted that technology, and what we are doing is taking text in incorrect English and translating it to corrected English. This approach is not brittle like rule-based approaches, and your data stays put — no Chrome plugins, no data going all over the map. Grammar correction in Google Docs is available today in beta. One of the nice things about these AI features is that they get better over time. Now let’s talk about scheduling meetings. Scheduling meetings the old way is hellacious: you have to find a time for four people in Tokyo, seven in Zurich, and more in London, and when you are done with that you have to find the right meeting rooms in each location. No more of that with Google Calendar. We take the list of participants from you, and the rest happens magically.

We find the right time slots and rooms in each location based on your historical patterns — which rooms you book, their proximity to your seat, and so on. That is scheduling reimagined. We also have a deep commitment to high-definition video meetings. Last fall we launched meeting hardware for our Hangouts Meet service. This sits in your room and lets you have video conferencing in a way that’s robust, scalable, and affordable. We have folks who have deployed thousands of these rooms. We will be enabling voice commands for Hangouts Meet hardware, so you can start a meeting with “hey, Google, start the meeting,” and there is more to come. (Applause.) We will be rolling this out to select customers over the fall. Now, that said, let’s all recognize that AI in the workplace is more than just talking to a plastic box. You have to nail the key user journeys of reading, writing, and meetings. Finally, let’s talk simple. Building products that are loved by billions of users has driven a discipline of simplicity and design in my team.
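The cross-time-zone search behind the scheduling example above can be sketched as a small constraint check. This is a toy illustration, not Google Calendar’s actual algorithm: the office locations come from the example, while the 09:00–18:00 workday is an assumption, and room selection and booking history are left out entirely.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Participant locations from the example above.
OFFICES = ["Asia/Tokyo", "Europe/Zurich", "Europe/London"]

def workable_utc_hours(year, month, day, start=9, end=18):
    """Return the UTC start hours on the given date that fall inside
    everyone's assumed 09:00-18:00 local workday."""
    slots = []
    for hour in range(24):
        t = datetime(year, month, day, hour, tzinfo=timezone.utc)
        if all(start <= t.astimezone(ZoneInfo(tz)).hour < end
               for tz in OFFICES):
            slots.append(hour)
    return slots
```

Running this for a summer date shows why scheduling across Tokyo, Zurich, and London is hard: with daylight saving offsets of UTC+9, UTC+2, and UTC+1, only the 08:00 UTC hour satisfies all three workdays.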

And we’re bringing that billion-user simplicity to the workplace. Users are demanding it in the workplace, and others are following suit. This is what users expect. Now, the outcomes of this simplicity are startling and measurable. We measured that in Google Docs, 74% of the time users are collaborating — not the old way of one person writing on a desktop and handing it off, and 27.3 versions later you’re somewhere.

Attachments make no sense in the cloud age. They serialize your workflow, slow you down, and lead to printing and security exposure. Now, these insights are not mine alone — I’m going to invite the person who taught me some of them, Kim Anstett, CIO of Nielsen. Please welcome Kim. Thanks for joining us. >> KIM ANSTETT: Great to be here. >> PRABHAKAR RAGHAVAN: You work at a company that has a storied history. What were you thinking, diving into G Suite? >> KIM ANSTETT: When I think about our company, we have 95 years of innovation.

As the world and consumers change, we have to change, too. My job is to make sure over 40,000 associates in over 100 countries have the most robust tools to innovate and serve our clients, and the change is happening faster than ever. So in 2015 we started to look for a state-of-the-art collaboration platform — one that would allow our associates to work together anytime, anywhere, on any device — and that brought us to G Suite. >> PRABHAKAR RAGHAVAN: So we talked about simplicity, and that is something you have always espoused.

How do you think about simplicity in G Suite now? >> KIM ANSTETT: I love the simplicity. I am impressed by the simple yet robust tools in G Suite. Every day, Docs, Sheets and Slides make our jobs easier because they blur the lines of how we live and work. One of my favorite examples is those moments when I talk to an associate who has experienced that first realization that there will never be another version 25 of that presentation deck. And never again do they have to wait hours or maybe days for a colleague to send them the latest version of a file so they can apply their edits.

That doesn’t happen with G Suite. We also found that video was a game changer: collaboration across teams, across the globe, is so much simpler with one click in Google Meet. But it isn’t just the features of G Suite that are simple — we found the migration was extremely simple. On July 18th, 2016, we cut over the entire company in a single weekend. On Friday our associates were using desktop tools for mail and calendar. By Monday morning they were live in Chrome, accessing Gmail, Calendar, and every collaboration tool they could ask for. This was successful because of the strong partnership we had with Google and Maven Wave, who helped train our employees and had people live in the business to bring associates up to speed on Monday morning.

Our most senior executives were blown away at how seamless our largest global IT deployment was that day. (Applause.) >> PRABHAKAR RAGHAVAN: So the tools are great, deployment is good. What’s the business outcome that you get? >> KIM ANSTETT: It’s been incredible. We are absolutely delighted with our decision to go to Google. We have over 90% of our associates collaborating in Google Drive.

We are seeing over 500,000 video calls each month. >> PRABHAKAR RAGHAVAN: Wow. >> KIM ANSTETT: And we have a brand-new, fresh, 100% digital experience for our employees through our company intranet, which is on Google Sites. We have seen over 100 enhancements come out for G Suite, and over half of those are impacting our employees every day. So to all of you here today who are thinking about making the move to Google, I can say it’s very simple: it will be the best decision you make, and it’s because Prabhakar and team are the best.

>> PRABHAKAR RAGHAVAN: Thank you for your leadership, Kim, and thank you for joining us. >> KIM ANSTETT: Thank you. (Applause.) >> PRABHAKAR RAGHAVAN: Today, we talked about how organizations reimagine work using G Suite, removing the mundane for their employees and letting them literally work on the same page. I would like to invite you all to join us on the journey of reimagining the workplace, and invite you to join us tomorrow on this stage, where my colleague Garrick Toubassi will be announcing more things to come in G Suite. Now, we’ve talked about how AI can enhance your work, and here to tell you more is Fei-Fei Li. >> FEI-FEI LI: Good morning! In my nearly two decades as an AI technologist, I’ve watched our field evolve in remarkable ways. Today, AI is transforming industries all over the world. And in the process, it has the potential to dramatically improve the way we work and live.

AI is empowerment, and it’s our goal to democratize it. At Google Cloud, the last year has been one of innovation in applied AI. We’ve advanced the state of the art significantly, all while making it more accessible than it’s ever been. With our APIs, some of the most powerful capabilities of machine learning can be realized in just a few lines of code. For example, Bloomberg is using our Cloud Translation API to deliver news to a global audience in seconds.

Of course, data plays a pivotal role in AI, which is why Kaggle, our data science platform, makes over 7,000 datasets publicly available. With nearly 2 million members, it’s the world’s largest community of data scientists. We’re also lowering the compute barriers to AI with our Tensor Processing Units, or TPUs. These custom chips dramatically accelerate machine learning tasks, and are easily accessed through the cloud. TPUs allowed eBay to reduce the training time of their visual search model by a factor of almost 100 — from months to days.

And researchers at Stanford used TPUs to build an image recognition model so sophisticated that it’s being used by neuroscientists to better understand how vision works in primate brains. Our second-generation TPUs, announced last year, are now generally available. This means they’re within reach for every Cloud user, including free-tier users. Today, I’m excited to announce that we’re bringing our third-generation TPU to Cloud, demonstrating our ongoing commitment to putting the best hardware in the hands of AI developers. And this is just the beginning. We’re working with customers from retail to healthcare, from education to manufacturing, and from agriculture to entertainment. AI is no longer a niche for the tech world — it’s the differentiator for businesses in every industry. But true democratization means bringing that differentiator to an even wider range of developers. We know that many of you need more flexibility than our APIs were designed for, but aren’t yet ready to make use of advanced tools like TensorFlow and Cloud ML Engine. That’s why we developed AutoML. AutoML lets you extend our most powerful machine learning models to recognize data specific to your challenges — without writing code.

With AutoML, anyone can make machine learning work for them. Our first release was AutoML Vision. It extends the Cloud Vision API to identify entirely new categories of images. This made a big difference for Urban Outfitters. They wanted to provide a visual search feature to let users order a product as soon as it catches their eye — even in the real world — all through a mobile app. Unfortunately, their in-house model didn’t meet their expectations, and they needed a new approach. After evaluating their options, they decided to start over using AutoML, trained on images from their catalog. First came an immediate accuracy boost. Since AutoML is based on Google’s state-of-the-art image recognition technology, their app had a major advantage when recognizing products in the real world.

It was faster, too. Processing time dropped from over 10 seconds to just 2 to 3 seconds, and training time fell from weeks to a matter of hours. And although it’s based on the Cloud Vision API, the resulting model is available only to Urban Outfitters. We believe your data is not only private, but should remain your competitive advantage. That’s why none of the data that trains an AutoML model is used in any other Google machine learning application. (Applause.) We’re excited to announce today that AutoML Vision is going into public beta. Over 18,000 of you have already signed up to use it, in industries ranging from manufacturing to healthcare to entertainment. We can’t wait to see what you create. And today, we’re announcing two new AutoML products, extending its capabilities into language. The first is AutoML Natural Language, which builds on the text analysis abilities of our Natural Language API to include an understanding of subject matter specific to your industry.

The second is AutoML Translation, which extends Google’s machine learning translation technology to recognize the jargon, terms, and figures of speech of your application — in 27 language pairs, with more on the way. This means translations that more faithfully capture the nuance your audience expects. And now, to demonstrate how easy it is to apply AutoML to your own ideas, please welcome Developer Advocate Sara Robinson. (Applause.) >> SARA ROBINSON: Thanks, Fei-Fei. The best way to see how AutoML Vision and NL work is through live demos. Let’s start with Vision. To show you AutoML Vision, we’ll look at a model I’ve built for predicting the species of a leaf given an image.

The data for this is sourced from the LeafSnap dataset, and our model was trained on images of 10 different types of leaves. Here in the AutoML UI is where I gather and label all of the images that will be used to train my model. I can upload them directly in the UI, and I only need 10 images per label to start training. The best part about AutoML is that I don’t need to write any of the model code to train this. I can simply press the train button to train my model. This particular model took less than an hour to train. When training completes, I can take a look at metrics to evaluate the accuracy.
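The “10 images per label” rule of thumb from the demo is easy to check before uploading anything. The sketch below is a pre-flight validation you might run yourself — it is not AutoML’s actual import format, and the file names and labels are invented:

```python
from collections import Counter

def labels_needing_images(examples, min_per_label=10):
    """examples: list of (image_path, label) pairs.

    Returns a dict mapping each label that is still short of the
    minimum to how many more images it needs before training."""
    counts = Counter(label for _, label in examples)
    return {label: min_per_label - n
            for label, n in counts.items() if n < min_per_label}
```

A dataset with 10 maple images but only 4 oak images would come back as `{"oak": 6}`, telling you which labels to shore up before pressing the train button.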

Looking at the metrics, this is what is called a confusion matrix. It tells us the percentage of images for each label that our model was able to classify correctly. Ideally we’ll see a strong diagonal from the top left, since this means our model classified a high percentage of the images in our test set correctly. And we can click on any cell to see the specific images where our model got confused. Finally, the most important part of this process is generating predictions on images our model hasn’t seen before. Let’s try our model out by generating predictions on two images of leaves. As someone who’s not a plant expert, I’d probably classify both of these as leaves. Or, if we’re being very technical, green leaves. Let’s see what our model says. It turns out these belong to different species, and our model predicted both varieties correctly with over 96% confidence. Once we have a trained model, AutoML gives us access to a custom REST API endpoint to make predictions.
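The confusion-matrix reading described above — a strong diagonal means a high share of correct classifications — reduces to a small computation. The species names and counts below are invented for illustration; rows are true labels, columns are predicted labels, and the diagonal holds the correct counts:

```python
def confusion_stats(matrix, labels):
    """Overall accuracy plus per-label recall from a confusion matrix
    whose rows are true labels and columns are predictions."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(labels)))
    per_label = {
        labels[i]: matrix[i][i] / sum(matrix[i])  # recall per true label
        for i in range(len(labels))
    }
    return correct / total, per_label

labels = ["maple", "oak", "birch"]  # hypothetical leaf species
matrix = [
    [48, 1, 1],   # true maple: 48 correct, 2 confused
    [2, 45, 3],   # true oak
    [0, 2, 48],   # true birch
]
accuracy, per_label = confusion_stats(matrix, labels)
```

Clicking a cell in the UI corresponds to inspecting one off-diagonal entry, such as the 3 oak leaves this hypothetical model mistook for birch.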

If I add more images and train a new version of my model, I only need to change the model ID in my API request. I’m incredibly excited about AutoML because it gives anyone the tools to build custom models, regardless of their machine learning expertise. You might even say that AutoML Vision is turning over a new leaf in the way we build image classification models. And although I’ve shown plant species classification here, you could easily build a model in any other domain. We’ve seen our alpha users build AutoML Vision models for identifying damaged products before they are shipped, classifying catalog specific images, and identifying points in time, like stages of construction. But what about other kinds of information, like text documents or news articles? With AutoML Natural Language, it’s just as easy to build a custom model to make sense of written content.
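The versioning point above can be made concrete: only the model ID in the request path changes between versions. The URL shape below follows the general AutoML beta REST style, but treat it as an assumption rather than exact documentation, and the project and model IDs are made up:

```python
AUTOML_BASE = "https://automl.googleapis.com/v1beta1"

def predict_url(project, location, model_id):
    """Build the :predict endpoint path for a given AutoML model."""
    return (f"{AUTOML_BASE}/projects/{project}"
            f"/locations/{location}/models/{model_id}:predict")

# Retraining produces a new model ID; everything else in the request
# (project, location, payload shape) stays the same.
v1 = predict_url("my-project", "us-central1", "ICN123")
v2 = predict_url("my-project", "us-central1", "ICN456")
```

Swapping `ICN123` for `ICN456` is the entire client-side migration, which is what makes iterating on new model versions cheap.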

Take DonorsChoose.org, for example, a non-profit that receives hundreds of thousands of proposals each year for classroom projects in need of funding. They currently rely on a team of volunteers to manually screen each submission, so they published a dataset to Kaggle in the hopes that a community of machine learning experts could build them an automated solution. Let’s see if AutoML Natural Language can help. Similar to AutoML Vision, here I can look at all of the examples in my dataset and their corresponding labels. The text inputs to our model are need statements from the teachers, detailing the materials they need for their classroom. Each need request has an associated category to help direct the teacher’s request to a relevant donor.

The labels used to train the model have been assigned manually by volunteers. Once the model is trained, it’ll be able to predict the category of new requests automatically. To see how our model performs on new text that hasn’t been categorized yet, we’ll take the following need statement as an example: My students need aquarium lighting for our environmental clownfish aquaculture project. DonorsChoose needs to assign the category “Lab Equipment” to this request, which isn’t completely obvious from the text itself. Let’s try it out on our trained model. We can see that our model was able to correctly classify this as “Lab Equipment” with 98% confidence. Like AutoML Vision, I now have access to a custom model that I can use to generate predictions on new text with one API request. That’s really all there is to it. Without writing any model code, you can build custom machine learning models that can accurately recognize your own images, understand the contents of your documents, and even translate your jargon authentically.
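A screening workflow like the one described could consume these predictions by auto-assigning the category when the model is confident and keeping the human volunteers in the loop otherwise. This routing logic is a sketch of one plausible design, not part of the AutoML product; the 0.9 threshold and the scores are invented:

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for auto-assignment

def route(prediction):
    """prediction: dict mapping category -> model confidence score.

    Returns ("auto", category) when the top score clears the
    threshold, else ("human_review", category) so a volunteer can
    confirm the model's best guess."""
    category, score = max(prediction.items(), key=lambda kv: kv[1])
    if score >= CONFIDENCE_THRESHOLD:
        return ("auto", category)
    return ("human_review", category)
```

With the 98%-confidence “Lab Equipment” prediction from the demo, the request would be categorized automatically; a 60% prediction would instead land in the volunteer queue with the model’s guess attached.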

They’re ready to use immediately, accessible through Google Cloud, and they scale automatically to your needs. Thank you. (Applause.) >> FEI-FEI LI: Thanks, Sara! It’s amazing how quickly this field is advancing. But progress brings challenges as well, and more and more of you are asking for guidance in developing an enterprise AI strategy that’s as powerful as it is trustworthy. AI has the potential to empower us like no other technology. But that power demands responsibility. For example, algorithmic bias is a concern for everyone. So as AutoML brings machine learning to more developers than ever before, we’ve invested heavily in a set of best practices designed to help you build models that you can trust to treat your users with fairness. All over Google Cloud, we’re taking measures like this to ensure AI is making a positive impact — not just in our own products, but in everything our users build as well.

We call it Human-Centered AI. I’m pleased to announce this human-centered approach is driving an entire category of technology we call AI Solutions, intended to solve some of our partners’ biggest problems. Our first, of many to come, is focused on contact centers. Working in a contact center requires some of our most uniquely human skills, like navigating informal language, understanding context, and recognizing what someone needs from limited information. Unfortunately, call volume is often too much for a human workforce, so we worked with some of the top players in this space to understand how AI can make them more effective. We call it Contact Center AI.

To show us the difference this technology can make, I’d like to welcome Dan Leiva, VP of Customer Service Technology at eBay, and Merijn te Booij, CMO of Genesys — the leading provider of customer experience solutions. (Applause.) >> FEI-FEI LI: Dan, let me start with you. What are the most challenging issues you face when handling customer calls? >> DAN LEIVA: Thanks, Fei-Fei. I’d say it boils down to three fundamental problems. First is the infamous phone tree a caller has to navigate, usually by answering what feels like an endless series of questions just to get the call routed. Most of us want to talk to a human being — I’m sure everyone can relate to that. The second problem is keeping track of information, because here’s the irony: half the time, when we finally reach a live agent, they ask us to repeat the information we just gave the phone tree. And the third is finding answers — to address a caller’s issue, we have to put the caller on hold and search for that information.

>> FEI-FEI LI: I couldn’t agree more. Merijn, how has your team addressed these issues? >> MERIJN te BOOIJ: We wanted to create a new kind of customer engagement, and I will talk about a few things. First of all, we believe a normal, natural conversation should be the answer, and with Genesys it can be. We do that with an AI platform, and we use predictive routing, which will match the customer with the optimal agent based on the outcome we predict. We also bring AI to the agent desktop — we believe agents can be helped with AI, so we offer information in small prompts that make tasks easier to do, which counts especially for new agents. We believe on-hold events and transfer events will go down, and that will return much better customer satisfaction. >> FEI-FEI LI: That sounds great. Dan, how has it helped eBay? >> DAN LEIVA: Why don’t I show you. We’ve prepared a video of our prototype in action. It begins with Mala, who needs to return a pair of shoes. We then meet Josh — a real-life customer service agent who just joined eBay. Let’s see how Contact Center AI helps them out.

>> Hello, we delivered your size 6 running shoes on June 25th. Are you calling about this order? >> Exactly. >> How can I help you? >> Unfortunately they don’t fit, so I need to return them. >> I can help you with that. I am starting a return for you. You will be receiving an email with the details of your return. >> Cool, thanks. >> Mala conversed with a virtual agent, who is returning her shoes. Now she needs a better-fitting pair of shoes. The virtual agent will detect this and will connect Mala to a live agent. >> Would you like me to connect you to an agent to find the right shoes?

>> Yeah, that would be great. >> The system recognizes the intent and works to find the best agent to help. Genesys predicts that Josh is the best fashion expert to help Mala, and the call is routed to Josh along with the full text of the conversation with the virtual agent. >> Hello there, Mala, my name is Josh. I would be happy to find you — >> The transcript of Mala’s conversation with the virtual agent is visible, and the conversation with Josh is interpreted in real time. As Josh and Mala are speaking, Agent Assist interprets the audio and builds context on the meaning and the intent of the conversation. This allows Agent Assist to find relevant answers and articles to help Josh in real time. >> I definitely recommend tennis shoes, since they have the right support and grip to avoid injury. What type of court are you playing on? >> Primarily hard court. >> And is this your preference, or would you like to explore other brands? >> I like Pamarca shoes.

>> Agent Assist finds the right content and makes it available to Josh. >> I searched eBay for you for hard court tennis shoes, women’s size 6. Would you like me to text it to you, or would you like to receive it via email? >> Could you text it? >> Certainly. Thank you for choosing eBay. >> FEI-FEI LI: That was cool. (Applause.) I think we’re all excited about spending less time on hold. But it’s more than that. Contact Center AI elevates human talent. It makes simple tasks faster, while making human interactions more meaningful. Google has several technologies that help users take action or communicate over the phone, and while Contact Center AI and the recently announced Duplex share some underlying components, they have distinct technology stacks and aims overall. And Contact Center AI is compliant with our data privacy and governance policies, making it ready for enterprise use. We’re working to bring Contact Center AI to you with partners in the space like Genesys and Mitel, along with system integrators such as KPMG and Deloitte. We’re happy to announce that Contact Center AI is in alpha and is open for sign-ups today.

AutoML and Contact Center AI are examples of our passion for bringing applied AI to every industry, all while elevating the role of human talent. We’re creating technology that’s not just powerful, but trustworthy, with an ongoing dialogue between designers, engineers, legal experts, ethicists and more. And earlier this year, we demonstrated our commitment to responsible AI with a set of published principles that guide our decisions.

But above all, this is a partnership with you, our customers. It’s a shared effort to understand not just your challenges, but your users and their communities, to increase engagement and reinforce trust. It’s how we’re helping you harness AI to detect fraud more quickly, manufacture products with greater consistency, recommend content with more insight, and empower your employees to be faster and more creative with intelligent tools. Above all, it’s how we’re living up to a simple idea: rather than replacing human skills, AI’s greatest potential lies in enhancing humanity. Thank you. And now I’d like to welcome back to stage Diane Greene. (Applause.) >> DIANE GREENE: Thank you, Fei-Fei. All right. You know, AI is even transforming dolphin research. There’s a Georgetown University professor who has 35 years of dolphin fin photos; the unique ID for a dolphin is its fin, and her team has been tracking these dolphins painstakingly over the last 35 years, up and down the coast in Australia, where I’ve been out in the boat spotting them. A few months ago we applied AutoML to these images, and they can now do in a few minutes what used to take them months, tens of thousands of hours.

In fact, they can do things they never thought they could do before; they’re crowdsourcing dolphin fin photos now to track the dolphins. It’s just incredible, the opportunity we have in AI to better understand our world, and of course Cloud is an equally big opportunity. In closing out this morning’s keynote, I wanted to leave you with a few takeaways. I think everybody is going to move to the Cloud, and whether you’re reengineering your company, rethinking how you work, or simply taking advantage of our Cloud infrastructure or using our AI, just know that we’re here to partner with you, to help you disrupt in a nondisruptive way.

The theme of this conference is “Made Here Together.” Thank you everyone for being here, thank you to our customers and our partners, and thank you Googlers for all that we make together! Have a great week here at Google Next. Bye-bye! (Applause.)

Gmail, et cetera, that have taught us the value of cloud-native applications. Now, as we move to public cloud, we do believe that this is ultimately the end state that everyone should be striving for. When you build applications in this style, not only do you have the ability to make them portable, but as you containerize, you gain operational efficiencies: you can decouple applications from operating system upgrades, and you can help your developers be more efficient. Oftentimes you are breaking applications into microservices, so individual teams can work separately and upgrade on their own cadence rather than having to coordinate a large monolith. And in fact, these are some of the reasons why this model, by increasing your operational efficiency and helping your agility, drives a lot of the new directions in cloud and in software development.

But as much as we want to go in that direction, the reality is that the majority of applications today are down in what we would consider this lower-left quadrant. They’re sitting on-prem: large applications, large monoliths, not easy to upgrade, not easy to modify, sometimes difficult to maintain. And so the question is, what do we actually do to address these applications? Ultimately, we want to take these workloads and find a way to get to the upper right. Unfortunately, that’s just not so straightforward. One approach is lift and shift: taking existing applications and largely moving them as-is to the public cloud. Sometimes this ends up being the most pragmatic approach. And the reasons might be because, you know, the applications are coming from a third-party ISV. You may not have source code.

So it’s not easy for you to rewrite them or rearchitect them in a modern way. Or perhaps you actually do have the source code, but these are applications that were written by developers many, many years ago, most of whom may not even be at the company anymore. Or they’re a little bit brittle and fragile, so you’re worried about whether you can actually make substantial changes, because the application is working just fine today. In these cases, moving to public cloud can still bring you benefits. The question is how you make that migration smooth and easy, and later on in this talk I’ll describe a few ways we’ve been doing that here at Google.

A good example recently was the processor vulnerabilities, Spectre and Meltdown. These were the largest security vulnerabilities discovered in decades. We were able to keep your applications and workloads continually running without having to disrupt them. So lift and improve is a slightly lower-friction migration path that’s available to you to help you on your journey to cloud native. Alternately, there’s another direction, which is to improve first and then move. By this we mean looking at some of the applications that you have today and deciding to take that step and start to containerize them. So separate the application from the operating system, as mentioned in the keynote, and start to orchestrate these containers in the deployment of these applications. Most IT organizations have already been adopting containers for at least some of their applications and services. And once you’ve learned the technology and the deployment practices, it becomes much easier to take that next step to move to the cloud. Now, I’ll mention yet another journey here.

In addition to lift and shift, lift and improve, and improve and move, there is build and connect. This is a little bit more of an architecture style than a form of migration. And what I mean by architecture is that some of your complex applications and services may have many, many components: some that are going to stay on premises, some that will be running in the cloud. In this kind of an environment or architecture, you’d be looking at how to use things like APIs to connect services across your multiple locations. I’ll talk briefly about this later in the talk. But at this point, the question that comes through if you look at all these journeys is: do you want to move everything to the cloud at once, or do you want to take a more incremental approach? As we meet with customers, oftentimes what people look at is, you know, what are some of the easiest applications to move first? So pick some of your test/dev applications.

But also start planning an architecture that allows you to move your more complex workloads eventually. So break it up into waves and phases. Part of the effort here is also around understanding: first discovering what applications you have, then assessing whether these are good applications to move, then doing the planning. And then finally do the migration, and once you’ve migrated the workloads, optimize them. There are multiple sessions here at the conference where people go into more detail about the programs that are available through partners and through PSO to help you with this full migration process. But now that we’ve talked about some approaches at a per-application level, let’s take a step back and think about the holistic picture. In the keynote, they described the notion of hybrid. Ultimately, what you need to have is the infrastructure and framework to think about how you manage a diverse set of applications across a diverse set of locations.

And this is also a very critical part of your cloud journey. So what is hybrid? You know, I think if you asked the 500 people in the room, we might get, you know, 400 different answers of exactly what hybrid means. It has a lot of different interpretations. But for the purposes of this talk, what I want to focus on is just the recognition that you do have multiple locations and multiple applications and services that need to run in the location that suits you.

And again, put simply: do what you want where you want. What we found is that this approach resonates with customers. This is from a recent IDC survey of cloud users. They found that 87% of cloud users are hybrid; more than half of them have multiple kinds of cloud deployments in their organization; and a large fraction of them, 40%, are cloud first. So a lot of these trends are very likely to accelerate. Practically speaking, hybrid is a reality. It’s a state that exists now, it’s going to continue to increase, and it’s going to be here for years to come. So how do you think about what you want to do around hybrid? One way is to break it up into layers in an overall architecture: starting from the network, then the application layer, and then the management layer. So let’s look at each of these in turn.

So networking, obviously, is the underpinning, and in a second I’ll go into examples of how you might build your hybrid network. At the application layer, there is a lot of flexibility, a lot of technology, and a lot of opportunity leveraging the latest technologies like containers and serverless. As we heard in the keynotes this morning, the Cloud Services Platform and GKE are examples of how you build your applications and services and spread them across your hybrid infrastructure. And then on the highest layer, there’s management. So let’s go through a couple of these, starting with networking. With networking at Google Cloud, one important step in providing a hybrid infrastructure is figuring out how to connect your on-premises data center with Google Cloud. The virtual private cloud, or VPC, is a way to create an isolated network in GCP for your own private access. It allows you to make GCP look like your own network. But when you connect it to your own data center, then it becomes simply an extension of your on-premises footprint. And it’s more than just connecting a single data center to one GCP region.

A single VPC can actually span multiple regions across the globe without having to go onto the public Internet. So why is that important? Well, for several reasons. One, from that single on-premises data center, you’ve now been able to connect to all of the Google regions. You don’t have to set up a separate connection from your data center to each of the regions that you want to operate in. Secondly, it gives you much lower and more predictable latencies when you have to connect across these regions, especially for those large-scale global services that you run, because you don’t have to go out onto the public Internet. And finally, with a global VPC you can have a single policy that spans this network, and from a management standpoint, that simplifies your problem. So let’s say you wanted to take advantage of these technologies. What are some of the steps you would take? There are three different options to connect your data center to GCP.

The first is to use a more traditional VPN-like approach. With Cloud VPN, you connect your GCP project to an on-premises VPN gateway, and you can then encrypt data in transit. For some customers, your requirements might be more substantial, and so you could choose to go down the dedicated interconnect route. With Dedicated Interconnect, you bring your network and you meet it at one of our dedicated interconnect points of presence. We have more than 70 dedicated POPs worldwide and are continuing to add to them. With each of these interconnect points, you can get increments of 10 gigabits per second of bandwidth. But some customers, perhaps, don’t have such demanding networking needs, and that’s why we have a third option, called Partner Interconnect. This might be a case where your data centers aren’t sufficiently close to one of our points of presence. And so what you do is you work with a partner.

The partner has a peering location with GCP, and you bring your network to the partner and peer with them. This is a turnkey solution integrated into the console, and it gives you a more flexible set of networking options, from up to 10 gigabits per second all the way down to 50 megabits per second. So that provides the basic infrastructure that you need for a hybrid architecture. Moving up the stack to the application layer, you know, there are many other talks at the conference where people will go into detail.
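The three connectivity options just described map onto a simple decision rule. This is a hedged sketch only: the thresholds below follow the bandwidth figures mentioned in the talk, but real sizing also depends on latency, redundancy, and cost requirements.

```python
def choose_connectivity(bandwidth_mbps, near_google_pop):
    """Pick one of the three hybrid-connectivity options described above.

    Illustrative rule of thumb, not official sizing guidance:
    - Dedicated Interconnect: 10 Gbps increments, requires being near a POP.
    - Partner Interconnect: 50 Mbps up to 10 Gbps, via a partner's peering.
    - Cloud VPN: encrypted tunnels over the public internet for modest needs.
    """
    if bandwidth_mbps >= 10_000:
        if near_google_pop:
            return "Dedicated Interconnect"   # meet Google at one of 70+ POPs
        return "Partner Interconnect"         # reach a POP through a partner
    if bandwidth_mbps >= 50:
        return "Partner Interconnect"
    return "Cloud VPN"
```

For example, a 10 Gbps requirement with a nearby point of presence would select Dedicated Interconnect, while a 200 Mbps requirement far from any POP would go through a partner.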

We could spend hours talking about what you would want to build for hybrid applications, so here I just want to highlight a few of the basic principles, namely portability, abstraction, and automation. The idea is that you want a set of designs that allow services and components to interoperate in a meaningful way, so standardizing on open-source components and open APIs is a key part of it. And once you do this, you want to be able to automate and manage at scale. And then finally, moving up to the management layer: here you could choose a variety of approaches depending on what you may have standardized in your environment and what you may decide you want to change. Basically, it’s kind of your appetite for change.

So at the management layer, one of the most critical pieces of your stack is monitoring. You need ongoing monitoring of the health of your applications and services, and Stackdriver provides a suite of tools that give you that operational awareness. There’s a set of tools for logging and monitoring the services that are running inside of, say, virtual machines or in containers.

And it doesn’t matter where they’re running. They could be on GCP. They could be running on-premises. They could be running elsewhere. This is one way to unify the management of your workloads. A second path is, as was announced this morning with GKE On-Prem, we now have the ability to have a single control experience that manages across the cloud and on premises. So no matter where you run them, you can have a consistent experience. And the third one is maybe aimed at some of you who have a VMware environment. You might have lots of tools and practices built up around your VMware infrastructure. We’ve been partnering with VMware to build a plug-in to fit into that set of tools. So last week we announced that we’ve been working together on a plug-in for vRealize Automation and vRealize Orchestrator. For those of you that aren’t familiar with this environment, essentially what some customers have done is created a portal, or sort of their own private cloud.

And they create blueprints. This is one of the more common use cases for these tools, where you’ve identified that an end user wants to create an application; you click on the blueprint, and it automatically deploys resources. For those people using vRO or vRA, the natural extension is to be able to back those resources with resources in GCP. And so this plug-in allows you to get some of the benefits of cloud while continuing to use your existing management infrastructure. So now you can use a blueprint, the blueprint creates virtual machines in Google Compute Engine, and those workloads can take advantage of things like lower-cost compute, because you have the various forms of discounts like sustained use discounts and the custom machine types. This environment is now running and using resources in the cloud even though you’re using your existing on-premises management infrastructure. This plug-in is in EAP right now. If you’re interested in taking a look, there’s a sign-up for you to take part. Folks, you can take a picture.

All right. So we’ve talked about various journeys that you can take to get to the cloud. We’ve talked about a hybrid architecture that you might want to think about when you’re coming to the cloud. But you still have to figure out how you can move those actual workloads and that data as quickly as possible. So if you’re going down this path, what are some of the biggest challenges you’re going to run into? A recent survey showed that most migration projects unfortunately aren’t super successful: about 94% of them run into delays or go over budget. We’ve even heard of cases where people have moved workloads to the cloud only to move them back because of something they didn’t realize, something they didn’t plan for properly.

Why is this hard? Well, a lot of organizations have not just hundreds but maybe thousands of workloads and applications that they need to move. So how do you handle the scale? How do you figure out which applications to move and what the dependencies are? It’s pretty daunting. Not to mention the fact that, organizationally, you’ll have different owners of these applications, and you may need to get their approval in the migration process. Another area of complexity is that workloads come from different sources. Many of them could be running in virtual machines on VMware. They could be running on physical servers. Some of them could be in another cloud, and you want to move them to GCP.

Of course, there’s also complexity. Your most mission-critical workloads, say an SAP system or an Oracle database, are business critical. You do want to take advantage of moving them to the cloud, but the problem is you can’t take any business risk, and you don’t want to take any downtime, or at least you want to keep that downtime to an absolute minimum. Those concerns lead into this question of risk: how can you actually prepare yourself and test, and is there a way to make sure that workloads succeed when you move them? And in the worst case, if things don’t go well, can you roll back?

Fortunately, we came across a partner we’d been working with for a while, Velostrata, and we recently announced an acquisition that closed about a month ago. They were already a partner helping with migrations to Google Cloud, but we found it’s a very complementary team, technology, and product. What they enable us to do is provide an enterprise-grade, purpose-built solution for accelerating the migration of workloads to GCP. Not only can they move workloads from on-premises to Google Cloud, but also from AWS to GCP. So this technology and this product is a real enabler for reducing the friction involved in these complex migrations. One of the key capabilities is that it allows you to move your workloads very quickly, oftentimes with less than ten minutes of downtime. And that’s pretty amazing if you think about how you want to reduce the impact and friction of a migration: if you can reduce that downtime, that’s a critical advantage. In a typical migration, what happens is you have a bunch of data associated with your VMs, and you need to copy that up to the cloud before you can actually migrate.

And if it’s a large VM, let’s say a terabyte, that’s going to take some time. That could take hours. Now multiply that by hundreds or thousands, and you can see how long this process can take. The key insight from the team was to look at how you decouple compute from storage. The idea is that you can actually stream the storage.

So essentially what happens is you take a short downtime period at the beginning, and you start bringing over the most critical pieces of the storage. There’s a lot of intelligence and optimization in this technology that allows you to start that process. And then once you’ve moved those critical pieces, you can start running your workload in the cloud while the rest of the data gets migrated in the background. You know, they say a picture is worth 1,000 words, so let’s take a look at this graph. This is from a customer example, showing time on the X-axis; the Y-axis shows both the number of VMs that have been moved to the cloud and, in green, the amount of data that’s been migrated. This customer chose to do two-week sprints for the migration, and you can see that there’s a step function. In the first week, if you zoom into one of these sprints, they did some discovery and planning: picking, say, 75 VMs that they wanted to migrate.
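The downtime math behind this streaming approach can be sketched with back-of-the-envelope arithmetic. The link speed and the size of the "critical working set" below are illustrative assumptions, not Velostrata specifics; the point is only the order-of-magnitude gap between copying everything first and booting early while data streams in the background.

```python
def full_copy_downtime(total_gb, link_gbps):
    """Downtime (hours) if the VM stays off until ALL data is copied."""
    seconds = total_gb * 8 / link_gbps   # GB -> gigabits, divided by Gbps
    return seconds / 3600

def streaming_downtime(critical_gb, link_gbps):
    """Downtime (hours) if the VM boots once only the critical working set
    is across; the rest streams in the background while it runs."""
    seconds = critical_gb * 8 / link_gbps
    return seconds / 3600

# A 1 TB VM over a 1 Gbps link: hours offline versus under a minute.
naive = full_copy_downtime(1000, 1.0)    # ~2.2 hours of downtime
streamed = streaming_downtime(5, 1.0)    # ~40 seconds, assuming a 5 GB working set
```

This mirrors the customer graph described next: VMs were up and running in hours while the bulk of the data took about three days to finish streaming.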

So they figured out which ones they were, planned internally how they would move them, and did the actual scheduling of the migration. And then in the second week, they did the actual migration. Now, what you can see in this blown-up zoom is that most of the VMs were moved and running in the cloud in hours. Meanwhile, the data was streaming in the background; in fact, it took about three days for all the data to get there. But in this time, the customer was able to start running these applications again and not impact their business.

And when you think about the amount of time you have to do migrations, this means you can easily do a migration of a large number of workloads over a weekend, rather than looking at many, many days of moving a lot of data. So why is this important? Well, one, you’re minimizing downtime, and you’re taking advantage of other capabilities in the toolset to validate and increase your confidence that your workloads are successful. As I mentioned earlier, oftentimes it’s less than ten minutes and you’re up and running. You save a lot of manpower. You accelerate the process of migration. And this greatly increases the likelihood that a migration project will be successful, because you can stay under budget. But earlier I mentioned one of the challenges, which was around risk and complexity.

What this technology also allows you to do is roll back. Unfortunately, there’s a conflicting deep-dive session going on right now about the Velostrata technology; hopefully you can see the video or look at some of the collateral online. While we’re streaming the data to the cloud, there’s still data for that virtual machine on premises. And if you find for some reason that maybe you forgot to move a workload that it depends on, or you have some other challenge, you’ll always have the ability to move back. And that safety net, that insurance policy, is very valuable when it comes to doing these large-scale migrations. Ultimately, by getting these migrations to happen, you can ramp up your cloud usage much more quickly. Now, in addition to migrating your virtual machines, you may also have a lot of data that needs to be migrated. The Transfer Appliance is one solution that’s available to help if you have, say, hundreds of terabytes of data that you need to move to the cloud in addition to your virtual machines.

You just can’t afford to move all that data over the network. This is not necessarily so much for the virtual machines; it could be because you have some data warehouse or other large amounts of data that you need to be able to use while you’re in the cloud. The Transfer Appliance comes to you in two form factors, with 100-terabyte and 480-terabyte versions; the larger one can handle about a petabyte. Basically, you request a transfer appliance, you get it, you rack it, you fill it up with data, and then you ship it back to us. It’s a solution where the appliance can sit in your data center for a while, and then you bring the data back to Google Cloud.
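The appeal of shipping an appliance follows from simple bandwidth arithmetic. A rough sketch, with an assumed 1 Gbps link at 80% sustained utilization (both figures are assumptions for illustration):

```python
def network_transfer_days(data_tb, link_gbps, utilization=0.8):
    """Days to push a dataset over the wire at a sustained utilization."""
    bits = data_tb * 1e12 * 8                       # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

# Filling one large 480 TB appliance worth of data over a 1 Gbps link:
days = network_transfer_days(480, 1.0)   # roughly 55 days of continuous transfer
```

Nearly two months of saturating a corporate link versus racking a box, filling it locally, and shipping it: that is the trade the Transfer Appliance is making.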

So now that we’ve talked a little bit about the migration challenges and some of the solutions that are available, I wanted to bring up one of our customers, Jon Latshaw from Cardinal Health, to talk about some of the work they’ve been doing with Google. Jon? >> Jon: Thanks for having me. >> Jack: Can you tell us about what you do? >> Jon: We try to make healthcare more efficient and cost effective. We’re a Fortune 15 company, about 700 sites and a lot of applications, not to be too specific. One other point: if any of you have had the misfortune of being in a hospital or needing to get a prescription for some sort of illness, you’ve probably used our services, just going to a CVS or a hospital.

>> Jack: One of the things when I was visiting you guys recently was hearing the stories about the on-demand nature of the business you’re in. Like someone ordering something today for a patient that has a procedure tomorrow, and, you know, critical services that you have to deliver to make sure that happens. >> Jon: It’s hard to plan emergencies, yeah. >> Jack: So can you talk a little bit about what prompted Cardinal’s migration to the cloud? >> Jon: Yeah, in a nutshell, it’s speed and efficiency. Our business is hard to predict. You wouldn’t think that if you look at healthcare, but a large component of our business is pharmaceutical distribution: where drugs need to be, where generics might launch. A lot of that requires us to turn on a dime, and with traditional infrastructure, that’s just hard to do. Whereas conversely with GCP, I can spin up, spin down, all at a moment’s notice. And then although we have $130 billion in revenue, we actually run on grocery store margins.

So every penny counts. And we found that running in the cloud, if you do it the right way, can be considerably more cost effective than doing it ourselves. >> Jack: So with that in mind, can you talk about some of the best practices or perhaps lessons learned that you guys have experienced? >> Jon: Yeah. To go back a little bit in your presentation: if you haven’t heard of Velostrata, check it out. It’s the real deal. We were one of their first customers. It makes a tremendous difference if you’re doing lift and shift, and there’s a lot of that we’re doing. It really works as advertised. It sounded a lot like magic to me when they first pitched it, but they’ve proven to be a great partner, and it works.

So on that, you know, we’re about a third of the way through, and we’ve learned a lot of things. One thing I would say is that the technology is not the hardest part of this. So while it is challenging in some aspects, don’t worry about that so much. What I would say is get your house in order while you’re contemplating the move.

Before you move production, make sure that your CMDB is accurate and up to date. We had a very complete one; we could go through and see that all the fields were filled out. But when we wanted to gain intelligence about our systems, like which systems were part of which business process, or which applications needed to talk to which applications on which ports, it wasn’t reliable, because we had never had to use it like that. So spend some time getting that stuff in order, and be sure you can answer the right questions about what needs to talk to what.
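Jon’s point about knowing "what needs to talk to what" can be made concrete. Given app-to-app dependency records from a CMDB, grouping connected components is one simple way to find sets of applications that should migrate in the same wave, since tightly coupled apps are painful to split across environments. The records below are hypothetical, and real wave planning weighs many more factors.

```python
from collections import defaultdict

def migration_waves(dependencies):
    """Group applications into connected components: apps that talk to each
    other, directly or transitively, land in the same migration wave."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)          # treat "talks to" as undirected coupling
    seen, waves = set(), []
    for app in graph:
        if app in seen:
            continue
        wave, stack = set(), [app]
        while stack:              # depth-first walk of one component
            node = stack.pop()
            if node in wave:
                continue
            wave.add(node)
            stack.extend(graph[node] - wave)
        seen |= wave
        waves.append(wave)
    return waves

# Hypothetical CMDB edges: (app, talks_to)
deps = [("web", "api"), ("api", "db"), ("reports", "warehouse")]
waves = migration_waves(deps)     # two waves: {web, api, db} and {reports, warehouse}
```

A CMDB that can produce edges like `deps` reliably is exactly the "house in order" prerequisite Jon describes.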

I would say be friends with your finance people. This change is going to have a ripple effect of substantial changes through the finance team. They need to be prepared for, you know, direct chargebacks. You don’t have to do direct chargebacks; we chose to, because it drives the right behavior. But it is a massive adjustment for them. Lastly, partner up with your CISO; maybe buy a bottle of their favorite beverage, repeatedly. Security folks by nature are very cautious. Google has an enormous emphasis on security, but it’s not only their responsibility.

It’s a joint responsibility. And we’ve spent a lot of time on bringing the security folks along, getting our CISO on board, getting an endorsement. So that’s kind of it in a nutshell. If I had it to do over again, I would spend more time up front on those things and less time worrying about the technology. >> Jack: Great. So can you also talk a little bit about your experience with Google Cloud specifically? What has stood out in that partnership? >> Jon: A couple things. One, they’ve been a great partner. Their product teams have been very curious, and we’ve gotten access to that. For those that are just dipping a toe in the water: if you’ve had experience with on-premise companies like, I don’t know, a database company that begins with an “O” or something like that, and you say, hey, it would be great to have this feature in your next release, maybe in a year or two years it might come out.

We’ve had the experience of working with their product teams to say, hey, on your API gateway, we would like this. And they’re, like, okay, cool. And literally six weeks later, we have that feature. So they’ve been extremely responsive, extremely curious about what it takes for us to move to the cloud. And it’s not an unselfish investment: we’re a lot like every other Fortune 500, we have the widest variety of systems you can imagine, and if Google invests in making us successful, that just paves the way for everyone else. I found their infrastructure and their network to be fast and reliable, more so than other cloud providers we’ve looked at. Live migration is a big feature for us. We did some things internally that are kind of unnatural acts, but it was to save on licensing fees for, like, some WebSphere products, where we would create a VM that had two CPUs but 48 gig of RAM.

We did that because we wanted to save on licensing. So in some cases we’re going to unwind that and do it the right way in the cloud. In some cases we can’t. And so their custom machine sizes give us that flexibility, instead of the T-shirt sizing that other cloud providers have. >> Jack: Great. Well, thank you so much, Jon. >> Jon: Thank you. >> Jack: Appreciate it. It’s great to have the opportunity to work with fantastic customers like Jon and his team, and we’re looking forward to collaborating further on their continued journey in the cloud. So before I wrap up here, I want to just highlight a number of sessions. There are probably many other ones related to the topics we’ve talked about where we can go a little bit deeper, talk about customers’ experiences moving to GCP, or look at other technologies and products that have done migrations to the cloud. Here are a few sessions. And as I mentioned earlier, unfortunately that 112 session is going on right now. So I just want to leave you all with a few important takeaways.

I think, you know, what I try to do is paint the journeys that you might want to take. And it’s really important to recognize that this modernization of IT matters. There’s a great opportunity here to improve efficiencies and help you accelerate your business, rather than having IT infrastructure just be about cost. But ultimately, Google is here to meet you where you are. We have a vision for where we want to take everyone.

But along the way, everyone might have a slightly different journey. Second is hybrid is just the reality. It is going to require rethinking a little bit about your architecture from a network to a management to an application standpoint. But it’s important to have a vision for where you want to go because this is a reality that’s here for many, many years ahead. And then finally, the journey is long. It’s complex. But we’re here to support you. Whether it’s through PSO, through the many partners we have here at the conference, from the engineering teams, from the customer teams.

We’re here to help you be successful. There are a lot of resources and tools and guides. And so please, you know, reach out if you need help with your migration. And thank you. >> Please welcome Diane Greene. (Applause.) >> DIANE GREENE: Hey, everybody, thank you. So you are in for a treat with these two fireside chats. Google is doing a lot in health; Google-wide, we want to help everybody’s health in the world.

So we’ve got two firesides. The first is led by Dr. Greg Moore, who joined Google Cloud as head of healthcare about two years ago. I was told about him by our machine learning researchers; he was head of innovation and a radiologist at Geisinger, an M.D. and Ph.D., and he’s now with us. I am going to let him introduce his esteemed panel. And when I mentioned NIH this morning, Twitter lit up, because doctors and researchers are so happy to have their data accessible in the cloud, so it’s a great day for health. Let me welcome Dr. Greg Moore. (Applause.) >> DR. GREG MOORE: Welcome everyone. We are excited to have you here to discuss healthcare. It’s truly an amazing time to be involved in healthcare technology. Over the last decade, we have witnessed significant changes in medicine, and perhaps most relevant to this audience is the digitization of health data. And yet today, we stand at a significant juncture. We have created and will increasingly create vast amounts of digital health data, yet much of that data remains underutilized and siloed in the organizations that hold it.

Challenges of interoperability, limited access to computing resources at scale, and the critical need for security and privacy protections, have often made it difficult for healthcare systems and life sciences companies to translate these rich data sets into meaningful improvements. Additionally, the creation and collection of this data has put significant strains on many individuals within the healthcare system, including patients and their providers. Yet despite these challenges, I see a bright future. By granting providers and biomedical researchers access to the better flow of data via Cloud, we’re working to inspire new discoveries with AI and ML, which we expect will lead to insights to improve patient outcomes.

I see a world where Cloud and new AI-supported clinical insights and workflows will help providers spend more time engaging with their patients and less time facing the computer. I see a world where Cloud enables us to turn petabytes of healthcare, genomic, imaging, clinical, and claims data into breakthroughs, better care, and streamlined operations. Unlocking the power of this health data is not easy and requires deep collaboration and partnership with each of the stakeholders including patients, and requires careful data stewardship and protection of patient privacy. This journey holds great promise for positively transforming the way that healthcare is delivered and enabling access for care, discovery and insight. As we are successful, I believe these changes will lead to improved outcomes, increased patient and provider satisfaction, lower costs, accelerated biomedical research discovery, and ultimately better health for everyone on the planet. Today, I am fortunate to be joined by two luminaries in healthcare and biomedical research.

Andrea Norris is the NIH Chief Information Officer and Director of the Center for Information Technology. As the NIH CIO, Ms. Norris oversees NIH’s $1 billion IT portfolio, which supports scientific research and discovery. In addition, as the Director of NIH’s Center for Information Technology, she manages a wide range of NIH-wide information and information technology services, including a state-of-the-art, high-speed research network; the Biowulf high-performance scientific computing system; cloud-based collaboration and communication platforms and tools; bioinformatics research programs; business solutions and applications; and not to forget, the NIH Data Center and the 24×7 operations of NIH’s distributed IT environment. My second guest, Dr. Toby Cosgrove is the former CEO and President of Cleveland Clinic. He went to the University of Virginia School of Medicine and received a Bronze Star in the U.S. Air Force in Vietnam. As a cardiac surgeon, he performed more than 22,000 operations and holds 30 patents for medical innovations.

We are additionally excited to announce today that Dr. Cosgrove is joining the Google Cloud healthcare team as an executive advisor. (Applause.) Now please join me in welcoming Andrea and Toby. (Applause.) >> DR. GREG MOORE: Thanks so much for joining us here today. I would like to ask you a few questions for the audience. Andrea, what do you think are the largest drivers of change in healthcare and biomedical research today? >> ANDREA NORRIS: Greg, as you know, the National Institutes of Health is the largest funder of biomedical research in the world. About 80% of NIH’s roughly $35 billion a year supports scientific research by 300,000 researchers located all over the US and, in some cases, around the world.

Today, we stand at a unique moment of opportunity for biomedical research. We are now able to harness the power of rapid advances in technology to accelerate discovery of new drugs, new therapeutic treatments, and cures in ways we could not have imagined. We are generating vast amounts of biomedical research data, and it is doubling every 7 to 10 months! There is exponential growth in genomic sequencing data; volumes of high-value data from electronic health records; mobile health technologies like personal EKGs, blood sugar devices, and even my own Fitbit; medical imaging data; and even behavioral data about the combination of experiences throughout our lives, based on where we were born, live, learn, work, and play. This “data deluge” has led to many new “big data” research programs.

There’s the Cancer Moonshot program, an ambitious effort to accelerate how we detect, how we treat, and how we cure hundreds of different types of cancers. There’s our BRAIN Initiative, to revolutionize our understanding of the human brain and how it works, enabling us to better diagnose and treat diseases such as depression, Parkinson’s, and Alzheimer’s. And there’s the NIH Data Commons, a group of innovative projects testing new tools and methods for working with and sharing data in the cloud, enabling researchers to experiment with rich data sets across many data domains and scientific disciplines in ways we couldn’t before. These are just three exciting programs we have underway where we are seeing tremendous opportunities for discovery. But there are challenges. Most research data is siloed in single computers or servers, not integrated or interconnected. We need to make research data FAIR: findable, accessible, interoperable, and reusable. And we need to be able to sustain high-value research data sets while protecting an individual’s health information or other sensitive data.

We take that responsibility very seriously. We’re also seeing some exciting opportunities in the areas of artificial intelligence and machine learning. In fact, NIH sponsored a large workshop on these topics just yesterday, to explore how we can apply these capabilities to accelerate medical advances. How can we do a better job in recommending patient treatment options, and how can we do a better job in predicting the outcomes of those treatments? Today it takes about $1 billion and up to ten years to get a new drug out for use.

How can we accelerate computer-aided diagnosis using MRI and X-ray images or other diagnostic medical procedures? So we are at an exciting time, and there’s a lot more still to come. We’re just at the beginning. >> DR. GREG MOORE: A truly exciting time, and an amazing amount of work going on at NIH. Thank you for that summary. Toby, what do you think are the largest drivers of change in healthcare and biomedical research today? >> DR.

TOBY COSGROVE: Greg, we have a data deluge going on now. Across the United States we are digitizing exams and physical findings, and there is as much data in one mammogram as there is in the entire New York City phone book. Huge amounts of data: there are 3 billion base pairs in every human genome, and we have a tremendous input of data from scholarly works. There are now 5,600 journals putting out over 800,000 articles a year, more than anyone can keep up with. And when you stop and think about it, the explosion of data can really only be appreciated in retrospect.

100 years ago, the total amount of knowledge in healthcare doubled every 150 years. By 2020, the total amount of knowledge in healthcare will be doubling every 73 days. Stunning. We have to be able to categorize that data, store it, access it, and interpret it. Big challenges going forward. The second thing we are dealing with is the explosion in the cost of healthcare. Right now healthcare’s share of GDP is 18%, and the concern is that it’s going to go up and limit the other things we can do across the country, like education. What’s concerning is that we may be seeing even more pressure on healthcare costs as we go forward.

Two things: first, the silver tsunami of people who are aging. Right now there are 10,000 people a day turning 65, and life expectancy is approaching 80 years of age, so there is a tremendous number of older people. The second thing we have to deal with is that we can do more and more for people as we go forward. Just think: 20 years ago we didn’t have the great joint replacements, the cardiac surgery, the transplantations, and the cancer cures we have for people now. So we have two things going on: an explosion of data we have to learn to deal with, which can help us, and secondly, the cost. We’re going to require both technology and the ability to manage that data for us to get better care and control the cost.

>> DR. GREG MOORE: Certainly many challenges, but opportunities ahead, as you pointed out, with this explosion of data and explosion of knowledge. It’s our responsibility to figure out how to use all of that for better patient outcomes. Andrea, what do you think these changes mean for the individual citizen? >> ANDREA NORRIS: At NIH, we are working to better understand disease and match the best care for you based on your unique health identity: your health characteristics, your life experiences, and your genetic profile. Our All of Us program, launched in May, is among the most ambitious research efforts our nation has ever undertaken; we aim to collect a massive amount of individual health data from 1 million participants across the US. It represents the hope for all of us to come together to help change the future of healthcare. Our goal is to uncover paths toward delivering precision medicine: individualized prevention, treatment, and care for all of us. Participation is open to everyone. We want to reflect the rich diversity of our country. Participants will be partners who are willing to share their biology, lifestyle, and environment data to help research. You will be a true partner, with ongoing opportunities to help shape the program with your input.

You will have access to study information and data about yourself, with choices about how much or how little you want to receive. And rest assured, All of Us is employing state-of-the-art security technologies and following strict security protocols and processes to protect participant data and ensure it is used ethically. The All of Us Data and Research Center is powered by Google Cloud and supported by Verily Life Sciences, together with Vanderbilt University and the Broad Institute, to allow “anywhere and everywhere” researcher access, including citizen scientists. Here are some key questions we hope to help answer. How can we prevent the chronic pain that affects more than 100 million people across the US each year, or develop better pain medicines that are not addictive? How can we slow down or stop different kinds of dementia? Did you know that every 66 seconds, someone in the U.S.

is diagnosed with Alzheimer’s? More than 5 million individuals live with Alzheimer’s, my mother included. How can we develop treatments for diabetes, which affects almost 10% of all Americans, or prevent diabetes altogether? How can we develop more cancer cures that will work the first time, so we can skip painful trial-and-error chemotherapy? I encourage you to sign up: learn about your own health, including personalized risk factors and studies that lead to new understanding and treatments, and help fight disease and improve health for you, your family and friends, and future generations. It will take All of Us to be successful! >> DR. GREG MOORE: Truly important work, Andrea. We are delighted to host NIH together in the cloud.

Toby, same question for you, what do these changes mean for the individual patient? >> DR. TOBY COSGROVE: Great question and it’s exciting. Everything is going to change. What diseases we are treating is going to change. We’re now seeing more chronic disease, acute disease is going away and 85% of our costs in healthcare are now for chronic disease so there is going to be a big push on keeping people well. The second thing that’s going to change is how we treat them and it’s going to be much more personalized medicine as the human genome becomes a regular part of our therapy for patients.

Where are we going to treat them? It’s going to be different. You’re not going to treat everything in the hospital anymore; care will happen at home, as outpatients, and a lot of it is going to be done by virtual visits. In fact, some hospital systems are beginning to see only half of their doctor visits done in person. Big change there. Then you have to ask who is going to treat them. Well, we’ve got a shortage of about 100,000 doctors across the United States, so you are going to see more and more nurses and physician assistants stepping in to help, and that’s one of the ways technology can really support them.

The other question is, who is going to pay for it? We now see that high deductibles and co-pays are part of what’s going on, and so people are now interested in how much things cost and what the results are going to be, so they’re pushing more and more toward value in healthcare. That’s where information is going to be ubiquitous: people are going to deal with it everywhere, they’re going to measure value and use it where they want to use it, and I think this is a tremendous change for healthcare providers and for the entire healthcare system in the United States, and in fact around the world.

>> DR. GREG MOORE: Certainly data is going to be of key importance for healthcare systems to really survive in the age of value-based care: knowing that data and acting on it. >> DR. TOBY COSGROVE: You’re absolutely right, and this is the key to driving quality and taking costs out. >> DR. GREG MOORE: Indeed. Toby, thinking about your four decades of experience at the Cleveland Clinic, how has technology impacted the practice of medicine in your experience? >> DR. TOBY COSGROVE: It’s affected almost everything. I remember back when I did my first paper. What we did is we had spreadsheets on yellow pieces of paper, and we used to go in on Sunday afternoons and make telephone calls to find out how people were doing. Things have changed. Let’s take heart surgery. Everybody has heard the expression “cracking the chest,” and that was an incision this long.

Then it went to minimally invasive, which was about a two-and-a-half-inch incision, then to robotics, with a 3-millimeter incision, and now, with a catheter, no incision at all. Exciting change there. Let’s take a stroke. People used to have a stroke and there was nothing you could do for them; you would count on rehabilitation. Now we have clot-busting drugs and we have mobile stroke units, so we’re beginning to take the care to the patient. When somebody has a symptom that sounds like a stroke, you dispatch an ambulance, they can do the CAT scan in the driveway, and through telemedicine someone can read it and treat the patient right there, saving millions of brain cells.

Now, what’s exciting and on the verge of reaching patients is that we are starting to place pacemakers in people’s brains in the area of infarction, and we’re seeing rapid rehabilitation, with people gaining their activities back closer to normal. Finally, let’s look at cancer. Cancer used to be something where you gave the chemotherapeutic agents and saw if they worked, and if they didn’t work you tried something else. It was trial and error.

Now we are sequencing the patient, we’re sequencing the tumor, and we’re directing the chemotherapeutic agent to the particular genome type of cancer and repeatedly doing it over a treatment period. That does a number of things. It decreases the morbidity of it, it improves the treatment, it improves the mortality rates and clearly it’s much more efficient. So the changes that have gone on have been built around new technology and new knowledge.

I am so excited about what we’re seeing going forward, because now we have the opportunity to understand all of the things that go on across the body in much more detail and treat them much more efficiently and much less invasively. It’s a great time for medicine. >> DR. GREG MOORE: We appreciate you helping to guide us here at Google Cloud. Andrea, partnerships will continue to be key for transformation in healthcare and biomedical research. Can you share your thoughts on how NIH is approaching those partnerships? >> ANDREA NORRIS: At NIH, we can only achieve what we do through partnerships with academia and others. And I would like to recognize what Google Cloud has provided over the years.

It was a few short months ago that we talked about combining our resources through the use of state-of-the-art platforms, software, and tools. We talked about how important it is to stay focused on our missions, and we acknowledged that NIH’s mission, to pursue and apply fundamental knowledge about living systems to extend healthy life and reduce disease, and Google’s mission, to organize the world’s information and make it universally accessible and useful, are clearly synergistic. We have learned that Google Cloud can overcome the data storage issues we have and offers rich tools for data preparation and analysis. In line with NIH’s first-ever strategic plan for data science, today NIH launched a new initiative to harness the power of commercial cloud computing and provide NIH researchers access to the most advanced computational infrastructure, tools, and services.

We’re calling it the STRIDES Initiative. It stands for “Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability.” It’s a mouthful. As Diane Greene announced this morning, we’re delighted that Google Cloud is our first industry partner. This innovative partnership will allow hundreds of researchers, those working at the NIH and at more than 2,500 academic institutions, to make use of Google Cloud’s technologies and services to advance health and reduce the burden of disease in a more cost-effective and, equally important, more sustainable framework. Our initial efforts are going to focus on making NIH-funded, high-value biomedical data sets accessible through commercial cloud platforms.

These would be available to the research communities to find, access, reuse, and share, and we will do this while ensuring the safeguards that are needed to secure and protect the privacy of your personal health information or other sensitive data. We will also be partnering with Google Cloud to take advantage of innovations such as machine learning and artificial intelligence, and experimenting with new ways to optimize technology-intensive research, hoping to capitalize on some of the opportunities I discussed earlier. In addition, we plan to establish training programs for researchers on how to use the Google Cloud platform for health research. So why are we excited about this? Put simply, partners like Google Cloud will provide the state-of-the-art platforms and tools we need to support our new computationally intensive, data-rich biomedical research programs. That means our researchers can focus their efforts on what they do best: applying the best scientific expertise, methods, and tools to make new, critically needed discoveries and breakthroughs in health. We hope our STRIDES Initiative will begin to shift today’s research paradigm to a richer, more collaborative model.

Greg, together we can take advantage of this great opportunity to implement an open, interconnected, and sustainable ecosystem for collaboration and discovery, one where we can bring the best scientific and technical insight to some of our most challenging health problems. And in this way, with help from Google Cloud, we believe we can begin making great STRIDES to accelerate discovery and improve the lives of you, your family, and friends, right now and for generations to come. >> DR. GREG MOORE: Thank you, Andrea. We couldn’t be more excited at Google Cloud to partner with NIH in this truly important work. With that, I’m going to thank my guests, and we have another fantastic panel to follow. Thank you. >> DIANE GREENE: Thank you, Greg, Andrea, and Toby. Our next panel shifts over to scientists and technologists, and heading up that panel, and introducing everybody, is Dr. Eric Schmidt, the computer science kind of doctor. Let me welcome to the stage Eric Schmidt. >> ERIC SCHMIDT: Hi, Diane, nice to see you. It’s my pleasure to introduce two very close friends who you will recognize by virtue of what they have done and the impact that they have had: Eric Lander, who is one of the top scientists in America today, a leader of the Human Genome Project, and who served with me and others as co-chair of the President’s Council of Advisors on Science and Technology under President Barack Obama.

And Jeff Dean, who heads AI research at Google; you know him because you use his products every day: Spanner, MapReduce, and others. Please welcome Jeff Dean and Eric Lander. (Applause.) >> ERIC SCHMIDT: There are so many things to talk about, and with this audience I want to talk about not just cloud but science and the things you have done. Let’s start, Eric, with you. Your background is first and foremost as a mathematician. >> ERIC LANDER: Yes. >> ERIC SCHMIDT: And now you are one of the most successful biologists ever.

What happened in the data world in biology as you grew up in it? >> ERIC LANDER: The data world in biology? It’s easy to describe what it looked like in the 1980s: there was no data world. The Whitehead Institute for Biomedical Research, which is where I had my first position, had no provision for a computer room anywhere in the building when they were planning it in 1983.

At the last minute, just before they were going to move in, they built a computer room, because they thought a computer might be useful. >> ERIC SCHMIDT: What did they do with everything? >> ERIC LANDER: All data was stored in your notebook. If it was important, you wrote it down, and if I wanted data from your notebook, I might go down the hall and talk to you and get it. But in 1990 the Human Genome Project got launched, and I was involved in it, and from 1990 to 2003 the whole world worked together to read out one human genome, that is, the 3 billion letters of one person’s DNA.

That took 13 years and we spent $3 billion doing it, and whether we were going to do a whole lot more of those was not particularly on our minds, because it was $3 billion. >> ERIC SCHMIDT: $3 billion — >> ERIC LANDER: 3 billion letters of DNA for $3 billion. >> JEFF DEAN: Can I buy a vowel? >> ERIC LANDER: You can, yeah, it’s gotten much cheaper. In the 15 years since we finished the Human Genome Project, the cost of sequencing DNA has fallen by about four million fold; that’s three times faster than Moore’s Law. At the Broad Institute we sequence one human genome off the line every 5 minutes, and worldwide there might be half a million genomes sequenced, and it’s going up and up. It’s stunning, the rate at which we generate data. So for a field where you didn’t think you would have a computer room, today at a biomedical research institute we have hundreds of people on our data science platform who do things that you would recognize here in Silicon Valley.
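That “three times faster than Moore’s Law” claim checks out as rough arithmetic. Here is a back-of-the-envelope sketch (not part of the talk; the 24-month doubling period used for Moore’s Law is an assumption):

```python
import math

years = 15
cost_drop = 4_000_000  # four-million-fold fall in sequencing cost

# How many halvings does a 4,000,000x drop require?
halvings = math.log2(cost_drop)               # ~21.9 halvings
months_per_halving = years * 12 / halvings    # ~8.2 months

# Moore's Law benchmark: one doubling roughly every 24 months (assumed)
speedup_vs_moore = 24 / months_per_halving    # ~2.9x

print(f"{halvings:.1f} halvings, one every {months_per_halving:.1f} months")
print(f"about {speedup_vs_moore:.1f}x faster than Moore's Law")
```

A halving of cost every eight months or so, against a 24-month Moore’s Law doubling, lines up with the “about three times faster” figure in the conversation.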

So it’s the biggest shift you can imagine. You can’t be a biologist today without data. >> ERIC SCHMIDT: I want to come back to that, because it combines both your activities. Jeff, you are obsessed at the moment with the retina, and in particular something called the fundus. Why are you obsessed, literally, with the retina? >> JEFF DEAN: Right. The reason we’re obsessed is that there have been a lot of fundamental advances in general-purpose computer vision, so we’ve gone from not being able to see very well with computers to being able to see very well, and this has broad implications across a range of industries; if you think back to the time that animals evolved vision, we are at that moment in computing. One of the major fields this has implications for is various kinds of medical imaging problems. We can now train machine learning models to take, for example in ophthalmology, an image of a retina and see if a person has diabetic retinopathy. There are hundreds of millions of people at risk around the world, and there are many people in countries without enough ophthalmologists.

We now have machine learning models that are not only as good as board-certified ophthalmologists, but have gotten very good at predicting and grading the disease. That’s an advance in healthcare. >> ERIC SCHMIDT: You had a recent result where you could look into somebody’s eye and predict whether they smoked or not, right? Their sex, the quality of their heart health, with this exam. People are excited that you will be able to offer a tool to a cardiologist; the number one cause of death worldwide is heart disease of one kind or another, and you can literally tell how someone is doing from their eye. >> JEFF DEAN: It’s super interesting. Assessing diabetic retinopathy is something we know how to do. What’s interesting is that Lily Peng, who heads up the diabetic retinopathy research we have been doing, had a new person learning to train machine learning models, and she said to this new person: train a model to predict age and gender from a retinal image. They thought they wouldn’t be able to predict gender at all, because ophthalmologists can’t predict that, but it would show whether the pipeline was working.

She said go off and do that and come back when you’ve done it. And the person said, I’ve done that, and I can predict gender with 75% accuracy. And Lily said, no, no, that can’t be right. They came back later and said, now I can do it with 90% accuracy. So that led the team to say, hey, maybe there is more in these retinal images than we really thought there was, and they started to predict a bunch of other things you might want to predict that are indicators of your cardiovascular health. So now we have a combination of things we can look for in a retinal image that lets us assess cardiovascular health to the same level of accuracy as the more traditional approach: draw blood, send it to the lab, get the lab tests back. >> ERIC SCHMIDT: One of the things you taught me a few years ago, Jeff, is that we are getting good at looking at a sequence of numbers and facts, determining hidden patterns, and predicting the next step. So the vision work is looking back, seeing a pattern, and discovering it.
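The age and gender story above is, at its core, supervised learning on image-derived signals. Here is a deliberately tiny sketch of that idea, with entirely synthetic numbers and a plain logistic regression; it is nothing like the deep convolutional models actually used on retinal fundus images, just the smallest example of a model finding a label in features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for features derived from retinal images:
# 200 "patients", 5 numeric features; the binary label (say, smoker
# vs. non-smoker) depends on the first two features plus noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_signal = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (true_signal + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression trained by gradient descent on log-loss
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y) / n)           # gradient step on weights
    b -= lr * np.mean(p - y)                # gradient step on bias

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")  # well above the 50% chance level
```

The point of the anecdote survives even in this toy: if a label is genuinely encoded in the features, a model will find it, whether or not a human expert can.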

But we’re also learning that we can discover all sorts of patterns we didn’t know existed, and in your research, Eric, you’re doing a lot of this and discovering strange things that people don’t know. Can you take us through the big data impact? People assume it’s useful for genetics, but it’s also useful for medicine. >> ERIC LANDER: It’s remarkable what you can do with big data. I will pick some diseases and tell you what’s happened because of large amounts of data. Schizophrenia: go back seven years, and the number of genes that we knew definitively played a role in schizophrenia was approximately zero. Nothing. Today that number is over 200 genes that are definitively known to play a role in schizophrenia, and none of it came from prior biological hypotheses.

It came because people collected 120,000 people with and without the disease, looked at them for millions of genetic variants, and started building models and correlating and validating, and in this way we have this list of 200 genes. We don’t understand what most of them do, but some of them we do. Number one on the list is involved in pruning the synaptic connections in the brain, and number six on the list is also involved. So suddenly we know that the disease involves that process, and now people are turning their guns on the list and figuring out: what do all these things do? So for inherited diseases, rather than knowing a lot and having prior hypotheses, asking the data is the best way to go. This is turning out to be the case in cancer too. We thought in the 1980s and ’90s that we knew what the genes that caused cancer were about; they played roles in signaling pathways in the cell. That’s true, but it turns out there were vast classes of genes we were missing, and if you looked at tumors, read out their DNA, and correlated where the mutations were, you could find whole new classes of genes, and those have implications for which patients respond to which medicine, because it turns out the patients who respond to particular anticancer drugs often have particular mutations. Lung cancer is a great example, but there are many like that.

>> ERIC SCHMIDT: One of the things that happened at Google five years ago is that a team under Jeff started working on a platform for genetics on top of Google Cloud. I remember meeting with them and saying, what’s special, right? This is just data. But they explained that these sequencing machines produced an enormous amount of data and the geneticists were having trouble. What will it be possible for them to do because of the cloud that they couldn’t do before?

>> ERIC LANDER: You have to realize that maybe the genetics I was telling you about you might have done with a spreadsheet and some sophisticated analysis, but now we’re moving to the point where we need to think about truly massive data. So what do I mean? I mean reading out from every cell its pattern of which genes are on and off, and how much. There are 20,000 genes, so every cell is a vector of length 20,000; every cell in your body is a point in 20,000-dimensional space, and we’re getting to read these points out just in the last several years.

We can start reading out hundreds, thousands; we’ve got several million. There is a project called the Human Cell Atlas that’s on target to getting a billion cells and asking questions like: What’s a cell type? It’s a cluster in 20,000-dimensional space, et cetera. What are the pathways and programs that the cell is running, metaphorically? Well, we’re going to have to cluster and organize the patterns of genes being turned on and off, and there is zero chance we are going to be able to do this without aggregating these data, normalizing these data, and writing sophisticated tools to do it. And this is not abstract stuff; it’s not just cute that you might find stuff out about biology. Take the retina: people used to think the retina had a certain number of cell types, and that number went from 100 to 200; we had been blind to half of what’s going on. Or take cancer: people used to take out a cancer, slice it up, and look under the microscope. Now they’re taking out a cancer and disaggregating it into single cells and asking what are the immune cells thinking, the cancer cells thinking, the stromal cells thinking, and every cancer is getting described as a collection of cells. It’s being combined with imaging, because you can put them on top of each other and read the gene program and see which cells are thinking what when they’re near whom. And I’ve given up trying to project where it’s going to go, because every time I feel like I’m making a projection that’s too forward leaning, it ends up being pokey some years later.
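Eric Lander’s framing above, in which every cell is a vector of gene-expression levels and a “cell type” is a cluster in that space, can be illustrated with a toy clustering sketch. Everything here is illustrative, not his method: the gene count is cut from 20,000 to 50 purely for speed, the data is synthetic, and plain k-means stands in for the far more sophisticated tools he describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 50  # stand-in for the ~20,000 genes measured per real cell

# Two synthetic, well-separated "cell types": each row is one cell's
# expression vector, i.e. one point in n_genes-dimensional space.
cells_a = rng.normal(0.0, 1.0, size=(100, n_genes))
cells_b = rng.normal(5.0, 1.0, size=(100, n_genes))
cells = np.vstack([cells_a, cells_b])

def kmeans(points, k, iters=20):
    """Plain k-means: a 'cell type' emerges as a centroid in expression space."""
    # Deterministic init for the sketch: seed centroids spread through the data.
    centroids = points[:: len(points) // k][:k].copy()
    for _ in range(iters):
        # Assign each cell to its nearest centroid...
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centroid to the mean of its assigned cells.
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, _ = kmeans(cells, k=2)
# Each synthetic cell type lands in its own cluster.
print(sorted(set(labels[:100].tolist())), sorted(set(labels[100:].tolist())))  # → [0] [1]
```

The same assign-then-average loop, scaled to millions of cells and run over normalized, aggregated data, is the shape of the clustering work the Human Cell Atlas project requires.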

>> ERIC SCHMIDT: Jeff, you’re grappling with this issue of both the scale of the data and the scale of computation. It’s interesting that Eric mentioned Moore’s Law; it’s slowed down quite a bit in the last few years, and thank goodness this work is proceeding. You and your team developed the TPU, and we announced the third generation this morning. Why did you have to do a specialized architecture, and how do you use them? Can these people use them, too? >> JEFF DEAN: Yeah.

As you observed, the progress in general-purpose CPU performance due to Moore’s Law has slowed down over the last 12 years, not growing nearly at the rate it was the 30 years before that. So just as we’re having this explosion in machine learning usage for a really wide variety of different kinds of problems (in fact, if you look at the field of machine learning, the rate of research paper production has been growing faster than Moore’s Law over the last ten years), there is all this new, exciting use of computers, applying them to data sets and interesting problems, just when our general-purpose computing capability is slowing down.

So it turns out that deep learning and machine learning algorithms tend to be made up of a handful of operations, and there are two nice properties these have. First, they’re tolerant of reduced precision, which is fine for most of these algorithms. Second, they’re all made up of a handful of matrix multiplications and vector dot products, linear algebra, so if you can build a computer that is specialized to do linear algebra, that broadens the set of architectural choices you have, and you can focus on machines that are extremely good at that. That allows us to avoid the slowdown, because the semiconductor process improvement is not happening.
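Both properties Jeff Dean names can be seen in a tiny sketch. A single neural-net layer is just a matrix-vector product plus a cheap elementwise nonlinearity, and computing it in half precision (a rough stand-in for a TPU’s reduced-precision arithmetic; the layer sizes here are arbitrary) barely changes the result:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 512)).astype(np.float32)  # layer weights
x = rng.normal(size=512).astype(np.float32)         # input activations

# Full-precision layer: y = relu(W @ x). Almost all the work is the matmul,
# which is why hardware specialized for linear algebra pays off.
y32 = np.maximum(W @ x, 0.0)

# The same layer computed in float16: deep-learning workloads tolerate
# this loss of precision well.
y16 = np.maximum(W.astype(np.float16) @ x.astype(np.float16), 0.0).astype(np.float32)

rel_err = np.abs(y32 - y16).max() / np.abs(y32).max()
print(rel_err < 0.05)  # → True: the low-precision result stays very close
```

A classifier for precision-sensitive code (say, a physics solver) would not survive this swap nearly as gracefully, which is exactly why the specialization is specific to machine learning workloads.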

>> ERIC SCHMIDT: You did something extraordinary: you created TensorFlow, which is the base of these algorithms, and you gave the open-source software to all of our competitors, along with the hardware, and 100% of the market is using it. Congratulations for giving all this to our competitors! (Applause.) >> JEFF DEAN: Thank you, thank you, we have happy TensorFlow users, which is great. TensorFlow is the second-generation system that our research team put together to conduct our own research and to put some of that research into Google products. When we first started out, I wrote a document about why we should open source it, because I felt like in the past we hadn’t open sourced some of the infrastructure innovations that we had built at Google.

>> ERIC SCHMIDT: In hindsight you might have open sourced MapReduce, but with TensorFlow you knew you had to get ahead of it and you just did it, right? >> JEFF DEAN: Yep, we wrote it in a way that was going to be easy to open source, and that’s been a good choice, because there are so many potential applications for machine learning around the world that benefit from having a unified tool kit that all kinds of contributors around the world improve, so that nonprofits, governments, organizations, and companies around the world can build their machine learning use on a common set of tools.

That’s been really great. >> ERIC LANDER: It’s interesting you bring up the open sourcing of that, because for the project that we’ve been working on, I think Andrea Norris mentioned it, the “All of Us” program for NIH, the architecture was to build storage for data, and working with Vanderbilt and the Broad Institute we decided to go about it the same way: an open-source architecture that would work for all these projects. Indeed, there was a notion, and RFAs were put out, for different groups to develop architectures for each separate project, as if what we should be doing for psychiatry was different from what we should be doing for cancer. Because we got ahead of that, there is a document on the web now, the Data Biosphere, which is a statement of an open-source architecture: how to modularize the relevant code, open APIs for it, and commitments by several people. All of the projects that are going on have adopted the same open-source architecture, with code on GitHub, so it is our dear hope that by getting out ahead of it we will avoid what happened to EHRs, where we have many, many different incompatible systems.

At the moment there is a decent chance that for genomics they will all use the same thing and we will all benefit. >> ERIC SCHMIDT: The EHR is very good at billing, for example, but there is so much more it could do. I want to finish on the notion of computer architecture. Jeff, we have a joke that computing is cheap but knowledge is priceless. What’s happened is it’s been difficult to find people who have the requisite mathematical ability; these are bespoke algorithms. And you have an idea, called AutoML, which should make a big dent in this hard-to-use problem. >> JEFF DEAN: I would say today there is relatively limited expertise in the world about how to take a problem and solve it with machine learning. There is tremendous interest: you see the rate of research papers, and the rate of undergraduate enrollment in machine learning classes is going up exponentially. But there are still only tens or hundreds of thousands of organizations in the world that can effectively use machine learning today, and there are 10 million organizations that probably should be using machine learning and have data of a form that would be amenable to it.

The usual way you solve a machine learning problem is you have data for a problem you care about, you have computational resources, and then you have a human machine learning expert sit down and make a bunch of decisions: how you’re going to model it, if it’s a deep-learning model how many layers, what size filters, and you stir it all together and they run a bunch of experiments, and at the end you get a solution to your problem, you hope. But the problem is there isn’t that much machine learning expertise in the world. We will rely on and wait for educational institutions to produce more of that expertise, but we really want to broaden the accessibility so more and more people in the world can use machine learning to solve problems they care about.

AutoML is this idea that we can take machine learning problems and train a machine learning model to automatically learn to solve machine learning problems. >> ERIC SCHMIDT: A meta machine learning. >> JEFF DEAN: Yes. For example, one of the first things we did in this area was two researchers in our group created neural architecture search, which is a way to automatically find good model architectures. There is a model that samples architectures, say ten of them; you train all ten and you see what works well. >> ERIC SCHMIDT: So you generate a bunch, try them, kick out the ones that didn’t work. >> JEFF DEAN: And you say this one got a 92, this one a 73, so you say do more like the one at 92, and you iterate, many, many times. >> ERIC SCHMIDT: This is available to our customers in the audience? >> JEFF DEAN: The Cloud AutoML Vision product has been available for months, and just today a couple of new natural language processing products were announced.
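The sample, score, and iterate loop Jeff Dean describes can be sketched as a toy search. This is not Google’s method: real neural architecture search trains each candidate model to get its score, while here a made-up score function (whose preferred depth of 8 and width of 64 are arbitrary) stands in for “train it and measure validation accuracy.”

```python
import random

random.seed(0)

def score(arch):
    """Stand-in for training the candidate and measuring accuracy;
    a real search spends nearly all of its compute here."""
    layers, width = arch
    return 100 - (layers - 8) ** 2 - ((width - 64) / 8) ** 2

def mutate(arch):
    """Propose a nearby architecture ('do more like the one at 92')."""
    layers, width = arch
    return (max(1, layers + random.choice([-1, 0, 1])),
            max(8, width + random.choice([-8, 0, 8])))

# Sample an initial batch of candidate architectures (depth, width)...
population = [(random.randint(1, 16), random.choice(range(8, 136, 8)))
              for _ in range(10)]

# ...then iterate: keep the top scorers, drop the rest, mutate the survivors.
for _ in range(50):
    survivors = sorted(population, key=score, reverse=True)[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=score)
print(best)  # drifts toward the toy optimum of (8, 64)
```

Because each step only needs scores, not gradients of the architecture choices, the same loop works whether the scorer is this toy formula or a full training run on TPUs.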

>> ERIC SCHMIDT: Just to be clear, the TPUs, the computational resources, the versions of MapReduce are all available in the Google Cloud offerings today, and TensorFlow is available today? >> JEFF DEAN: Free. It’s been downloaded by hundreds of organizations. >> ERIC SCHMIDT: Don’t waste time getting to this architecture. It is the architecture of the future. In the time we have remaining I thought I would explore where we are with the hospitals, the medical systems, the data problem. You’ve done fantastic work at the genetic level, you’re beginning to develop drugs with partners and so forth; I think that’s pretty well wired.

We haven’t seen the same benefit when you go into a hospital. It still feels like the dark ages: you have to fill out every form, I can’t get my MRI from here to there, and on and on. Explain to me where you think we are and where we might go, and Jeff, maybe you can talk about where you think we will be going. >> ERIC LANDER: Yeah, it’s frozen history that’s set us back, because hospitals got into these data systems early and didn’t think about how all these pieces would fit together, so we find ourselves in a situation where, you know, it’s just a byzantine world broken up in different ways, and in most cases people might want to migrate to another solution, but it’s hard to do that. Everybody in hospitals knows lots of things are going on by hand that are a huge pain for physicians, and every time a new rev of something is introduced at a hospital, people retire because they don’t want to learn the new rev. This really happens! But we know it’s not just the pain in terms of workload and boring work and transcribing notes and things; we know that we’re missing so many insights.

So much of what you’ve done with retina scans and other things shows that everywhere in the hospital there are signals. There are signals that will let us anticipate who is going to be at risk of a kidney injury, whose mammogram doesn’t look like it has an obvious breast cancer but there is a signal there. >> ERIC SCHMIDT: To be clear: with enough computing power and TensorFlow and TPUs, we can discover signals that have not been seen before? >> ERIC LANDER: And people are doing that.

We now see evidence that when you run machine learning models on mammograms and other things, there is data there that wasn’t evident to the eye, just like in the eye it wasn’t evident that you could see cardiovascular outcomes. >> ERIC SCHMIDT: What about the physician problems? Physicians are enormously under stress today; there is a huge burnout problem, a huge shortage of physicians, true not just in the United States but globally. What can we do about that? >> ERIC LANDER: The next generation of young physicians, I see and work with them all the time, they know this, and in their world they interact with tech in the normal kind of way, so they are incredibly anxious to get this in place.

I think we need to take seriously that this is a harder problem than many things that get done in tech, but maybe the single most important and satisfying problem to work on. I find at the Broad Institute we get people from tech who have worked on other problems because they say they can’t think of anything more important to do; we have to solve this in the next generation. >> ERIC SCHMIDT: It’s interesting, Jeff, that you just started these projects. Why? >> JEFF DEAN: I’ve always been interested in the use of computers for healthcare. My dad is an epidemiologist and was interested in the start of the personal computing revolution and how it could be used to inform public health decisions. I actually wrote some software in high school for epidemiology, which is still one of my most cited works. I think where we are today is that in the last ten years the U.S. healthcare system has gone from having very little data in electronic health record form to most of it, from roughly 10% to 90%, something like that. That’s a tremendous opportunity, because if you think about large healthcare systems with millions of patients and tens of thousands of doctors, and tens of years of history per patient in these electronic medical records, we now have computers that are good at prediction-type tasks: given a patient, are they likely to develop diabetes, are they at high mortality risk now.

>> ERIC SCHMIDT: This goes back to the key thing AI can do, which is predict the next thing based on history; they can probably predict what will happen next to me in a way that will help? >> JEFF DEAN: Yes. Essentially the way I would view these models is, if you have medical records with 100,000 medical-doctor-years of wisdom instilled in the decisions recorded in those records, you want those medical decisions to inform every doctor in the healthcare system so that they can make better decisions, so they can get an instant second opinion or advice about these sorts of things. And we now have the capability with machine learning to make it so you can distill your 2,000 doctor colleagues into a research tool for you. >> ERIC SCHMIDT: What are the research projects you’re doing in your research role that could ultimately lead to transformative products in this area? I know, for example, you’re working in voice, the notion of voice transcription.

>> JEFF DEAN: Yeah. >> ERIC SCHMIDT: To help doctors out. >> JEFF DEAN: As you called out, one of the big stressors for doctors today is that the way we’ve gone to electronic medical records unfortunately has them doing manual data entry into the EMR, and that is additional work where they’re not caring for patients. It’s a documentation burden. There is a richness of data there, so it’s great to have it, but we think you could use speech recognition, which works better and better, and automation techniques that would let you go from a conversation between a doctor and a patient to a draft of the medical note, which would relieve doctors of the documentation burden they have today. You would also, then, probably get richer and more interesting information into the medical record.

>> ERIC SCHMIDT: So unfortunately we have to stop; we could go on for hours. For me it’s a privilege to be with you, but it’s an even bigger privilege to be with two of perhaps the biggest heroes of my life. If you look at what Jeff did in the last 15 years and imagine what he will do for all of us in computing, it’s extraordinary. Eric Lander’s contribution to our world is difficult to describe, it’s so broad. Thank you so much, both of you.

(Applause.) >> ERIC SCHMIDT: Enjoy the rest of the conference, thank you very much. (Applause.) >> Please welcome, Tariq Shaukat. >> TARIQ SHAUKAT: Good afternoon everyone, thrilled to have you here and thrilled to be announcing and introducing our customer innovation series. As Diane mentioned this morning, we believe that cloud is really a phenomenal and transformational set of new technologies and new capabilities that are available to our customers, but even more important than that, we believe that the cloud is enabling a fundamental rethinking of how each and every business and industry works. It really allows you to start forgetting about the infrastructure, forgetting about all of the details of how this actually needs to work, relying on the capabilities that we have in the cloud, relying on the new innovations that we’re able to generate, and thinking about: how does that impact my business? How do I transform my organization? Within Google Cloud we are investing deeply in industry verticals in order to make this happen, whether that is in healthcare, as you saw in the fireside chat we had just a minute ago, or in financial services, media and entertainment, gaming, the energy industry, and automotive and transportation.

We really do think cloud is becoming a catalyst for change, a catalyst for transformation. Even more important than us believing it is the fact that we’re seeing our customers do it. We’re seeing our customers implement the cloud and machine learning, get new value out of it, and transform their businesses. So I’m thrilled to kick off that customer innovation series by introducing a series of speakers from global icons. These icons will come out, and each of them will tell you what they are doing in their industry and in their companies to really drive change. This is a wide range of icons. Mike McNamara will come out and talk about Target and what their journey is, we will have someone from The New York Times come out and talk about their journey, and we will have Lahey Health come out and talk about how they are transforming the way they work in a secure fashion using the capabilities in Google Cloud.

We will have eBay, the iconic retailer, talk about how they are pushing the envelope of what they can do within the cloud using cloud capabilities. LATAM Airlines, the largest airline in Latin America, will talk about their journey and how they are enhancing their customer experience using the cloud, and last but not least Darryl West will talk about how one of the largest financial institutions in the world is driving their business forward. The theme across all of these that I would like to ask you to pay attention to is business transformation: not just technology transformation, which is critically important, but business transformation across each and every one of these institutions, all of whom have their own legacy, their own history, and unique needs that they need to meet.

So I couldn’t be more thrilled to kick this series off by inviting Mike McNamara to the stage, chief information officer of Target. (Applause.) >> MIKE McNAMARA: I’ve recently celebrated three years at Target. Three years ago, I didn’t know that much about Target, and frankly I would have been hard pressed to point to Minneapolis, where I now live, on a map of the United States. In three years I’ve learned that Target is a huge part of American life. We have over 1,800 stores nationwide, and over three quarters of Americans live within ten miles of one of our stores. On top of that, a whopping 85% of U.S. consumers shopped at Target last year. Beyond the walls of our stores, 25 million guests each month experience Target through our flagship app.

Between our stores, distribution centers and corporate offices, we are more than three hundred and fifty thousand team members strong. And when it comes to brand love, well, a quick search on YouTube will surface millions of loyal superfans enthusiastically vlogging about their latest Target finds. America loves Target. Yet despite all of that love, an outstanding team and an enviable store network, three years ago Target was losing ground; we were dangerously late to digital, and our technology wasn’t keeping pace. So what to do? As a business leader I’ve always seen my job in two dimensions. First, how do I operate today’s business as efficiently and effectively as possible? And second, how do I create tomorrow’s business as quickly as possible without messing up what’s already there? On the one hand, I want productivity and stability, and on the other, speed and disruption. Three years ago our systems were not stable. My first Cyber Monday at Target was a fairly miserable affair. Shortly after launching our Cyber Monday offers, we began seeing contention on a key database, and there was nothing we could do save throttle traffic to the site and limp through the day.

Now, as it happens, we had a huge sales day, but we’d upset hundreds of thousands of our guests and left tens of millions of dollars on the table. By the time my second Cyber Monday came around, we had moved to the cloud. Rather alarmingly, yet again a key database began to overheat. But this time was different: this time, with the execution of a few simple commands, we spun up a new instance of the database on a bigger server, transferred all the data across and redirected the traffic. The whole affair lasted about twenty minutes, guests never noticed, and our sales kept rolling in. And that’s the beauty of cloud: it offers elasticity and agility. Today I don’t worry much about systems stability.

I am confident that our technology will support the Target of today. Which means, my engineering team can focus their energies on creating the Target of tomorrow. Marc Andreessen famously wrote that software is eating the world. Today, you can wait around until a competitor’s software comes and gobbles up your business or you can create your own future. I’ve long believed that the retailers with the best technology will end up in the winner’s circle and the ones who don’t get the criticality of technology will be, in retail parlance, discontinued. Today at Target, our engineering team is agile, and it’s re-energized, creating high value software at a breakneck pace.

Developments that only a few short years ago took months or years to complete we now finish in days or weeks. Infrastructure that took weeks to provision is now provisioned in minutes. In the past twelve months we’ve launched hundreds of new experiences for our guests and our team. Here are four examples, all with a common theme. Drive Up allows our guests to shop on our mobile app drive to their local store and once they hit a geo-fence around the store, a team member is alerted to bring the order directly to their car. No need to get the kids out of their car seats or trudge through the aisles with a toddler in meltdown.

It’s simple, convenient and hassle-free. We’ve launched Target Restock, promising overnight delivery of all your household essentials. We created a mobile app that allows our store team to place online orders for guests from the store aisles, on the off chance we don’t have the exact product, the right size or color, in stock. We even piloted a ground-up rewrite of our core supply chain systems, which for any retailer is the equivalent of open-heart surgery. In doing so, we created an entirely new way to get goods to our stores. What did all these developments have in common? The MVP was live, in market, in a matter of weeks.

For someone who’s been in this business for over 25 years, that kind of speed is simply stunning. And every week or two I seem to stumble upon a proof of concept, or a new production system, live that a couple of engineers have knocked up in a few days. And while they may not all be big things, the combined impact of these innovations is profound. Google Cloud underpins our speed and innovation. It abstracts all the complexities of the underlying infrastructure away from the developers, allowing them to focus on value creation. Compute resource is on demand; infrastructure is just an API call away. As I look into the not-too-distant future, I see artificial intelligence everywhere. At Target we are using AI to tackle some of our thorniest problems, like supply chain optimization. But we are also using AI to solve a gazillion everyday little problems. For example… we sell mix and match swimwear online. We used to rely on photographers to tag their images as being a bikini top or a bikini bottom.

Well, those photographers are arty types, and it turns out that accurate data entry is not high on their list of priorities. The result? Misery. Today we use a trained AI model to tag all the images. The result? Joy. By the way, Google, kudos for TensorFlow, it’s magic! There’s a lot of hype in the market about cloud and AI. But if you cut through all the marketing nonsense, the TV advertising, the billboards at the airport, there is real substance there.

And I see that substance every day at Target. Cloud and AI are not the only ingredients, but to make fundamental change, like we did, without cloud is like trying to make wine without grapes. At Target, we’re operating today efficiently and effectively and creating tomorrow with speed and gusto. Thank you. (Applause.) >> Please welcome Nick Rockwell. >> NICK ROCKWELL: Thank you. I’m truly honored to be here today representing The New York Times. At the Times, we’ve been working with Google Cloud for over two years; in fact, we just fully completed our cloud migration journey in April. We moved all of our consumer-facing products to Google Cloud, and it’s been a huge win for us on so many levels.

But I’m not going to talk about that today. Instead, I have something really special for you. I am incredibly proud to announce a new, transformative partnership with Google Cloud. Today, The New York Times and Google Cloud are embarking on a truly amazing mission: We are going to scan, encode, and preserve our entire historical photo archive, and while we’re at it, evolve the way in which our newsroom tells stories, by putting powerful new tools for visual storytelling into the hands of our journalists. The New York Times Photo Archive is one of the most comprehensive libraries of visual journalism in the world. It is a treasure trove of historical imagery: millions and millions of photographs, spanning from the 1800s up to nearly the present day. It is one of the greatest gems of the Times, the collective work of hundreds of skilled photojournalists and artists, who through their talents have given us a breathtaking chronicle of humanity, truly one of the great memorializations of the human experience.

These are actual, physical prints, stored in filing cabinets, with an old-school card catalog system as their index, and with the help of a few heroic archivists who somehow seem to know where everything is. I’ve been in there, and it’s quite an experience. The Photo Archive contains images on every imaginable subject, from science to sports, wars to weddings, protests to parties. It contains portraits of statesmen and movie stars as well as shopkeepers and children, politicians as well as passers-by, snapshots of personalities, lifestyles and trends throughout the last 150 years, the vast majority of which have never been published. These photographs aren’t just images from our past; they are themselves historical artifacts. Each Times photo is marked up like a well-traveled passport: the prints have stamps, dates, notes, publication history, clipped captions, and other vestiges of their travels through the newspaper’s history, written, taped or glued on the back. Each image has a story of its own, and preserving those stories is just as important as preserving the images themselves. And so we will capture all of that information, those marks and traces, as well. What you see here are just a handful of the millions of photographs that are being digitized, tagged, classified, every hand-written note captured, and made accessible, made visible, through this partnership.

But, critically, this project is not first and foremost about preservation; it’s about storytelling. By making the archive accessible, we make it possible to reach back into the past and tell stories that were left untold. To put the archive and its vast potential in perspective, consider “Unpublished Black History”: a series that we published every day during Black History Month in February 2017, built around a collection of never-before-seen photographs featuring some of the most iconic faces in American history. It gave the world a fresh look at such figures as Shirley Chisholm, the first black woman elected to Congress; iconic figures such as Martin Luther King Jr. and James Baldwin; Lena Horne, through an incredible story about how Harry Belafonte helped her secure her apartment at a time when white property owners wouldn’t sell co-ops to African Americans, no matter how famous or wealthy; Kareem Abdul-Jabbar towering over his high school teammates when he was still known as Lew Alcindor; or Run-DMC performing at a benefit concert in 1986, to name just a few. This series delivered a new perspective on our collective story as Americans.

And it was a labor of love of 7 full-time journalists, and it took over 3 months of painstaking archival research to produce. The archive is full of these stories, but we are limited by the precious expertise and time of our journalists and researchers. So imagine what will be possible with the new digital Photo Archive. We will move from file cabinets in a basement, card catalogues and faded, hand-scratched notes as metadata, to life in the cloud, where journalists will have the tools to access, research, interrogate and wander through this archive with unprecedented ease and efficiency, unearthing lost gems, making connections, creating stories. Using Google Cloud Storage to store the digitized assets, Cloud Pub/Sub for workflow orchestration of the digitization process, Cloud Spanner to manage and index the metadata, metadata parsed and digitized in part by Cloud Vision, all in a reliable and effortlessly scalable architecture: this will truly change the game. It will allow us to research archival stories not in months, but in as little as a week. It will help us scale series like Overlooked, in which we go back and write the obituaries of those who left an indelible mark on history but who were nonetheless overlooked, many of them women, such as Charlotte Brontë, author of “Jane Eyre”, on whom we never ran an obituary.

Or Emily Warren Roebling, who oversaw construction of the Brooklyn Bridge when her husband, Washington Roebling, fell ill; Ida B. Wells, who campaigned against lynching via brave and pioneering reporting for the Memphis Free Speech and Headlight, a paper she co-owned and edited; and Mary Ewing Outerbridge, who introduced the game of tennis to the US by smuggling a tennis net home from a vacation in Bermuda. The New York Times newsroom will bring these stories to life for our readers through the beautiful, emotionally resonant visual journalism that we are known for, and that helps all of us to better understand the world.

This work is made possible by the NY Times’ Story Partnership team and our colleagues at Google Cloud. It is an initiative that will unlock the past, connect it to the present, and reveal the world’s most important untold stories for future generations, and for years to come. Thank you very much. (Applause.) >> Please welcome, Lore Chapman. >> LORE CHAPMAN: Good afternoon.

I am pleased to be here today. I am Lore Chapman, VP of IT Solutions and Integration for Lahey Health, just northwest of Boston. Our Google story is similar to the beloved Jules Verne novel, where the intersection of a determined team, a specific goal, a challenging deadline, and a time in history converged. Instead of Around the World in 80 Days, we called it Going Google@Lahey in 91 Days (64 if you don’t count weekends, but weekends there were). The Challenge was our starting point.

For Verne’s protagonist, Mr. Fogg, it was the adventure, with his reputation riding on circumnavigating the globe in 80 days. Many doubted he could accomplish such a feat, so of course, a significant financial wager ensued. Our Lahey challenge? To rapidly implement G Suite – a platform that could scale and enable collaboration for our 15K colleagues. Unlike Fogg, our timeline was driven not by a wager, but by 4 aging email systems that hindered connection and growth. That’s a big deal when you’re serving health care professionals who want to communicate and respond to their patients in the most timely and thoughtful way possible. This adventure also tested the reputation of our IT@Lahey team. Internally, we saw the potential and felt compelled to prove we could rapidly and successfully implement a major change to one of the most personal workforce applications: email. Externally, if successful, we would be the largest healthcare institution yet to partner with Google. And like Fogg’s financial stake, in healthcare – an industry with growing expenses and shrinking margins – our IT investment decisions must be carefully vetted, and the risk and reward carefully considered. It was daunting for sure.

So the challenge was accepted, and we began our journey. Fogg’s journey took him from London, to India, and right here through San Francisco! How appropriate. Our Lahey journey also required us to navigate mental, physical, and virtual geographies. Our goal was dexterity and a digitized workspace, enabling collaboration anywhere, any time. It was truly an Iliad and an Odyssey, both mental and physical. But wait, that’s another epic journey. Physically, our team supported six go-live dates: “Big Bang” at 50-plus locations spanning 50-plus miles. Virtually, our technical team laid the foundation for the G Suite platform. So how did we achieve this bold challenge? Here I’ll draw parallels to Fogg and his team. At the outset, Fogg was portrayed as cavalier, somewhat selfish, obsessed with time, and rigid to a fault, but he proved to be thoughtful, caring, generous, and loyal – the same traits that positioned our expedition for success. Our IT@Lahey team is hundreds strong, and along with our core team of 15, they drove the three key “legs” of our adventure: project management, technical implementation, and change management.

And, full disclosure, not all were willing participants. Some were volunteered; others were volun-told. The resistance was not born so much of a shrug-off, “I just don’t want to help out.” Rather, it was grounded in competing priorities and comfort zones. Most of us felt more comfortable behind the scenes – or screens. During the journey we found amazing strength in unexpected places. The word cloud shows our attributes, but I’d offer that, similar to Fogg’s team, the four featured here served Lahey best. Caring is the foundation of healthcare every day. Our team cared, connected, and delivered, for the first time, an enterprise-level communications system. The team’s generosity and loyalty to the task, the timeline, and to their fellow travelers never wavered. And our IT@Lahey team proved tenacious.

Like Fogg, they utilized “creativity, ingenuity, and scientific know-how to solve every problem that crossed their path.” Ultimately, this story is about change. Fogg made adjustments: whether the circumstances called for compassion or changing course, he did what was right, and the complicating details worked out. For Lahey, the heart of our journey was change management. We named it The Force of 3 + 1. The 3? Creative communications, coaching, and training. And the +1 was our knowledgeable guide: SADA, our implementation partner. Any of you familiar with Dan Heath’s “Switch” concept of change understand the importance of both the emotional and intellectual sides of change: the elephant and the rider. Well, that elephant was definitely in the room, and we had to acknowledge it, but we were able to harness that beast. We focused on finding the bright spots at Lahey, leveraging what we already did well, shrinking the change, scripting the critical moves, and rallying the organization to adapt, adopt, and transform in 91 days.

The result? Our colleagues can share knowledge, empathy, and ingenuity at a level we’ve never had before, with just a click. So what’s next? Jules Verne didn’t write a next chapter for Fogg, but it was an interesting time in history. The technological innovations of the late 19th century made the prospect of traveling the world possible: the Suez Canal was open, the US transcontinental railroad was complete, and railways connected the Indian subcontinent. At that time, despite skepticism, you can just imagine what many dreamed could be next. Lahey’s next adventure? Today, healthcare is in an era of change and disruption. When we started, we focused on email, calendar, and contacts, but as I mentioned, we made the entire G Suite available and our colleagues embraced it. And not to paint too rosy a picture, we certainly had – and continue to have – resistance, as you may expect in a change of this scale. Some just prefer the solutions they were used to. But G Suite provides real advantage. It’s a workflow revolution: redesigning workflow is intuitive, and we are transforming the way we work.

Lahey Health’s value proposition is to provide the highest-quality, lowest-cost, coordinated care, accessible and close to home. IT@Lahey is deeply committed to our mission of delivering reliable, innovative solutions that make a difference in the lives of the people and communities we serve. This is enabled by our platform strategy and vendor partnerships; G Suite is now one of our cornerstones. In closing, I will leave you with this image of a great Englishman the world lost this year. A hero of mine – he was also obsessed with time – Roger Bannister: an athlete, academic, and doctor. Like Bannister on that fateful day in 1954 when he ran the “miracle mile,” I want to let the Lahey / Google / SADA team know that you “threw in all your reserves and broke the tape in 3 minutes and 59.4 seconds.” You made our Google adventure a success, achieving the possible.

Thank you. (Applause.) >> Please welcome, Larry Colagiovanni. (Applause.) >> LARRY COLAGIOVANNI: I’m excited to have the opportunity today to share a bit about eBay and how we’re taking advantage of the cloud, artificial intelligence, and what we call the fabric to drive innovation in the commerce space. eBay is the world’s largest global marketplace. Our mission is to connect millions of buyers and sellers around the world, empowering people and creating economic opportunity for all. We still don’t compete with our sellers – we are a true marketplace. We offer more breadth and more choice to help people shop for their passions and find their version of perfect. Whether it’s inflatable pink flamingos or new backcountry skis or new laptops or luxury vehicles, we literally have it all on eBay. And you don’t have to sit through an auction to do that. The reality is 88% of what we sell is fixed price and you can buy it right now. And 80% of items are new. We have 175 million buyers shopping in 190 markets across our 1.1 billion listings. To give you an idea of what that looks like: in the US, a watch is purchased every 5 seconds, a camping and hiking item every 6 seconds, and a TV, video, or home audio item every 3 seconds.

You can imagine the scale of the infrastructure that we have to build to support that, and also the amount of data that we amass on a daily basis. One of the things I’m most excited about, and count myself fortunate to have the opportunity to work on, is our eBay for Charity platform as well. We like to give sellers a choice for that. Sellers can earmark some of their proceeds to benefit a charity, and we have raised over $810 million to date for nonprofits in the U.S. and UK.

The future of commerce is changing more quickly than we’ve ever seen. Advances in artificial intelligence, cloud, and fabric are altering how, where, and when people shop. These technologies are also shifting user expectations – especially those of Millennials and Generation Z – and driving the creation of disruptive new business models, such as subscription services, direct-to-consumer, and on-demand shopping from the home, car, and mobile device. Brands, retailers, and small and medium businesses alike must both recognize these opportunities and take advantage of them – or risk being left behind. My team at eBay was created to enable us to innovate with these new technologies, move fast, and target new customers without disrupting the experience for our existing ones.

eBay no longer wants to rely on users coming to our website or downloading our app. Whether it be on Google Home, in Facebook Messenger, on VR devices, or on the TV, we want to go where our customers spend their time today, into what we call the fabric of commerce. This is one of the many ways we’re attracting new buyers and sellers to our marketplace. Last year at Google Next, we demoed for the first time the voice work we had been doing with Google Home, helping people find out how much an item was worth. Since then, we’ve continued experimenting with conversational commerce and expanding our natural language processing capabilities more broadly on Google Assistant.

We’re also using our AI to help buyers sift through the 1.1 billion items in our catalog to find what they’re looking for, without having to scan through pages and pages of search results or having to select a bunch of filters to narrow down the results. We don’t want people to have to come back to eBay and keep entering, “I want a medium shirt.” We want to remember what your interests were, what size you wear, and which brands you love, to personalize your shopping experience on eBay and help you find what you’re looking for. With shoppers relying on image search more and more each day to find what they’re looking for, we also continue to innovate in the field of computer vision. Our billion-plus listings have given us the training data to create industry-leading models to predict the leaf category of an image. Since we started using Google’s advanced TPUs, we’ve also been able to improve the accuracy of those models by 10%.

Not only did they help improve our accuracy, they cut the time it took for us to create a robust model from about 40 days to 4. And we’re now able to iterate on those models in a matter of hours. Taking advantage of Cloud TPUs helps us ensure that our customers see the freshest possible product listings and find what they want every time. Our work with computer vision has enabled us to start experimenting with augmented, virtual, and mixed realities as well. A few months ago, we released a clever solution built on Google’s ARCore platform, which uses augmented reality to enable our tens of millions of sellers to select the best USPS flat-rate box for items they need to ship. Sellers select a box size, place the virtual box over their sold item, and move it around the item to make sure it will fit inside, helping eliminate the hassle of finding the right-size box for shipping.
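The fit check underneath a feature like this is simple geometry: an item fits a box (allowing axis-aligned rotations) if each of its sorted dimensions is no larger than the corresponding sorted dimension of the box. A minimal sketch; the box names and dimensions below are illustrative placeholders, not official USPS flat-rate sizes.

```python
# Pick the smallest box an item fits in.  Box sizes here are
# illustrative placeholders, not official USPS flat-rate dimensions.
BOXES = {
    "small":  (8.6, 5.4, 1.6),    # inches
    "medium": (11.0, 8.5, 5.5),
    "large":  (12.0, 12.0, 5.5),
}

def fits(item, box):
    """An item fits if each sorted dimension is <= the box's."""
    return all(i <= b for i, b in zip(sorted(item), sorted(box)))

def smallest_box(item_dims):
    """Return the name of the smallest (by volume) box the item fits in,
    or None if nothing fits."""
    candidates = [(w * h * d, name)
                  for name, (w, h, d) in BOXES.items()
                  if fits(item_dims, (w, h, d))]
    return min(candidates)[1] if candidates else None

print(smallest_box((5.0, 8.0, 1.0)))   # small
print(smallest_box((10.0, 8.0, 4.0)))  # medium
print(smallest_box((20.0, 3.0, 3.0)))  # None
```

Choosing by smallest volume among the boxes that fit mirrors the product goal: recommend the cheapest-to-ship container the item can actually go in.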

This was the first of many innovations you’ll see from us in this space. We have been hard at work at improving the customer experience for those shopping on eBay from China as well. We started hosting some of our infrastructure in Google Cloud in Tokyo, closer to where our customers are, helping improve our page load times by 40%. As we iterated on the experience, one of the biggest challenges we faced was localizing it. While we tried leveraging some publicly available translation APIs, they weren’t trained with commerce specific data and thus weren’t accurate enough. So, we started training our own commerce-specific models and built a machine translation service on Google Cloud that’s now powering query, title, and description translation on our site. Training the model with commerce-specific data helped us achieve a 50% improvement in our BLEU scores for query translation as an example. We learned firsthand what a challenge building a localized site for China can be and how that can hold some back from experimenting in the Chinese market.
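BLEU, the metric cited for the 50% improvement in query translation, scores a candidate translation by its n-gram overlap with a reference. The sketch below is a stripped-down, single-reference version (real BLEU typically uses up to 4-grams and supports multiple references per sentence):

```python
# Simplified BLEU: geometric mean of clipped n-gram precisions
# (n = 1..max_n) times a brevity penalty.  Single reference only.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        # Clipped counts: a candidate n-gram only matches as many times
        # as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * geo_mean

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

A 50% BLEU improvement, as mentioned in the talk, means the commerce-trained model’s candidates share substantially more n-grams with human reference translations than the generic model’s did.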

So, today, we’re excited to announce we’re making that machine translation service available to third-party developers through eBay’s developer program, so that others can benefit from it as well. We’re starting with Chinese and will add other languages in the weeks and months to come. To close, the world of commerce is undergoing another round of massive disruption. The entire ecosystem must recognize the shifts in technologies and behaviors driving the change, and be agile and bold enough to capture this opportunity. Leveraging the Cloud, AI, and the fabric are some of the many levers we are pulling at eBay to do just that.

Thank you. (Applause.) >> Please welcome Dirk John. >> DIRK JOHN: Good afternoon. Airlines and technology are no strangers. In fact, airlines have been technology innovators and early adopters in distribution, analytics, and operations research. Let’s see how we tackle the challenge today. But before I go there, let me give you a quick introduction to LATAM Airlines. LATAM Airlines is the largest airline in South America. We were born from a merger of LAN and TAM in 2012. Today we operate the densest network in our region. We connect you within South America, and to the rest of the world. Every minute a LATAM aircraft is taking off somewhere around the globe. We bring dreams to their destination. We aim to create the best customer experience for our guests: through offering an excellent product, through excellent operational performance, and by taking care of your needs. Accomplishing this mission is increasingly difficult. We operate in an extremely complex environment. We are a multi-country airline in a highly regulated industry. We operate out of 6 base countries, all with their own specificities, regulations, and constraints. And yet we want to create a consistent experience for our customers in this environment, no matter where they fly with us. We have huge technology debt. We operate a technology landscape that grew over many years and that was not built for today’s requirements. One of our big aims is to create a seamless experience with LATAM in all interactions. We face an ever more demanding customer. More and more we serve pragmatic and empowered digital-native customers. These customers value time, like to have control and options, and want a good deal. We want to give these customers a great experience too.

Our answer to this challenge: going Digital. We believe that the best experience for our customer is a Digital experience. We’re going digital so we can implement an engaging, seamless, and frictionless journey. To be successful with that, we are building LATAM’s Digital future based on three key ingredients. Firstly, enabling cross-functional work and fostering collaboration. The borders of what used to be separate teams are disappearing; today we can only be successful in joint teams. We create a common goal and equip the teams with all necessary tools to fulfill their task. In many cases this requires a significant change in mindsets and the way we work. Secondly, we focus on fast, scalable impact. The demand for a short time to market and immediate impact is increasing. We rethink the way we create our Digital products to be up for this challenge and improve every day. Today we have the right tools to be up for this challenge. And thirdly, we leverage data to better understand our customers. We’re putting customer experience at the core of what we do, consistently delivering what customers want when they want it. Data and data usage is the key to achieving this. We are changing the way we look at data and how we manage data. Let me give you some practical examples of what we’re doing and how Google is helping us on this journey. To foster collaboration, we worked with Google to implement G Suite for all of our 42,000 employees in six months. This has had a profound impact on how we work together and communicate. Today we work on the same document with our teams in Chile, Peru, and Brazil while we discuss it in a video call. Instead of doing slides, teams present their results in Data Studio and give access to a broad community of co-workers.

We see how collaboration tools increase the quality of our results every day. We focus on strengthening our impact. To do so, we have adopted a cloud-first strategy using GCP. This allows us to focus on innovation and value, instead of managing systems – for example, creating airport approach paths using precise 3D digital elevation models so we can fly efficiently. On the top you see how we used to fly to Cajamarca, Peru, using conventional ground-based navigation. And this is today’s picture, using the new routes, flown completely automatically with high precision. For the future we want to continue adopting best-practice technology, such as serverless architectures, and moving to continuous operation. We use advanced analytics to suit our product to our customers. We have been applying data analytics for a long time at LATAM Airlines. However, recently we set up a fully cross-functional team for advanced analytics to accelerate our understanding of our customers and their needs. Being able to run TensorFlow models and use BigQuery after a few weeks of implementation was only possible because we can rely on the GCP stack. This allows us, for example, to provide more dedicated and custom-made offers. What comes next? We will continue to improve the digital experience of our customers every day. This means we will improve the interaction with LATAM through the use of technology.

Another big focus will be to further improve our customer insights through the use of analytics, to be able to offer better products and services. And we will continue to reduce our technology debt to be able to accelerate and scale. We will continue building the capabilities for an agile, cross-functional digital team. Collaboration and cross-functional work models will be normal in our daily work and work environment. At the same time we will develop new models of collaboration with academia, startups, and other corporations. And we are looking forward to absorbing the next wave of technology innovations. We believe having the right team, strategy, and partners will allow us to have a successful Digital journey. Thank you.

(Applause.) >> Please welcome, Darryl West. >> DARRYL WEST: Good afternoon, everyone. I have the honor and privilege of leading a fantastic IT team at HSBC, a company that is getting faster and better for its customers. We have an aspiration to be the most awesome IT team on the planet in financial services, and we see our purpose as transforming the world’s banking experience. I’m going to talk to you about how we’re partnering with Google to be able to realize that ambition. For the Hong Kong and Shanghai Banking Corporation, we have all kinds of businesses that we work with. We are the number one financier of cross-border trade, and we are a significant trader in the foreign exchange markets, so we are a central part of global commerce.

We have $2.5 trillion of assets on our balance sheet, but more importantly we have a huge data asset at the core of the company. This year we will pass 100 petabytes of data, and within that data there is a massive amount of insight, and we need to leverage the tools to get at that insight to be able to manage our business better. So last year when I was here we talked about where we wanted to get to: to be a Cloud-first company. And HSBC is no different than other global enterprises. For many years we tried to build and provision our own infrastructure and run it ourselves.

We decided a few months ago that we should focus on our core business and partner with Google to do the heavy lifting on infrastructure. So we’ve moved through this journey over the last year or so to get to a point where we will have a managed service and elastic capacity, allowing our teams in the bank to focus on the data science, the data management, and the processing that enables customer experiences. Of course, as we have been going through this journey we are collaborating with the Google team on the platform. Now, we are a globally systemically important financial institution, and it’s a heavily regulated business. It’s an important tenet in the banking business that you have trust and confidence, and that the data is reliable. Anything that we do that departs from history is going to be a closely watched issue with the regulators.

So with this list of questions from the regulators, we spent the last year or so working closely with them, and given our geographic spread, pretty much every regulator in the world is looking at what we’re doing. So we had to invest a significant amount of time and effort to convince regulators that moving the data to the Cloud is a good thing, and I’ve taken the position that the more we do with Google Cloud, the safer and more reliable the industry becomes.

So we have worked with the regulators and got them to the point now where they’re comfortable. They have asked us to assess the risks end-to-end; we have to take the proposition to our managing board, and they asked us to address things like access and control. We are governed by rules in 67 countries around data sharing, so we have to be in compliance with those rules. We need to know what’s happened and how we can monitor logging on the Cloud, and how encryption is working, because the fact that the data is encrypted at rest in Google Cloud is fantastic, but the regulators are requiring the banks to be able to control and manage those encryption keys.

The last thing, and probably just as important, is how we work together. We’re a company that is 153 years old; we have a certain culture and a way of doing things, and partnering with a West Coast tech company required a complete change of mindset and approach, and one that required an agile and DevOps method to get there. We have done and answered all of these questions, and we’re in a good space moving forward. We’ve collaborated closely with the engineering team; we formed a joint team just over a year ago to work through the challenges. We’ve created a Cloud framework for banking, which I think will become the standard model for financial services around the world, describing the controls you have to have: extracting your data from your core systems, putting it onto the Cloud, making sure it’s encrypted well, the access is controlled, and you have the right logging. All of those controls have been studied, and through the course of our collaboration with Google we’ve seen modifications and improvements to the Google offerings in the space of customer roles and regional support, and I mentioned the logging for Virtual Private Cloud and the encryption keys.

It’s done and now ready for prime time, and we’re going on a sort of global tour to convince all the regulators that I deal with that this is going to be a better way of running the banking business. So the results so far have been actually quite stunning. A year ago I stood up here and talked about the various use cases in the banking world: financial liquidity calculations; stress-testing calculations that the finance teams have to do; market risk, where they have to run calculations to measure market risk; and financial crime – the ability to ingest billions of transactions, look for nefarious patterns in those transactions, and use this platform to identify money laundering within our global network. So as we stand here we have two important use cases live, and we’re able to calculate global liquidity for a country in minutes rather than hours, and critically, using BigQuery we are able to run this with a much higher level of accuracy. So this has been a fantastic outcome for us, and it’s given us the confidence to progress.
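The liquidity calculation described is, at its core, a massive aggregation over position rows – the shape of workload BigQuery parallelizes well, which is why it runs in minutes rather than hours. As a toy illustration of the computation only (the schema and figures here are invented, not HSBC’s), this is the GROUP BY in plain Python:

```python
# Toy stand-in for the kind of GROUP BY aggregation BigQuery would run
# over billions of position rows.  The schema and values are invented.
from collections import defaultdict

positions = [
    {"country": "UK", "currency": "GBP", "amount": 120.0},
    {"country": "UK", "currency": "USD", "amount": -30.0},
    {"country": "HK", "currency": "HKD", "amount": 75.0},
    {"country": "UK", "currency": "GBP", "amount": 45.0},
]

def liquidity_by_country(rows):
    """Net amount per (country, currency) pair, i.e. the equivalent of
    SELECT country, currency, SUM(amount) ... GROUP BY country, currency."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["country"], row["currency"])] += row["amount"]
    return dict(totals)

print(liquidity_by_country(positions)[("UK", "GBP")])  # 165.0
```

In BigQuery the same logic is a single SQL statement; the engine distributes the scan and aggregation across thousands of workers, which is what turns an hours-long batch job into minutes.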

We have a long list and batting order of new workloads to come over the coming months. In terms of what’s next for our relationship with Google, we really enjoy our interaction with the leaders here at Google. The first thing we want to do is double down on our data agenda. We’re going to complete, over the next 18 months or so, the wholesale migration to the Google platform, and only this week we publicly announced our seven-year strategic alliance with Google as a partner in this space. On data management we have a similar problem to most big companies. Historically we have not had great data architecture, and it’s been difficult to understand where our data is, so we have worked with the Google team on how to catalog our 100 petabytes of data, and we have worked with the engineering team to come up with tools – you will hear about those tools at this event next year.

More importantly, as to what you heard this morning from Urs, it was great to see HSBC’s name up there when we talked about the announcement. We worked closely with his team in developing the whole suite on-prem, and it’s important that we have Kubernetes available to us as we do that migration over to the Cloud over the long term. We’re proud of our partnership with the Google team, and we have defined a new stack for our next-gen set of applications, and we will leverage the tools Urs talked about this morning. We are now developing mission-critical applications for our business. For example, our business banking franchise, which is a huge franchise globally for us across 67 countries, is developing a whole new channel on the native stack, which shows the confidence we have in the tools on offer from this great team at Google.

So in closing I just want to say that we have a new chief executive, John Flint, who was appointed back in February, and we have refreshed our strategy. We talk about commercial growth being our primary driver, and it’s clear to me and the executive team that partnering with Google and enabling our Cloud journey is going to be great. It’s been fantastic; we hope that continues, and together we will thrive. Thank you very much. (Applause.) >> TARIQ SHAUKAT: Thank you, Darryl. There are a couple of messages that I wanted to share with you and a couple of closing remarks. What struck me from the talks is how consistent they were in a number of different areas. Every company that was up here, and I would submit every company out there, has to collaborate better. Every company wants to be more agile and is turning itself, if it isn’t already, into a data and insights and analytics company. Every company is obsessively focusing on the customer experience, and they’re doing that more and more in real time, more and more for the new consumer and customer out there that has changing needs.

When you really boil it down, every company is becoming, as you can hear from the speakers today as well as throughout Next, a tech company in many ways, and that means they are embedding the capabilities you have in the Cloud, with AI, into the heart of their business. Not as an adjunct or something separate on the side as an experiment, but at the heart of the transformation journey that they are undertaking at the moment. We are thrilled by the partnership model that we’ve been able to create with all of these companies. It really is core to our DNA, and core to what we are trying to do within Cloud, to view these transformation journeys as partnership journeys with Google and with our partners, the ecosystem, but with these companies that are looking to transform themselves as well. We’re incredibly honored and gratified to have these companies describe their journeys to you.

You will hear more customer stories like this: how they are becoming more agile, creating better experiences, and tapping into Big Data and analytics and machine learning. I encourage you to spend the time to go and listen to all of those stories and hopefully take some inspiration and lessons learned from them. So if you can, join me in giving a big round of applause for the speakers we just had, and thank you very much. Enjoy the rest of Google Cloud Next. (Applause.) >>> Good afternoon. Please welcome Oren Teich. >> OREN TEICH: Good afternoon, and thank you for joining us here. It is an incredible conference so far.

I hope everyone’s really enjoying it. If you haven’t had a chance, I strongly encourage you to find some time to check out all of the different expo spaces. I happen to be a little bit partial to the one downstairs, directly below us on the first floor; there’s a really cool air cannon you can fire. I’m Oren Teich, director of product management at Google. Obviously today we’re going to talk about serverless. I hope you guys like that stuff. I’ve only been at Google myself for a little over a year, but I’ve been in the industry for a long time, and really my whole time has been focused on what we can do for developers. And part of why I’m so excited to be here at Google is the opportunity that it gives us to not just offer you a single-point solution but offer you the whole comprehensive thing. Today I’m going to talk about a lot of different components. And what I want you to know is it is a lot. It is complicated. There are a lot of pieces. And what we’re trying to do is offer a very comprehensive solution for our customers.

And also keep in mind that this is a point in time; you know, today is July 24th or whatever of 2018. And there’s going to be August and September and July 24th of 2019. And we’re constantly iterating on this. And part of why I bring this up is I want to hear from you as well. I want to know the feedback you have, because we’re trying to build the best product for you. And so please, by all means, come up, send me an e-mail, my last name @Twitter or @Google or whatever. But let us know how we can make it as serverless as possible. I think it’s always important, because we’re never going to get over it, to make fun of the word first. If I could choose any other word in the world, I would not choose — I wouldn’t have the word serverless.

Clearly there are many, many servers that are behind things. And in fact, no surprise, you know, Google is one of the biggest server companies in the world, right? We make our own hardware. We have incredible data centers. And I think this is really important because it’s not just about the physical machines, of course. It’s about how you hook them all up, what’s involved in getting them together and maintaining and operating, and it’s also about the people that are behind it, right, and how we actually keep everything running.

And in and of itself, that’s important and exciting. But there’s another piece. And I don’t know if you know this company, Veolia. They’re a huge French multinational company. They used to be part of, I think, the Vivendi Group. They have 350,000 employees. They take care of trash, electricity, a lot of municipal services across Europe — actually, in 48 countries, last I checked. And I was meeting with their CIO earlier. And he said something that I hadn’t even considered before. And it really struck me and has me thinking quite a bit. He was talking about the importance of — we live in a scarce-resource world. And the reality is that as we move more and more into computing, as we take advantage of compute more and more, the resource costs can go up.

And it’s his argument that it is our moral responsibility to use these resources efficiently. That the only way we can really be looking toward the future, as software eats the world, is having 100 times more use of software without using 100 times more resources along the way. And their argument is serverless is the way they’re looking to do that. I think that was really insightful. I think it’s something I’m going to go back and look into. So, you know, at the end of the day, we have these serverless products. And I think just to recap, why are the people we see choosing it? And what we hear all the time, right? I mean, this is motherhood and apple pie, right? What we hear all the time is serverless enhances developer productivity.

And that last one, pay only for usage, is often the way it all gets reduced down. Fully managed security. You know, of course, you’re not having to spin up servers. But this is one piece of the overall puzzle. In fact, it’s one that I want to expand upon a little bit more, because on its own it’s not sufficient. We’re going to talk about why that’s not sufficient in a second.

But before we do that, I actually want to get Deep up here. He’s the executive director for The New York Times, responsible for a lot of their technical architecture. Deep, come on up and join me on stage. >> DEEP KAPADIA: Hi, guys. >> OREN TEICH: So I’m not sure if everyone’s aware, but The New York Times has been a longtime customer of ours. Deep, maybe you could give us a quick overview of how you use our products. >> DEEP KAPADIA: Absolutely. So we decided to migrate from our data centers to the public cloud a couple of years ago. And Google seemed to have a very compelling offering, especially when it comes to the abstractions that are available to developers. One of the big things was the serverless platform, or what we knew as App Engine at the time. And, of course, the offering has expanded to other things at this point. But we started looking at App Engine for workloads that we wanted to scale pretty fast. A lot of the traffic to The New York Times is very spiky.

The first thing that we looked at was our crosswords product. Crosswords is our highest-grossing product at this point. So it was a bit of a chance that we took, but we looked at how the scaling needs worked for our crosswords app. And what we found was, when we publish a crossword, people just have at it right away. They want to solve the crossword then and there.

And they’re trying to download the crossword and, you know, solve the puzzles online, et cetera. And we just could not keep up with the scaling needs. So we would just overprovision infrastructure for the crosswords app. And when we started looking at App Engine and its scaling abilities, we found that it was a perfect fit for these spiky workloads that we have for the crosswords app. And maybe for other things, too. You know, Oren and I were talking about using it for our breaking news alerts at some point, where we could send out a breaking news alert and quickly ramp up to the number of requests that we’d need to serve at any given time.

So that was one thing that we looked at. So App Engine was our first major foray into application development. Before that we had been using BigQuery for a little bit, which I’ve always said was our gateway drug into Google. But, yeah. >> OREN TEICH: And on that, are you only using App Engine and BigQuery? How do you look at the overall platform today? >> DEEP KAPADIA: No, we don’t. We use a lot of Kubernetes.

We use GKE for workloads that may not fit the App Engine model in some ways, where we need to do something very specific that doesn’t fit the App Engine model. So we doubled down on both App Engine and GKE during the cloud migration. At this point all of it runs on GKE. So that’s another thing. But on the App Engine side, we have about 30 services and applications that run on App Engine. >> OREN TEICH: And it’s not that you just developed for it, you’ve also created some incredible source repositories. >> DEEP KAPADIA: Absolutely. There are a lot of open-source repos to go look at. Some of them are also built around App Engine as well.

>> OREN TEICH: Cool. Thank you, Deep. I really appreciate it. >> DEEP KAPADIA: Thanks. >> OREN TEICH: So, you know, obviously we have other customers; The New York Times is just one of them. Here are some others that I really enjoyed hearing some quotes from. I’m not going to read you the quotes. You can do that on your own. I will call out, in that upper left corner, the Smart Parking one. There’s a fantastic session that’s coming up later held by Morgan Halman. He’s going to be talking about doing more with less. And he’s featuring the architecture of what Smart Parking has done. It’s really, really interesting to see not just how people are using serverless for the operational benefits but also for the programming benefits. Because obviously, as you decompose your application into small pieces and move into event-driven workloads, you can start to think about how you do your computing differently. And Smart Parking is a company that’s built entirely around this concept. And it’s really remarkable to see this collection of IoT devices that are sending signals out across an entire city and how they aggregate that for parking needs.

So I strongly encourage you to take a look at that session. That’s a good segue: even understanding the operational benefits that we get, let’s not be fooling ourselves. Writing software is still very, very hard. And what we hear from our customers all the time is that they come to Google because they don’t want to be in the infrastructure business. In fact, Nick Rockwell, their CTO, who’s speaking at the same time, said it flat out when we were having a meeting a few months back in New York: The New York Times is a news business; Google is an infrastructure business. That’s the core value of why people come to us and to GCP. But there’s another part, too. How can we help you not just solve the infrastructure but solve the application development problem? And I think this, to me, is one of the most exciting pieces about serverless: it gives us an opportunity to start to revisit a lot of the core things that were done before.

Hey, maybe we don’t need to have IP-based security. Maybe we can start to reimagine and re-architect things for a more modern world. And so part of it is, how can we take all these different pieces which historically you’ve had to think about, right? Oh, what monitoring solution do I use? How am I going to configure my networking? All the pieces. And frankly, this slide is on the lower end of what the boxes look like.

I’ve seen versions of this slide which literally have 10, 20, 30 more boxes than this. And we want you to just be able to focus on building that application. And so instead — so, you know, I showed this slide earlier and we talked about the operational model. I would actually say that serverless is made up of two things. I talk about this all the time right now. There’s the operational model, which I think we all understand. But equally important is the programming model.

And by the programming model, I mean, of course, that it’s service based. We can argue about monoliths versus microservices. By the way, short answer: there is no one right answer, and chances are you should start with the monolith. But anyway, you’re still going to have a services-based ecosystem. You’re going to be using BigQuery. You’re going to have a whole set of services around it. Event-driven is incredibly important in this, right? What I love about event-driven architecture is you’re shifting who has to do a lot of the heavy lifting. You let the infrastructure provider, us, do it. And of course, it has to be open.

And we’re going to talk about this more. You may have seen some of the announcements that came out. But one of the catches historically has been that if you buy into your programming model, you’re stuck with that programming model. And we want to make it possible for you to take advantage of all of these characteristics and do it anywhere you want. We think you’re going to do it on GCP because we think we have the best operational model, but we’re not going to force you to be there.

We’re going to talk about that in quite some detail. Now, I talked about services, right? And another way we can — someone — I don’t remember, a year ago I saw on Twitter, someone said maybe we should have called it serviceful. And I agree with that. Because, you know, of course, there’s compute, right, the ability to run some cycles. But the reality is, it’s all the services around it that really make it useful. And just as a thought exercise, how useful is something if you can’t store any data or have a cache, right? It’s awfully hard sometimes to do anything of any kind of scale there.

So all of these pieces, you know, Google has been in this business for a long time. And we do have all of these pieces. Given that I only have 37 minutes left, I am not going to talk about all these pieces, as much as I’d love to. And there have been some amazing announcements today that we’re not even going to talk about. For example, in that lower left corner, there’s a whole new product called Cloud Firestore. It’s incredibly scalable, built on a highly scalable backend. It has the same APIs available as Datastore. And now you get just a bigger, better product. That’s something we announced today, available for GCP. The list of things I’m not going to talk about is longer than what I am going to talk about, but I want you to know they’re there, and I encourage you to do more research.

We’re going to talk about the middle two sections. The compute side. So first, App Engine. You know, App Engine’s been around for a long time. It’s been around for over ten years. It’s a really remarkable product. It predates Google Cloud. App Engine is the O.G. serverless. It is a really incredible product. You know, and I think if you’re not familiar with it, just a quick recap. It lets you take app source code, deploy it, and it’s exactly what you want. There’s no configuration. And away you go. It scales; Snapchat famously runs on it. Best Buy too; you can see other customers using it. And you’re only paying for your usage, more or less. There are always small caveats in these things. It is old, though, and that’s not to say it’s without its imperfections. One of the biggest complaints that we had about App Engine for years is it had a lot of proprietary pieces.

So, for example, if you used the Java language, there was class whitelisting: there were only certain classes you could run. If you were in Python, you couldn’t take open-source software from out on the web and run it. I don’t know if you’ve noticed, open source is kind of a thing. It’s worth noting ten years ago it wasn’t. When this came out, the whole point of App Engine was that you’re standing on the shoulders of Google. What’s shifted is now we need you to stand on the shoulders of the community. So we’ve been thinking about this. And we’ve been limited because of the security needs, right? Because of the way App Engine runs, we’re letting anyone run any arbitrary software, which is a huge security risk if we don’t manage that correctly.

That’s why historically it’s been very carefully managed. One thing Google has is smart engineers. And so for many years we’ve been working quietly in the background on a project that we announced called gVisor. And it was specifically designed to address this problem. So what gVisor is designed to do — and it’s an open-source project — is give you the security you’d expect from a virtual machine but with the performance characteristics you’d expect from a container.

And so, you know, it’s cool: it’s implemented in user space, and it’s written in Go, because Go. But it’s really been remarkable. And what we do is we actually implement the system calls ourselves: we intercept them, and because we’re intercepting them, we can inspect everything that’s going on. We can make sure there are no security issues. And because it’s Go, we have memory-safe and type-safe constructs in place. So this is now what’s underpinning many products at Google, especially all of our serverless products going forward. And this is what’s enabling us to do everything that I’m going to talk about for the rest of this talk. So the first thing it’s enabled is second-generation runtimes, all gVisor based for App Engine. And what we’re doing here is we’re giving you an idiomatic dev experience. Historically you’ve had to learn our ways of doing things. We want to bring the experience to you.

You want to install a package? Do it the way your language says to. No more API restrictions like I mentioned. And frankly, it’s just faster, if you do the benchmarking. And so I’m really excited to announce that, rolling out in the next 30 days, we will have Python 3.7 and PHP 7 available for App Engine standard. Thank you. This has been the number one request we’ve had, literally for years. You know, we, of course, have our bug-tracking system, and at the top of it have been these languages. People always ask, why is it taking so long? And the answer is we needed to overhaul our entire security infrastructure, this runtime execution environment. Now that we’ve done it, we’re able to get a number of languages in at once, and we can commit to being more up to date in the future. So next I’d like to talk about Cloud Functions. Hopefully you’ve all heard of Cloud Functions already. It lets you have very small snippets of code. It’s going to auto-scale with usage.
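Since the talk doesn’t show code, here is a minimal sketch of the kind of plain-Python app the second-generation runtimes can serve: a bare WSGI callable with no proprietary APIs and no class whitelist, just standard Python. Everything here is illustrative rather than taken from the keynote; on App Engine you would point your app.yaml entrypoint at a callable like this.

```python
# A minimal WSGI app of the kind App Engine standard's second-generation
# Python 3.7 runtime can serve directly. No proprietary APIs involved;
# any WSGI framework (or none) works the same way.
def app(environ, start_response):
    body = b"Hello from Python 3.7"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Local smoke test using the stdlib's WSGI reference utilities.
if __name__ == "__main__":
    from wsgiref.util import setup_testing_defaults
    environ = {}
    setup_testing_defaults(environ)
    captured = {}

    def start(status, headers):
        captured["status"] = status

    result = b"".join(app(environ, start))
    print(captured["status"], result.decode())  # 200 OK Hello from Python 3.7
```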

Really nicely, it lets you pair them to events, say a GCS storage bucket. And, of course, you have even finer-grained control, where you’re only paying when code runs. So this has been a long time coming. We’ve been working on Cloud Functions, and quite famously, it’s been in beta now for too long. So thankfully, as of today, it is fully GA. So I’m really excited: Cloud Functions is now GA, and it’s available as of today in four regions. You can get it in east and central as well as in Europe and Asia. And uniquely in the market, this is covered by the same SLA as the rest of Google’s products. People always ask, why is this taking so long? What was going on? Not only did we need to create this entirely new infrastructure for security sandboxing, but we also needed to make sure it was going to work with the reliability and quality that our customers were expecting. And that, you know, I’m embarrassed; it should be something we were able to do quicker. But that investment takes time. And it’s something that we can now stand behind, and we’re very proud to make available.
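For concreteness, deploying a function to one of the GA regions looks roughly like the following. This is an illustrative sketch, not a command from the keynote; the function name and region are made up, and the flags shown are the commonly documented ones for `gcloud functions deploy`.

```shell
# Hypothetical deploy of an HTTP-triggered function to a GA region.
gcloud functions deploy my-function \
    --runtime nodejs8 \
    --trigger-http \
    --region us-east1
```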

So today, just go sign up, use it. It’s available. There’s no sign-up list. There’s no — nothing you need to do. Now, that’s — thanks. That’s, of course, not all. You know, maybe we’d all want the world to be a Node world. I don’t. But — so obviously having additional languages is critical, too. This comes back to how we can make the experience idiomatic to what you and your company want. We came out with Node 6. Rolling out over the next 30 days, we’re going to have Node 8 as well as Python 3.7. And these are really, really nice changes. I want to call out that — one of my favorite books is by George Gamow, and it’s a book on math.

And the title is “One, Two, Three… Infinity.” You can look at a sequence: one to two to three, to infinity and beyond. This would make languages two and three, so I look forward to infinity coming soon as well. These are the ones we have available today. Getting the infrastructure in place to support these is great. These are, like I said, rolling out in the next 30 days. They’ll be available to everyone once the deploys are finished. Now, that’s not all, right, because, of course, GCF is a new product.

It has a lot of new capabilities that are necessary. So the first one: VPC and VPN. Something we hear all the time — and Deep talked about this — is we are not an island. No product inside of Google is an island. What I hear time and time again from customers is that they come to us for the platform of GCP. And what’s important is that the pieces work well together. And, you know, we can’t do everything from day one. We have to iterate our way there. These are key issues. So if you’re not familiar with VPC, Virtual Private Cloud, and VPN, virtual private network: what this lets you do is define a network where, for example, you’re bridging your on-prem and cloud, and you can control who has access to it.

This is super important. And we’ve heard, for example, from people who might be running an on-prem Cassandra cluster. With this new feature, you’ll be able to do that. This ties into the next one, which is security controls. Right now a function is public. It is on the Internet. If someone guesses that URL, they can execute it. With security controls, what we’re doing is putting in place IAM controls, using the exact same IAM you already know with new roles like Function Invoker, to restrict who is going to be able to execute it. Cloud SQL direct connect: a huge use case that we hear from people, of course, is I need to store data.

So now you’ll be able to directly connect with Cloud SQL. This is a fully supported path. And then finally, near and dear to my heart, is how you store those little bits of metadata associated with your app. So we’ve introduced environment variables. You know, our take is that with the new languages, with the GA, and with these core capabilities, we’ve now brought GCF to the place where it was meant to be. Just for clarity, these core capabilities are in various states; many are coming out in alpha. There’s a URL where you can sign up to get access to them, and all of them are scheduled for this year. We have a great GA product that I really am excited for everyone in this room to give a try and use. Now, I talked about, of course, events. And for the sake of time, we’re not going to dive into this today. There are other sessions. But one of the things I’m most excited about with serverless is the way that you can rethink your application architecture.

And if you’re not familiar with what we’re saying here, really briefly, just think about the canonical case of a photo upload. You need to upload it, store it, and resize it. Does that upload come through your app? Does it then decide how to store it? Do you then trigger some resize? The nice thing is you just have the client, whatever that client is, directly put it in a GCS bucket. That bucket generates a trigger, and then your code gets executed. So you can go from 100 lines of code to two, five, ten. Of course, GCS is a great example. Sometimes we get bored of hearing about GCS. One of the things to note is we have this incredible Pub/Sub integration, and so much of GCP is integrated. There are actually over 20 different services you can take advantage of today: for example, BigQuery, ML Engine, Stackdriver. Actually, one of my favorites is you can create an alert in Stackdriver that triggers a Pub/Sub event that calls a Cloud Function.
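The photo-upload flow above can be sketched as a background function wired to a GCS object-finalize event. This is an illustrative sketch, not code from the talk: the handler name is made up, the event fields (`bucket`, `name`) follow the GCS event shape, and the resize itself is stubbed out since the point is the trigger wiring, not image processing.

```python
# Sketch of a background Cloud Function reacting to a GCS object-finalize
# event: the client uploads straight to the bucket, the bucket fires the
# event, and this code runs. No upload path through your own app.
import os

def on_photo_uploaded(event, context=None):
    bucket = event["bucket"]
    name = event["name"]
    # A real function would fetch the object and resize it here; we just
    # compute where the thumbnail would go.
    base, ext = os.path.splitext(name)
    return f"gs://{bucket}/{base}_thumb{ext}"

# Simulate the trigger locally with a fake event payload.
print(on_photo_uploaded({"bucket": "photos", "name": "cat.jpg"}))  # gs://photos/cat_thumb.jpg
```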

Hey, I’ve noticed we have a failure condition: when a machine in my data center crashes, we want to trigger something. You can do that, very simply and fully managed: create the trigger in Stackdriver, push it through Pub/Sub. So these are all available with GCF. They’re working great. And we see, in fact, a huge amount of adoption come through these products today. Now, even with everything that I’ve outlined, that’s not enough. Right? And when we are out talking to customers, what we consistently hear are two additional problems. So one is dependencies: it’s great that we support Node 8 and Python 3.7, but what happens if I have some random — or not so random — dependency? Actually, ImageMagick or FFmpeg are the canonical examples that come up every single day.
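The Stackdriver-to-Pub/Sub-to-function chain can be sketched the same way. One real detail worth knowing: Pub/Sub delivers the message payload base64-encoded in the event’s `data` field. The handler name and the alert fields (`incident_id`, `state`) are made up for illustration.

```python
# Sketch of a background function triggered by a Pub/Sub message, e.g. one
# published by a Stackdriver alerting policy. Pub/Sub hands us the payload
# base64-encoded in event["data"].
import base64
import json

def on_alert(event, context=None):
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # React to the alert here: page someone, restart a job, log it, etc.
    return f"alert received for incident {payload['incident_id']}"

# Simulate what Pub/Sub would deliver when an alert fires.
fake_alert = {"incident_id": "inc-123", "state": "open"}
event = {"data": base64.b64encode(json.dumps(fake_alert).encode("utf-8"))}
print(on_alert(event))  # alert received for incident inc-123
```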

You know, I want to resize an image. I want to transcode some video. And I don’t like the libraries that you’ve included. Historically, on a serverless platform, the answer’s been “tough,” or you jump through some crazy hoops: custom-compiling binaries and uploading them into the blob. Things I don’t think anyone really wants to do. The other one I alluded to earlier is people say, hey, this is great, but how do I run these workloads elsewhere? How do I make sure that I can have the portability I’m looking for? And so to address that, I want to introduce a new concept here. And we’re going to build off of this concept to some new products.

So the new concept is a serverless container. And what’s a serverless container? It’s just a descoping of a container. If you’re familiar with containers, they do many things. But one of the key things is they hold your stuff, right? And if we define a little bit more what has to be in that container, it gets us wonderful things. So in our case, with a serverless container, we’re going to say they’re stateless: don’t write to local disk and expect it to persist. This isn’t for long-running 6-hour, 12-hour, five-day jobs. They’re based on request and event response; let’s be explicit, that’s the canonical example. They’re going to auto-scale and have health checks. There’s actually a spec that we’ve published; I’ll show you the URL later. So if we define and scope these containers down, one of the benefits is we realize that this is actually what we’re doing today with App Engine and Cloud Functions under the hood. This is how we’re actually running things. So we realized, why don’t we expose that to our users? And so what I’m really excited to announce is serverless containers on GCF.

This is something that’s coming out later this year. So it’s EAP now. We do have customers using it. And you can see there’s a sign-up link down at the bottom. Please sign up. And the idea here is we are going to take a container that matches that serverless container spec, it’s going to be fully managed, and we’re just going to run it for you. And so what you can now do with this is, maybe you just have one small open-source package you want to install. Maybe you have some proprietary binaries you can’t give us the source to. That’s fine. Maybe you want to integrate this with your workflows; people have CI/CD. What we’ve realized is that containers are not just a packaging format, they’re the interchange format as well. All of those things are now possible. And so what I want to do next is give you a demo. And so we’re going to switch over to the demo computer.

So what you can see is pretty easy, right? What we have here is just a Dockerfile. There’s literally nothing special about it. This is as boring a Dockerfile as one could ever hope to have in their life. And you can see, though, that in the Dockerfile, we just installed one additional package with apt-get. This is an open-source 3-D rendering package. This is not something you’ve ever been able to do with serverless before. Oh, I want to 3-D render an image and be fully managed? You’ve been out of luck. Now you just specify it here. And then you build it. You can build it locally, or with Google Cloud Build; we don’t care. We’ve already built it. We’re taking advantage of the fact that it’s already up on our container registry, Google Container Registry. We don’t care where you build it.

And what he’s going to do is hit enter and deploy it. What I want you to note is he’s using the same command, gcloud functions deploy. He’s just added the --image flag. If he left it off, it would be expecting source code and building that. In this case, we’re giving it that built Docker image.

There’s nothing special about this Docker image. This takes about 30 to 45 seconds to run. It is an EAP; I promise it will get faster. But now what you can see is this is running live. I mean, if anyone’s able to get that URL and copy it, you’ll see it yourself. And you’ll notice that what we have here is we’re actually using this open-source package to 3-D render in not quite realtime, but pretty darn fast. If it’s not cached, it usually takes about half a second. And it just renders some text. Now, we’re not the best designers in the world; this doesn’t look like the most amazing thing. But what we have here is literally something I don’t think we’ve ever been able to do before, which is take a cloud function, have a custom piece of software installed into it, and have that run with all of the experience that you’d be used to, the same commands and everything.
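The demo’s Dockerfile isn’t shown in the transcript, but a deliberately boring one in the same spirit might look like this. The base image, package name (a stand-in for whatever 3-D renderer the demo used), and paths are all hypothetical.

```dockerfile
# Illustrative "boring" Dockerfile: one apt-get line adds a native
# dependency no serverless platform would previously let you install.
FROM debian:stretch-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends povray && \
    rm -rf /var/lib/apt/lists/*
COPY app /app
CMD ["/app/server"]
```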

He’s going to stay up here and join me on stage for a little bit more. We’re going to switch back to the slides for just a few seconds. Because this is now point number two. Remember, I said point number one, the blocker that we hear from people, is that they’re concerned about the ability to run arbitrary additional dependencies. We just showed you that. Point number two is about consistency. And one of the things we hear is, hey, you know, for whatever reason, good, bad, or ugly, we want to run things on-prem. And there are really good use cases, by the way. One of my favorites is one of our big customers, an oil services company. And they have giant oil rigs in the middle of the ocean. And it turns out that oil rigs don’t have good bandwidth. And they generate terabytes of data off of these oil rigs. And they want to be able to process that data.

And so they’ve installed, or are in the process of installing, Kubernetes clusters onto oil rigs, because you just can’t get a terabyte of data a day off of a — actually, I’m sorry, it’s petabytes a week. It’s literally impossible. I guess you send helicopters, right? So in their case, they can do all of this processing on site. But what they keep on saying is, yeah, but we don’t want to have to write different code. 90% of the time we’re running it in the cloud, and this is great. What do we do when we have our on-prem needs? What you may have seen in this room just before, an hour ago, Aparna and Chen were talking about our story.

So GKE On-Prem is the packaged distribution that allows you to get a full GKE running in your data center. This directly addresses one of the biggest concerns that our customers have: how can we have the same environment running in the cloud and in our data center? And so we’re taking that one step further as well, in collaboration with them. And what we’re doing is we’re introducing a new add-on. So the GKE serverless add-on takes the exact same API that we expressed and showed you before, and now lets you deploy the same exact workloads onto GKE. Again, there’s a sign-up link; take note, and we can share this later. If you’re interested in participating in this, please do take a look and sign up for it. And the idea here is, actually, let’s show and not tell. So the idea here is exactly what we showed you before. So let’s actually do a demo.

So we’re going to go back to the demo. Here’s what we’re going to do now. We’re going to make no changes except to the command line. So he’s going to deploy the exact same function, right? The same image. Instead of using gcloud functions deploy, we do have a new namespace. And we need to add one more additional piece of information, of course, because now we’re targeting a cluster: we need to say which specific cluster we want to go to.

So he’s going to deploy it to his very aptly named “my cluster.” And away we’ll go. I want you to note that this is using the same command, right? The gcloud command. It’s the same experience we’re used to. And this one actually takes about 15 seconds; it’s actually faster in general to deploy to this. And so right now, what’s going on? We are spinning up a new service in GKE. So this is running in a Kubernetes cluster. And if you’ve managed Kubernetes before, you know that getting a service running means a lot of YAML, a lot of configuration.
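Based on the description above, the two deploys differ only in their target. The exact EAP flag names weren’t shown on stage, so everything below is a hypothetical sketch of the shape of the commands, with made-up project, image, and cluster names.

```shell
# Hypothetical: same image, same command family; only the target differs.
# Fully managed serverless containers:
gcloud functions deploy render --image gcr.io/my-project/render

# GKE serverless add-on, targeting a specific cluster:
gcloud functions deploy render --image gcr.io/my-project/render \
    --cluster my-cluster
```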

And oftentimes that’s what you want. But as a developer, sometimes you don’t, right? Sometimes you don’t need all that. And in fact, all those sharp edges can cut you. What we want to give you here is the experience of simply deploying. So you’ll see the URL’s different. This is a live demo, actually running on a Kubernetes cluster. And, you know, it’s the exact same code. We have had to make absolutely no changes of any kind. It’s worth noting that this cluster is provisioned with really beefy, big instances, so when we’re doing this computation, it runs even faster.

So this points to one of the benefits that you have. You are — back to that model: there’s the programming model and the operational model. You are, of course, trading off some of the operational model. It’s running in your cluster; you have to manage your cluster. But what you do get is the control that comes with it. If you want to run 64-core VMs, go for it. You can do anything you want in this case. It gives you a lot more flexibility. Thank you very much. Now, it’s not sufficient to simply give you the software and have it managed.

One of the things we hear all the time is just how important freedom of choice is, right? I was actually talking to the chief architect of a huge financial services company (they won’t let me use their name on stage) a few months ago. And they have a giant Kubernetes cluster. Nothing to do with GKE. They have a giant Kubernetes cluster that they use for their financial analysis right now. And they have over 10,000 people who they consider developers inside of their company. And they’re saying, how do I enable this productivity in the clusters that we already have? And we hear this all the time.

People saying this is great, but it’s not sufficient. It’s great that you have the best thing on GCP, but how can we use everything at once? And so I’m really excited to bring all of this together by introducing to you a new project. It’s called Knative. And someone’s excited. This is a new open-source project. It’s created by Google but done in conjunction with a huge number of partners. And the whole point of Knative is to bring the building blocks of serverless together and to give you workload portability. So Knative is, like I said, built on top of Kubernetes. It’s about portability, and key primitives. So specifically, today Knative has three primitives built into it. It has build: you can take source code and actually get a container. Because, by the way, even though it was easy to build a container, I don’t want to deal with containers myself.

So we build that. Serving, right? The piece he just showed you, where he types it and it runs — that’s what serving does. There’s nothing you need to worry about, right? There’s none of the configuration pieces. And then event binding, which we’re not even demoing today. Right? How do you have a standard mechanism for binding to events and making those available? Now, if we did this by ourselves, I think that would be interesting. But ultimately our goal is to bring the serverless world together around this. And so from that perspective, we’ve been working with a large number of partners for many, many months on this: SAP, IBM, Pivotal, Red Hat, and many, many others. In fact, if you search Twitter for Knative today, you’ll see a pretty remarkable stream of blog posts, of announcements, of companies announcing their strategies behind this.
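To make the serving primitive concrete, a Knative Service manifest from that era looked roughly like the following. This is a hedged sketch of the early (v1alpha1-generation) resource shape, with an illustrative service and image name; the point is that you hand serving a container image and it handles routing, revisions, and autoscaling.

```yaml
# Sketch of an early Knative Serving manifest (names illustrative).
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: render
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/render
```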

And it’s something — our hope, our aspiration, our goal here is to enable a whole flourishing ecosystem of serverless products to come out but to enable those in a compatible, portable way so that as a developer you can take advantage of the great serverless concepts. But also take advantage of the specific pieces that each company is going to provide. So to that point, I’d love to bring up one of our partners. So Michael, if you wouldn’t mind joining me, Michael Wintergerst is from SAP. Come on up. And we’ve been working with SAP now for quite some time on this. In fact, the whole front row is SAP, by the way. And I suspect there’s more of them out there. So we’re all very excited. Michael, thank you. >> MICHAEL WINTERGERST: Hi.

>> OREN TEICH: Tell me a little bit about this. >> MICHAEL WINTERGERST: I’m responsible for the enterprise platform as a service offering. It’s all around open-source frameworks. We have, for example, Kubernetes, Cloud Foundry and Knative, and serverless computing brings us the possibility to ease the whole development of our business applications. For instance, all our developers are telling us, hey, it’s great that you have all these nice services, but here’s my code. Just run it. So that is the paradigm our developers have in mind. And therefore serverless, with the idea to hide all the complexity of server infrastructure, networking, storage, CPUs, auto scaling, that’s really great. It brings a lot of capabilities to our application folks.

And with Knative, you are adding a lot more capabilities. As you said already before: here’s my code. Create a Docker image out of it. Register that in a Docker registry. Deploy it on Kubernetes. And also bringing the eventing stuff inside. That is key for our customers and partners in order to create business services and enterprise-grade applications. >> OREN TEICH: And you’re not just looking at using it as an integration piece, but you’re building your own product on top of it as well? >> MICHAEL WINTERGERST: Exactly.

I’m super excited that we announced this morning our first open-source lighthouse project, the Kyma project. So kudos to our C4 guys sitting here in the front. They made that happen. So what is it all about? Please have a look; it’s on GitHub. It’s an extension factory for our digital core. So maybe you know that at SAP we are adopting the bimodal idea. It’s nothing that we have invented; it’s coming from Gartner. They shaped the term two years ago. So we have a digital core, the big enterprise resource planning systems we have. That’s our mode 1 environment, where the customers are running their day-to-day business. But on the other hand, our customers would also like to say, hey, I would like to get innovations like blockchain, machine learning, all of those capabilities, also in their mode 1 environment. And that’s why we said at SAP it makes sense to have an innovation platform where we are running the complete innovation stack on top. It sits on top of the SAP platform, between the mode 1 and mode 2 environments, allowing you to create extensions to your mode 1 environment so that you can run all the nice machine learning features next to your mode 1 system without disrupting it.

So that is our approach. >> OREN TEICH: And I hear from customers all the time that one of the confusions that comes up around serverless is they think it’s just about greenfield. But I can’t rip and replace everything. I have an SAP system. >> MICHAEL WINTERGERST: Exactly. >> OREN TEICH: Of course we’re not going to replace it. There’s no world in which anyone would want to do that. How do I extend it and how do I do the new development, right? I think that’s what it’s all about. >> MICHAEL WINTERGERST: Exactly. When you have, for example, a system coming from SAP and an order gets created or a new shopping cart gets created, you would like to create an extension.

So you would like to get functions which connect to this event when an order gets created in our mode 1 environment. But behind it you have the machine learning capabilities. And you can bring in those capabilities without changing your mode 1 environment. And that is the key benefit we are getting out of it. >> OREN TEICH: I’ve got to say, it’s literally what every customer I know asks for. And I’m just so excited that you were able to join us and that you’re able to participate. Thank you. >> MICHAEL WINTERGERST: Thank you. >> OREN TEICH: So I would strongly encourage you to try Knative. It’s early days. There’s lots of people building on it. In case you were wondering, Knative is not designed for the end user. It’s designed to be the infrastructure pieces that enable products to be built on top of it. Kyma was built on top of it. We’ve built our Kubernetes serverless add-on on top of it. If you’d like to get involved, see what the open source looks like, and see how we’re doing scale to zero on a Kubernetes cluster, go take a look at what we’re doing.

So this is the last slide today. And what I wanted to do is give you a quick overview of what we talked about, including some of the things we didn’t talk about. So when we talk about serverless right now, of course, there’s App Engine, right? And a recap: with App Engine, it’s faster. It’s open. We have the popular languages, with more coming constantly. And I didn’t even have a chance to talk about it being HIPAA compliant. We’re seeing more and more customers do incredible things with it. On Cloud Functions, we have, of course, the GA. New regions. The new languages. And these capabilities. I didn’t even talk about scaling controls, by the way. One of the things that comes up quite often is you can have a cloud function, and you’re doing resource pooling. You don’t want to have one function starve the database if it gets scaled up really high. How do you actually set limits around what this looks like? What you can see is we’re moving from the basic enablement capabilities to what it really looks like when I have 500 functions, 50 functions, when I have complex systems. We talked about serverless containers.

And the ways that we’re expressing that: through GCF coming soon, through the serverless add-on and through Knative. And I didn’t even have a chance to talk about the green gear. But I’m going to give it a few brief moments. I just want to call out that we’re also taking a lot of the services that have been built into App Engine historically and we’re making them first-class citizens within all of cloud. So, you know, scheduling: cron in the cloud. How boring is that? It turns out it’s actually really, really important. I was just talking to a customer two days ago who was just, like, I just want to run something once every 24 hours.

Help. That’s really, you know, a standard use case. Maybe it’s every five minutes, every 24 hours. So Cloud Scheduler enables you to do that. That’s coming out later this year. Cloud Tasks. Tasks was a system built into App Engine. Tasks is a mechanism for doing interprocess communication. So if you want to fire off one thing to another and you need a place to store them in between, that’s what it does. I already mentioned Cloud Firestore. So all of these make up what we think of as serverless. And they’re just the first step. And we’re really excited about what you might be building, where you might be going, and I hope, I hope that you’re excited about this, that you’re going to go to some of the other sessions and that you’re going to try our products out.
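The Cloud Tasks idea described above, fire off work in one place and have a worker drain it independently, with storage in between, can be sketched in miniature with Python’s standard library. This is only an in-process analogy (Cloud Tasks provides the same pattern as a managed, durable service, with retries and rate controls):

```python
import queue
import threading

# A minimal local stand-in for the Cloud Tasks pattern: a producer
# enqueues work items, and a separate worker consumes them on its own
# schedule. The queue is the "place to store them in between."
tasks = queue.Queue()
results = []

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel value: stop the worker
            break
        results.append(f"processed {item}")
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

# The producer fires and forgets; it never waits on the worker directly.
for job in ["resize-image", "send-email", "update-index"]:
    tasks.put(job)

tasks.put(None)                   # signal shutdown
t.join()
print(results)
```

The single worker drains the FIFO queue in order, so the producer and consumer are fully decoupled in time even though they never call each other.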

Thank you so much.

[The session ended.]
