AWS re:Invent 2021 – Keynote with Adam Selipsky

Please welcome
the Chief Executive Officer of Amazon Web Services,
Adam Selipsky. [music playing] Wow! Hello, everyone,
and welcome to AWS re:Invent. Thank you so… Yeah. All right. [applause] Thank you so much for joining us,
it is so great to be back here with all of you
on the 10th anniversary of re:Invent and the 15th anniversary
of AWS. Whether you're amongst
the tens of thousands that are here with us live, or the hundreds of thousands joining
us virtually from around the globe, we have put together a full throttle
experience for you this week. Of course, we also want
to acknowledge how lucky we feel, I feel, to be here with you,
given the global health crisis that continues in so many places
around the world and the toll that
the pandemic has taken. We're also very aware of
the economic crisis that's followed and all the challenges
so many companies and so many families have felt.

And we continue to partner
with as many organizations as possible, including a lot of you here
in this room to help alleviate these crises. Given everything happening
in the world, we really appreciate you
taking the time to be with us. No matter where you're joining from, I assure you that
we're going to do our best to make sure this week
is spent connecting with and celebrating
our entire global community. [applause] Now, as always, we designed re:Invent
to be a learning conference. So, our goal this week is to arm you
with all the knowledge and all the connections
that you need to get true value for your journey to the cloud.

And we sure have come a long way
together in the past 15 years. It's hard to believe that
when we first got started, the concept of cloud
computing barely existed. But back then, IT and infrastructure
just weren't working. It was expensive. It was slow. It was inflexible. It suffocated innovation. And of course, it was dominated
by old guard vendors who loved the expense
and the lock in. But we knew there had to be a
better path forward for all of us. So it started for real in 2006, when we launched the first
generally available AWS service, the Amazon
Simple Storage Service, or S3. That's… Yeah. [applause] Nothing like it, at 15 cents
a gig, pay as you go. And it freed developers from having
to create expensive data
storage systems of their own. And then just a few months later,
five months later actually, came the Amazon Elastic
Compute Cloud or EC2.
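That 15-cents-a-gig, pay-as-you-go model is simple enough to sketch as arithmetic. A minimal illustration in Python (the storage sizes are hypothetical; only the $0.15 per GB-month launch rate comes from the talk, and a real S3 bill also meters requests and data transfer):

```python
# Back-of-the-envelope sketch of S3's launch-era pay-as-you-go pricing.
# Only the $0.15/GB-month rate comes from the talk; everything else here
# is illustrative. No upfront hardware purchase, just usage.
LAUNCH_RATE_PER_GB_MONTH = 0.15

def monthly_storage_cost(gigabytes: float) -> float:
    """Dollars for one month of storage at the 2006 launch rate."""
    return gigabytes * LAUNCH_RATE_PER_GB_MONTH

for gb in (1, 100, 1_000):
    print(f"{gb:>5} GB -> ${monthly_storage_cost(gb):,.2f}/month")
```

The point of the model is visible in the loop: cost scales linearly with what you actually store, instead of stepping up in units of purchased hardware.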

And then we released
the first database service. And now for the first time,
the foundational elements of almost any application,
storage, compute and database were all available in the cloud. Looking back, I do remember
the perplexed looks on people's faces when I tried to explain AWS
and the cloud. It was barely a concept yet, and it was really hard
to put into context. I can't tell you how many times
I was asked, “But what does this have to do
with selling books?” Seriously, the answer,
of course, was nothing. But the technology we use
to sell books has everything to do
with building and powering AWS.

As the first cloud provider,
we had to prove the value for companies
running their applications on this new infrastructure
platform. There were skeptics particularly
amongst competitors who were really slow to respond. They said at first, “Cloud's not real,
never going to take off.” Then they said,
“Well, it's only for startups.” And then when enterprises
really started to use the cloud, they said, “It's certainly not
for mission critical workloads.” Of course, you proved them wrong. [applause] What do I mean? Well, S3 now stores more than
100 trillion objects and more than 16 million
new instances are spun up on EC2,
every day. AWS now offers over 200
fully featured services for compute, storage, database, machine learning,
analytics, AI and more. We're going to keep innovating
to offer the broadest and the deepest set
of services and capabilities.

And you can expect to hear all week
about our latest innovations. There are now millions of customers
in every industry across every use case and around
the world running their mission critical applications on AWS. And we have an enormous partner
ecosystem of over 100,000 partners. And the AWS cloud now spans
over 81 Availability Zones in 25 geographic regions
around the world with more already announced. Whether you're talking about
technical features, customer, partner, community
or operational experience, you can see we continue
to bring you the most capabilities
you will find anywhere.

And those are some of the reasons
why AWS is the leader in the Gartner Magic Quadrant
for the 11th consecutive year. In the 15 years
since we launched AWS, the cloud has become not
just another tech revolution, but an enabler of a fundamental shift in the way that businesses
actually function. There's no industry
that hasn't been touched, and no business
that can't be radically disrupted.

And every one of us here today
is part of that movement. But despite what feels
like massive adoption, we're actually just getting started. Analysts estimate
that perhaps five to 15% of IT spending has moved to the cloud.
There are so many workloads that are going to move
in the coming years, and such innovation yet to come. It's such an opportunity
for all of us together. We're building ever more
powerful capabilities, pushing the edge of the cloud to new places
with things like 5G and IoT. We're driving towards seamless
integration of data, analytics, and machine learning
that will be transformative. We're tailoring services
and applications to specific customer
use cases and industries. The possibilities there are endless. From the earliest years
of AWS, the depth and the sophistication of
the customers' use cases was amazing. I was just looking back at a few
of the earlier, more well-known ones. It was over 10 years ago that Netflix boldly
went all in on AWS and transformed their business
and the entertainment industry by pioneering streaming movies and creating
original content on AWS. Then in 2012, we were all able
to watch what was happening hundreds
of millions of miles away as NASA JPL engineers landed
the SUV-size Curiosity rover on Mars, using AWS to stream the landing
and support the mission.

Meanwhile, NTT Docomo, part
of Japan's largest telecom, was the first to prove
the power of the cloud for analytics by building a massive multi-petabyte
data warehouse on Redshift that ran queries 10 times faster than
they'd run on-premises, 10 times. Each of these customers
dared to do something fundamentally different
using the cloud. They looked for new ways
to reinvent themselves and their industries. They saw around corners and then even braved
uncharted territories. Sometimes the work
we do together can be hard, but we love challenges. We see them as opportunities, right,
to explore and discover a better way. If we take a step back, what each
of these organizations actually did was to find
an entirely new path. They were pathfinders. Of course, the concept of pathfinding
isn't new to any of us. We're all familiar with people who
push the envelope or challenge norms. We really see pathfinders
from all walks in different parts of society,
we draw inspiration from them, we learn from their ability
to invent, to predict
and to break down barriers.

So whether it's discovering
new treatments to save lives, or finding ways to bring power
to those living in the dark, or breaking the physical limits
of what we thought possible, or working to end segregation
and discrimination, they're doing something unique,
just like so many of you have done
something unique with the cloud. One company that's been on
the cloud journey with us for over a decade really, and has been pathfinding
for an entire industry is Nasdaq.

[applause] Nasdaq showing up, thank you. And I'm so delighted that, to tell us
about the next wave of innovation coming at Nasdaq, and you're going to love this,
we get to welcome to the stage
President and CEO, Adena Friedman. [music playing] Thank you, Adam. And good morning, everyone. I am so excited to be here today. Founded 50 years ago, Nasdaq has been
built on the whole principle of disruption and pathfinding. It began with the electronification
of the markets and then the race-to-zero latency
as volumes and investor participation exploded. And it seems very appropriate
to be standing here today as we embark on
the next era of innovation, which is to bring the cloud to the financial markets
in partnership with AWS. Now, Nasdaq’s footprint today
extends beyond our markets. We're a technology company
and a SaaS provider to the broader
capital markets ecosystem.

As a market operator, we serve
millions of investors globally. We serve the largest
corporate issuers in the world, with highly advanced equities
and derivatives markets. But our technology also powers
130 other markets around the world, and we provide mission critical
trading, clearing, settlement, and surveillance solutions. We provide the technology that powers markets
from Singapore to Brazil, from Japan to Switzerland,
and we have not stopped there. In recent years, we've applied
our financial markets expertise to build new marketplaces, including crypto exchanges
and sports betting platforms. Our footprint as a technology
provider gives us a unique ability to bring the world's markets
to the cloud and to understand the complexity
of completing that journey. Because our systems are responsible
for seamlessly transacting hundreds of billions
of dollars a day, there are certain key attributes that are critical to successfully
executing that activity in the cloud. They are to be hyper-scalable,
ultra resilient, highly-performant
down to the nanosecond, and fair for all participants. Now today within Nasdaq systems,
we perform an extremely low latency. Our end-to-end order to trade
processing occurs in 20 microseconds or less.
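Twenty microseconds is hard to picture, so here is a quick sanity-check of that scale in Python (the roughly 200 millisecond blink duration is a common estimate, not a figure from the talk; only the 20 microsecond order-to-trade time comes from the speech):

```python
# Sanity-check the scale of a 20-microsecond order-to-trade cycle.
# Assumption: a human blink takes ~200 ms (blinks are commonly cited
# at roughly 100-400 ms; 200 ms is a middle-of-the-road estimate).
ORDER_TO_TRADE_S = 20e-6   # 20 microseconds, end to end (from the talk)
BLINK_S = 200e-3           # ~200 milliseconds (assumed)

ratio = BLINK_S / ORDER_TO_TRADE_S
print(f"One blink lasts about {ratio:,.0f} order-to-trade cycles")
```

Under that assumed blink duration, a single blink spans on the order of ten thousand complete order-to-trade cycles.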

Now that is… to put that
in perspective, that's 10,000 times faster
than the blink of an eye. And in any given day, our systems manage three
million messages a second. And on our peak days,
we've had 60 billion messages entering our trading engines
and 120 billion messages exiting them. And guess when those days occurred?
of the pandemic, as the virus was gripping economies
all over the world, and retail investors started
entering the markets en masse. Now that's just our own markets. Today, Nasdaq extends beyond our markets
and our market technology. We provide SaaS solutions to
the broader capital markets ecosystem, serving corporate clients,
investment management clients, and banks with a suite
of AWS-based SaaS solutions. We provide highly advanced
anti-financial crime technology serving 2,000 banks
and credit unions.

And the data-centric
architecture of the cloud really allows us to deploy
very advanced algorithms to catch criminals
across the whole financial system. We are also helping to modernize
investment management firms with a comprehensive suite
of workflow and analytics tools. And we help thousands
of corporate clients manage their investors
with a suite of AWS-based, built-for-purpose solutions. Now, as we've deployed all of
these solutions in the AWS cloud, we've gained enormous insights that really are helping
to guide us on the next step of our journey
in partnership with AWS. Well, our partnership with AWS really has been built
on many milestones across years
of innovating together. In the early days of the cloud
back in 2008, we launched a new product
called Market Replay, which fittingly allows users
to replay a full market day in every single stock
down to the millisecond. So it was a perfect early application
for the cloud, a lot of data
but not a real time application.

And then we received our first bill
from AWS, and it was like $20, so we knew we had
a really good relationship going. But of course, we've grown
that relationship a lot since then. So, in the last five years,
we've really been focusing on bringing our market services
to the cloud, and we've started with what
we call our surrounding systems. They include data distribution
systems, market operation systems, revenue management,
regulatory reporting. Think of it as all of the systems
that are foundational to well-functioning markets.

And now we are ready for
the biggest challenge of all, the matching engine – the technology
that makes the markets tick. It's the technology that changes
an order into a trade. And we only wanted to take
this journey when we were confident that we could do it while maintaining
or even improving the latency and resiliency of our markets. Now through our engineers’
deep engagement with AWS, we know we are ready. And we are going to start this new
phase with our U.S. options markets. Nasdaq operates multiple
options markets in the U.S., and combined,
we are the largest marketplace for multi-listed options
in the United States. We will work with AWS to bring
our first U.S. options
markets to the AWS cloud in 2022. We will then follow– [applause] Can you tell where all the Nasdaq
people are sitting? We will follow with more
of our markets as we work closely with our clients,
regulators, and other constituents.

Now the real power of our partnership
has been our ability to work together to solve extremely
complex technical challenges. Over the past two years,
we've worked with AWS and together we have
pioneered the creation of ultra-low latency edge
compute systems that are designed specifically
for the capital markets. Leveraging the hybrid
cloud capabilities of AWS Outposts, we can now bring the edge directly into our primary data center
in Carteret, New Jersey, creating the first-ever capital
markets private Local Zone. [applause] Together, we will create
a cloud center right inside
the Nasdaq footprint, which allows us to transition
our U.S. markets into the cloud. Now our partnership with AWS
extends beyond our own markets. And we are going to include
a go-to-market approach for our 130 market technology
clients. In fact, we're already
engaged with AWS to deploy cloud-based trading
and clearing solutions for several marketplaces
and clearing houses around the world. And as we consider our largest and most complex
exchange technology clients, those with similar latency and scale
needs as Nasdaq's U.S. markets,
the steps we will take
in our own markets will help standardize
the cloud journey for them as we work together to transform
the backbone of the global economy. Now combining the global
footprint of AWS with Nasdaq's, you can see the power
of these combined systems: our market technology
footprint reaches around the world,
and AWS's reach extends as well. So now, why does this all matter
to the world around us? Well, our purpose at Nasdaq is to champion inclusive
growth and prosperity. And we fundamentally believe
that if you have a great idea, markets can provide you the capital to turn that great idea
into a great business. And if you are saving for your
future, markets can help you grow your wealth
and achieve more with your money. So, it is our responsibility
to investors and to corporate issuers to continue to advance
the markets to make them safer, more accessible, more global,
and more affordable.

Our ambition at Nasdaq is to become
the first financial marketplace and solutions provider that is 100% cloud
enabled, with AWS as our ally, and combined with our amazing
technology talent at Nasdaq. We believe we have the joint team
to make this a reality. Together, we will truly redefine
the future of the capital markets, a future that is safer, stronger,
and more accessible in the cloud. Thank you. [applause] Wow! Okay, Nasdaq's U.S.
markets and matching engine, an unbelievable financial
marketplace and solutions provider
that's going to be 100% cloud enabled. If that isn't pathfinding,
I don't know what is.

Thank you, Adena,
for joining us today. [applause] Now, if we take a closer
look at pathfinders and the way
they approach challenges, patterns start to emerge,
you can see common traits. So, some don't let the rules
define them, they don't accept
the status quo, they instead find new
and better paths forward. And in doing so, often change
the game for everyone else. Others have the ability
to see what others don't, and they use that knowledge to help
people understand the challenges at hand, so that they can
overcome them and make a difference. And still others are intentional
who come behind them the tools that they need to forge
their own paths forward.

We look at pathfinders because
they can help us understand what it takes to succeed in our own efforts
and our own innovation. These are characteristics
that we want to possess, and that
we and our organizations need, as we create our own journeys,
including the cloud journey. So, I recently came across
a story of a pathfinder who possessed that first trait
we talked about. He pushed against the status quo
in pursuit of a better way forward. And in doing so,
he truly changed the game, in this case,
the game of basketball. So, in the mid-1930s,
before basketball was very popular, 30 points was actually a lot
for one team to score. Definitely not the high-flying
game you see today, where NBA teams routinely
average about 110 points. So, these early games
were pretty slow. And to shoot, players would stop,
plant both feet and shoot the ball
using both hands. Not exactly dynamic. People talk about Bill Russell,
Michael Jordan, Steph Curry, as changing
the game of basketball. But long before any of these icons was a player who might have been
even more influential, Hank Luisetti.

So, Hank's story is not as well known, but some basketball fans believe
that Hank might have been the most disruptive basketball
player of all time. Without him, the game
of basketball wouldn't be anything
like it is today. Hank's contribution? He did
nothing less than change the shot. Hank played for
Stanford University from 1935 onward, and back then everyone
was playing the same old way. And no one was really scoring much. So, Hank was dissatisfied
with the state of affairs. And he'd been practicing a new shot,
a very different shot, a one-handed running jump shot,
and Hank was ready to attack.

Picture this: you're a college basketball
player for Long Island University. Your team is on an unheard-of
43-game winning streak. You're about to play an overmatched Stanford team and you have no doubt
you're going to beat them. But Hank comes onto the court firing. He makes one shot,
and another, and another. You have no idea who this guy is or where his running
one-handed jump shot came from that he is using
to beat your team 45 to 31, which was a slaughter at the time. So Hank, single-handedly, literally, scored 15 points,
a huge tally for that era. And as he left the floor, the Long Island team
gave him a standing ovation. But Hank was just getting started. Soon after, he became the first
50-point scorer in college basketball history. And 83 years later,
he still holds Stanford's record for the most points scored. And again, when you get pathfinding
right, it is transformative. Hank proved that his shot, the predecessor to the modern
jump shot, was a winner. And soon every basketball player
wanted to arm themselves with a one-handed running jump shot.

Hank refused
to accept the status quo and truly created a better way. It was a revelation. Hank's accomplishments show how pathfinders can leapfrog
what others think is possible. Just as we've all used the cloud
to be more efficient, to become more agile,
to challenge the status quo, and to change the game. When we introduced EC2,
the concept of compute on-demand, you could buy with a credit card and pay only for what you used,
well, this was very different. And it all started with an instance. So, we called it the instance. And we figured that if folks needed more,
they'd use more instances.

But once you started using EC2, it turned out you had something
very different in mind: you wanted bigger instances with more
CPU, more memory, more storage. And so then there were three. So the first instance
became the m1.small when we introduced the m1.large
and the m1.xlarge. And this was just the beginning. As more customers came on board,
we launched instances for Windows, and then features like Auto Scaling
and Load Balancing. But you needed more. You wanted more powerful instances
to run HPC workloads, databases and analytics,
and enterprise applications. So, we launched entirely new families
of instances, some compute optimized,
others storage optimized and memory optimized.
But you needed more, so we added GPU-based instances
and Mac-based instances. Compute is so foundational that customers had an almost
insatiable appetite for new and specialized instances. And so as we did that, we continued
to imagine new apps and customers were inventing
entirely new businesses. This is why we now have over
475 different instance types, more than
any other cloud provider.

And we're not even close
to being done. We keep running just as hard and fast
as we did when we launched EC2 in 2006. But even as we innovated here, we realized that if we truly wanted
to change the game with you in terms of price performance
for all of your workloads, we needed to rethink instances
entirely; we needed to go deep,
all the way down to the silicon. And so we set out to do just that, and started to design
our own Arm-based chips. And then we launched
our first processor, Graviton. You loved it and immediately started
putting it to use.

But guess what? You needed more. You wanted the cost savings and
goodness of Graviton for more workloads, all workloads. So that's when we released our second
Arm chip, Graviton2. This was a huge leap forward and provided
the best price performance in EC2, 40% better price performance
than comparable x86-based instances. Today, thousands of customers
are using Graviton2-based instances, and are reaping the benefits
and price performance for a very wide range of workloads,
including big data analytics, game servers,
and high performance computing. You probably know what I'm about
to say, yeah, you still needed more. And so I'm really pleased
to announce Graviton3, the next generation of AWS
designed Arm chips. [applause] Graviton3 chips are another big
leap forward, 25% faster on average for general
compute workloads than Graviton2, and they perform even better
for certain specialized workloads.
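One small note on the arithmetic of "25% faster": a 1.25x speedup translates to about 20% less wall-clock time for a fixed job, not 25% less. A hedged sketch in Python (the 60-second baseline runtime is purely hypothetical; only the 25% speedup figure comes from the announcement):

```python
# Illustrative arithmetic for the "25% faster on average" claim.
# "25% faster" = 1.25x throughput, which means a fixed job finishes
# in 1/1.25 = 0.8x the time, i.e. 20% less wall-clock time.
# The 60-second baseline is a made-up placeholder, not a benchmark.
GRAVITON3_SPEEDUP = 1.25        # 25% faster than Graviton2 (from the talk)
baseline_seconds = 60.0         # hypothetical Graviton2 runtime

graviton3_seconds = baseline_seconds / GRAVITON3_SPEEDUP
time_saved_pct = (1 - graviton3_seconds / baseline_seconds) * 100
print(f"Graviton3: {graviton3_seconds:.0f}s vs {baseline_seconds:.0f}s "
      f"baseline ({time_saved_pct:.0f}% less time)")
```

The distinction matters when comparing vendor speedup claims against observed runtimes: speedup ratios and time savings are reciprocally related, not interchangeable.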

They provide two times faster
floating point performance for scientific workloads, are two times faster
for cryptographic workloads, and three times faster
for ML workloads. And to help reduce the carbon
footprint, Graviton3 processors use up
to 60% less energy for the same performance
as comparable instances. We're incredibly excited for you
to get going with Graviton3. So today, we're also announcing
the C7g instance, the first of a new generation of EC2
instances powered by Graviton3. [applause] And you can expect to have
fantastic price performance for compute intensive workloads. And we're going to continue to bring
more improvements to Graviton3
to more of your workloads as we deliver more
and more instances that are going to be
based on Graviton3. So how's that for more, great? [applause] Yeah, but you told us
you still need more. So, training machine learning models
and running inferences are highly compute intensive,
and many of you have asked us to find a way to lower the cost
of those machine learning workloads.

So, Inferentia, our first machine
learning chip, is optimized
for high performance inference. In 2019, we launched the first
Inferentia-based instance called the Inf1. Now, Inf1 delivers up to 70%
lower cost per inference than comparable GPU-based
EC2 instances. Then last year, we announced
our second ML chip Trainium, purpose-built
for training deep learning models. And we've been working hard
over the past year to get Trainium into your hands. So today, I'm excited to announce
the new Trn1 instance powered by Trainium, which we expect to deliver the best price performance
for training deep learning models in the cloud
and the fastest on EC2. [applause] You can use Trn1 to train machine learning models for apps
like image recognition, natural language processing,
fraud detection, and forecasting.

Trn1 is the first EC2 instance
with up to 800 gigabits per second networking bandwidth. So it's absolutely great
for large-scale, multi-node distributed
training use cases. Sometimes with machine
learning workloads, you need more processing
than any single instance can handle. And we can network these together
in what we call ultra-clusters, consisting of tens of thousands
of training accelerators interconnected
with petabit-scale networking. These training ultra-clusters
act as a powerful machine learning supercomputer for rapidly
training the most complex deep learning models
with trillions of parameters. Now, with both Trainium
and Inferentia-powered instances, customers can have the best price
performance for machine learning, from scaling training workloads to accelerating deep
learning workloads in production with high-performance inference, making the full power of machine
learning available for all customers, that's been a goal of ours
for a long time. And lowering the cost of training
and of inference are major steps in this journey. But we know we'll never be done
innovating within compute, and we keep working intently
with customers and partners in this pursuit.

That includes SAP, which has thousands
of customers already running on AWS. And now we're working with SAP
on a new initiative to power the SAP HANA Cloud
with AWS Graviton processors. This should deliver significant
performance and efficiency improvements
for joint customers, and we're excited about it. So, you can see, we continue
to seek ways to make the cloud even more powerful and more
cost-effective for every workload. But do we really mean every workload? What about mainframes? Now a lot of companies, of course, have been running applications
on mainframes for many decades. And customers in every industry,
obviously, still rely on them.

But mainframes are expensive. They're complicated. And there are fewer and fewer people who are learning to program
COBOL these days. So maintaining a mainframe is kind of
like trying to shoot a basketball with your two feet
planted on the ground. You know you can do it, but you also know
there's got to be a better way. And this is why so many
of our customers are trying to get off
their mainframes as fast as they possibly
can to gain the agility and the elasticity of the cloud.

We've seen customers reduce
their costs by up to 70%, or even more, after migrating. Now, there are a couple of different
paths that customers take. Some start with a lift-and-shift
approach and bring the application
pretty much as is. Others refactor and break
the application down into microservices
in the cloud. Neither road is as easy
as customers would like. And in fact, whichever way you go,
it can take months, even years. You have to evaluate the complexity
of the application source code, understand the dependencies
on other systems, convert or recompile the code, and then you still have to test it
to make sure it all works. And all this is before
you've actually moved anything. It can be a messy business
and involves a lot of moving pieces. And it isn't something that people
really want to do on their own. And while we have lots of
partners who can help, even then it's a lot of time to be
standing with two feet on the ground when your competitors
are shooting jump shots.

And so today, I'm really pleased
to announce AWS Mainframe Modernization, which is a new service to make it
faster to migrate, modernize and run mainframe
applications on AWS. [applause] With AWS Mainframe Modernization, we've seen the time that it takes
to move mainframe workloads to the cloud cut
by as much as two-thirds using a complete set of development,
test, and deployment tools and also a mainframe
compatible runtime environment. Mainframe Modernization
helps you assess and analyze your mainframe
application for readiness, choose the path that you want,
re-platform or refactor, and then come up with a plan. If you want to re-platform
and move your app over to AWS with minimal code changes, you can use Mainframe
Modernization's recompilers to convert your code,
and its testing services to make sure you don't lose
any functionality in the translation.

And then you're ready to migrate
to the service's new mainframe
runtime environment. Now, if you want to refactor
or decompose the application, so you can just run
the components in EC2, in containers, in Lambda, then our
Mainframe Modernization service can actually automatically convert
the COBOL code over to Java for you. Over the last 15 years,
as the cloud has become mainstream, your options for where you
run applications have proliferated. The number of workloads in
the cloud continues to explode. And we continue to believe that
the vast majority of applications and workloads will run in the cloud
in the fullness of time.

At the same time, many workloads
today still run in your data centers, because you've deployed so many
resources there for so many years. And the term hybrid emerged,
of course, to describe scenarios where customers are running them
in both places. So, as AWS pioneered cloud
infrastructure services, we obviously spent a lot of time talking about the benefits
of the cloud. But we've also been building bridges
back to your data centers for years. We've had virtual private cloud
or VPC, Direct Connect, Storage Gateway, and we've also worked
with partners like VMware, Red Hat, and most recently NetApp
to make it possible for you to use the same
familiar software and tools that you know and love in your data
centers seamlessly on AWS.

And we'll continue to build
those bridges between your applications and AWS,
whether they're in your data centers or elsewhere. Now over the years, we've also talked
with you about the pieces of IT that you might want to run in your
data centers for a bit longer. You told us that for those
applications, what you really needed was to bring
a bit of AWS on-premises, not something like AWS,
or some marketing claim like you see from some other cloud providers,
but actual AWS, so you could use the same APIs,
control plane, hardware, and tools that you already use on AWS. So that's why we built Outposts,
which launched two years ago. Outposts isn't like AWS, it is AWS. We deliver it,
we do all the maintenance. And it comes with the same APIs,
same control plane, same tools, and the same hardware. We've seen a lot of momentum
for Outposts since we launched it.

And it's enabled
a lot of organizations to move more quickly to the cloud because they can bridge back
very well to their own data centers. And today, customers across
a number of industries are using Outposts, including DISH Wireless,
Verizon, Morningstar, and Riot Games. Now, as the world of applications
continues to change, so does the definition of hybrid. And you've told us you need AWS in more places
than just your data center. Many of you are running apps
in places beyond AWS regions and your data centers. These apps rely heavily on the cloud
for processing, analytics,
storage, and machine learning, and in fact, many are possible
because of the cloud. But in some cases, they need compute
and storage done locally. The edge of the cloud is pushing
outwards to facilities like factories and hospitals,
remote locations like oil rigs and agricultural fields,
and 5G networks. We've been innovating to bring AWS
to those places that your applications
are increasingly taking you. I think this is going to unleash
a wave of innovation.

But you've told us that
for this to work, the whole thing has to be seamless. Again, wherever AWS is,
it has to be AWS not like AWS. So, where else will you find AWS? Well, we have smaller Outpost
form factors, which we're really excited
to announce are generally available today. They take up less space,
so you can bring AWS to offices, to factories, and to hospitals. But you also do work in remote places
like oil rigs and agricultural fields
that may not even have connectivity. Some are calling this
the rugged edge. And for those, you can use our Snow
devices to collect, store and analyze data
on that rugged edge. And finally, when you're building 5G
mobile applications that need to sit at the 5G mobile edge,
we have Wavelength. Wavelength puts AWS compute
and storage services within 5G networks, providing mobile edge
computing infrastructure to enable ultra-low
latency applications. Now last year, we announced
the general availability of Wavelength with Verizon
in the U.S., KDDI in Japan, and SKT in South Korea.

And this year,
we've launched Wavelength in new places
with Vodafone in Europe, and now we've announced
a partnership with Bell Canada to deliver Wavelength
in Canada in 2022. With these partners we'll have
17 Wavelength zones in the U.S., Europe, and Asia. What about networking? As we take this path to the edge,
we need a more robust way to connect all of these devices.

Today, we're more dependent
than ever on high-performance, reliable connectivity. We are literally
connecting everything. Robots in manufacturing lines, tablets in the hands of workers
in factories, stores,
connected air conditioners, escalators, forklifts,
delivery vehicles that need a reliable data
connection to manage logistics. And all of these new use cases
require consistent, reliable connectivity. Now today, most enterprises use
local wired Ethernet or Wi-Fi networks
for their connectivity. These systems weren't designed to support connecting
all of these things.

Wired networks perform well, but they're expensive
to deploy and upgrade. And they don't extend very well
to mobile devices. Enterprise Wi-Fi is pretty easy
and cheap to use, but it has range
and coverage issues. And so this is one of the reasons
why the promise of 5G is so exciting. With 5G you can easily connect
tens of thousands of devices. The handoffs between access points
are seamless, and you can maintain coverage
over large areas, and you get high bandwidth
and low latency. But designing, building,
and deploying a mobile network takes a lot of time and is a complicated process
requiring telecom expertise.

Plus, you have to qualify and work
with multiple vendors. And each has its own pricing models, most of which include a charge
for each device. And that adds up when you're talking
about tens of thousands of devices or more, it is not easy. That's why I'm really happy
to launch AWS Private 5G, which is a new service
to make it easy to deploy and manage your own
private mobile network. [applause] With AWS Private 5G, you can set up
and scale a private mobile network in days instead of months. You get all the goodness
of mobile technology without the pain of long planning
cycles, complex integrations, and high upfront costs. It's shockingly easy. You tell us where you want to build
your network, and specify the network capacity. We ship you all the required
hardware, the software, and the SIM cards.

Once they're powered on, the Private 5G network
simply auto-configures and sets up a mobile network that can span anything
from your corporate office to a large campus,
to a factory floor, or a warehouse. You just pop the SIM cards
into your device, and voila, everything is connected. Ordering additional capacity,
provisioning additional devices, or managing access permissions
can be done easily just using the AWS Console. And best of all, you can provision
as many connected devices and users as you want without
any per-device charges. And with Private 5G,
it operates in shared spectrum, you don't even need
a spectrum license. There's currently nothing
like AWS Private 5G out there, a one-stop-shop, manage private cellular network
that lets you start small and scale up as you need with pay
as you go pricing, and we're really happy
to bring it to you. [applause] Now creating this new
connected world, where the boundaries of the cloud are going to entirely new places
like the 5G networks, there's a lot of path
finding to be done.

And one company leading the way, a really inspiring pathfinder
is DISH. So DISH started pushing against
the status quo in 1980 by selling satellite dishes and
then launching their own satellites. Now they're reimagining connectivity
through new platforms, all powered by the cloud. To tell you more, I'd like
to introduce Marc Rouanne, the Executive Vice President
and Chief Network Officer of DISH. Come on now, Marc. [applause] Thank you, Adam. Two years ago, I joined DISH to design
and build a new 5G wireless network
from the ground up, nationwide, something that had never been
done before: entirely in the cloud. This is a world first, and having a true open
cloud-native network will be a game-changer
here in the U.S.

Now, when most people
hear about DISH, they probably
don't think of wireless, but DISH has a long history
of being a pathfinder that is not afraid
to reinvent itself, from selling satellite dishes
in the back of a truck, to launching its own satellites, and to disrupting itself
with the launch of Sling TV. Now, we're forging another new path. It's a first in the telecom industry. And we're doing it with AWS. Existing 4G and 5G networks have been
built with voice and smartphones in mind. But at DISH, we're building to expand
beyond smartphones, to empower machines
as well as humans. This is not 4G Plus. Legacy carriers have been upgrading
the same hardware, and the same infrastructure
since the days of 2G. But again, we're not just building
another G, we see an opportunity to do more. So we're building the first
architecture that is truly optimized for the cloud. Now, it promises tremendous advances,
not just for human communications, but also for machine to machine. And of course, for humans to control
those machines, like cars, robots, drones, and the results
will be a game-changer for businesses
across the industry and enterprises.

Now, to help explain the impact
DISH's cloud-based 5G network will have on the industry, let me use an analogy. Think back to the arrival of the App
Store on smartphones. Before it, capabilities
were built directly into the phone. You would pick that brick up out
of the box, and the basic features
it came with were all you would get. Then suddenly, with the App Store, any of you could develop
your own unique, personal apps to meet your needs
or your customers' needs. And today as a consumer, I can design
the experience that I want. That is exactly
what we are going to do with 5G for the enterprise, and we're going to do
that together with AWS. Each enterprise customer
of DISH Wireless will be able to define
the experience they need. We call it the DISH 5G network
of networks, where each sub-network is defined by the enterprise
for their customers.

Now, it's not one-size-fits-all
like it used to be. It will be customizable by speed,
latency, and data requirements, and many other features. In fact, some say we are
the AWS of wireless. AWS transformed how companies, and many
of you, handle IT forever. And DISH, with the power of AWS, will now transform
wireless connectivity, providing a customer experience
the way AWS does for cloud computing. Now, what can that look like? There are three things that our
collaboration with AWS will enable. First, data-centric, because this 5G is in the cloud, we'll be able to unlock,
but also protect information in a way that will change any market,
any enterprise. Today networks are like
that brick phone. But on this 5G, companies
will be able to utilize aggregated and anonymized data
to identify patterns and improve customer experience. Imagine you run a company
with a fleet of vehicles, some shops, warehouses, then you will be able to augment
the data from your vehicles, from your shops, from your warehouse, and your customers
with the network data to make your service
much more competitive.

Second, automation at scale. AWS is helping us drive automation
at scale, and it matters. It matters because we are going
to have hundreds of networks and we'll need
to manage the complexity. And the truth is today
nobody can do this. Actually,
existing operators manage one single
monolithic network. We'll manage a network of networks
because of our ability to automate by leveraging AWS services,
like data lakes, analytics, AI, and machine learning. Third, connecting the edge
to the cloud in a simplified manner. Now, many enterprises, many of you
have some sort of project to bring services capability
to the edge. But the truth is,
it's very complex today. With our cloud architecture, we want to drastically
simplify the edge and make running software at the edge
as simple as making a phone call.

Now, one of the benefits
of this cloud-native software is innovation
at the speed of the cloud. And it is so much faster
in the last decades. To give you an example of what
we already see today, in our 5G network in the cloud, we can create a nationwide replica
of the network in days and scale it up and down at will, which would have taken years
on the classical 4G or 5G network. And we can literally move
the software around, north and south, in hours, which again would have taken years
in existing networks because it's tied
to tons of hardware.

Looking ahead, DISH is going to be
the enabler of technology that people have not
even imagined yet. And there is so much potential. And I'm excited that AWS is right
alongside us in this journey. Thank you. [applause] Thank you so much, Marc. We're incredibly honored
to have the chance to work with DISH, and so many of our other customers
to innovate and push the boundaries of what's possible
with cloud infrastructure.

Hank Luisetti was a pathfinder who was constantly in search
of a better way. But let's talk about a different
kind of pathfinder, the kind who has a special ability
to see what others don't and to help understand
new ways to solve challenges. I'd like to tell you about another
pathfinder who had that ability. She took the information
on hand, analyzed it, and used it to make
a big difference. This Pathfinder has a familiar name
to many of us, Florence Nightingale. She was best known as
"the Lady with the Lamp," a title she earned during
the Crimean War in the 1850s as she checked on injured soldiers
throughout the night, carrying a small candlelit lantern
to light the way.

She's known as one of
the most compassionate and famous nurses in history. But did you know
she was also a data geek? Florence rebelled against
the expected role of women at the time and instead
earned her nursing degree. And then when war broke out, she enlisted to serve
in a military hospital. When she arrived, she was shocked: 2,000 sick and injured men
lay in the corridors, their mattresses only
18 inches apart, and many had no bedding. Cholera, gangrene, lice,
and fleas were rampant. Soldiers were washing
once every 80 days, sharing the same sponge. It was a cesspool. More soldiers were dying
in the hospitals than on the battlefield.

Conditions were so horrible
that Nightingale dubbed it, "The Kingdom of Hell." But no one knew exactly what was
causing such a high death rate. The importance of hygiene
wasn't well known at that time, as germs weren't quite yet
well understood. But Florence believed that there
was a link between hygiene and disease. She just intuitively knew this. So she and her nurses
sprang into action. They insisted on clean linens
and soap for the men. When the hospital doctors
refused her demands, she wrote to England for donations, and actually broke into supply
closets late at night to get what she needed.

But this wasn't enough,
she needed the army and the medical establishment to act. She could see the pattern, and she
had to find a way to prove it. So what did she do? She started collecting data. You see, Florence had a passion
for statistics. So she started taking detailed notes
on what she saw. The number of injuries,
the number of deaths, the connection
between hygiene and recovery. She took all of that data she'd
gathered and charted it all visually, into what is now called
the "Nightingale rose diagram," similar to the pie charts
we use today. She used blue wedges to represent
deaths from preventable diseases, red wedges to show deaths
from wounds, and black wedges to show deaths
from all other causes.
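For the data-minded, the geometry behind a rose (polar-area) chart is simple: each period gets an equal angular slice, and the wedge's area, not its radius, encodes the count, so the radius grows with the square root of the count. A minimal sketch in Python with made-up numbers (these are illustrative figures, not Nightingale's actual data):

```python
import math

def rose_wedges(counts):
    """Compute wedge geometry for a polar-area ("rose") chart.

    Each wedge spans an equal angle; its AREA (not its radius) is
    proportional to the count, so radius = sqrt(2 * count / angle).
    """
    angle = 2 * math.pi / len(counts)   # equal angular width per period
    wedges = []
    for c in counts:
        # area of a circular sector is 0.5 * r^2 * angle; solve for r
        r = math.sqrt(2 * c / angle)
        wedges.append({"radius": r, "angle": angle, "area": c})
    return wedges

# Hypothetical monthly deaths from preventable disease (NOT real data)
wedges = rose_wedges([120, 300, 480, 250])

# The eye compares areas, and areas reproduce the raw counts exactly
for w in wedges:
    assert abs(0.5 * w["radius"] ** 2 * w["angle"] - w["area"]) < 1e-9
```

Encoding counts as areas rather than bar lengths is what made the disparity between causes of death so immediately visible.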

And she made some shocking
discoveries. Ten times as many soldiers in
the hospital were dying from illness, as were dying from their wounds. We know today that most of them
died from contagious diseases. With 3,000 deaths in a single month,
if no action was taken, Florence estimated that the entire
existing British forces could be wiped out within 10 months. However, based on her data,
and on her insistent appeals, the army approved new hygiene
standards, sent more supplies, and kept the hospital air
and sewage clean. And just six months after
implementing her changes, her data showed that the mortality
rate had dropped by over 90%. That's thousands of lives
saved from needless death. It's staggering. After the war, Florence wrote a book
using the analysis of the data that she collected during the war
to propose hospital reforms. The book triggered a total
restructuring of the army's health efforts, including the establishment
of the Royal Commission for the health of the army. For her achievements,
Florence became the first woman admitted to the Royal
Statistical Society.

The Queen awarded her
a large financial prize. Florence used that prize money
to establish, at St Thomas' Hospital, the Nightingale
Training School for Nurses, where she used
what she had learned to train the next generation of nurses. Florence was a pathfinder
who truly saw what others didn't. Well, over a century before
the term evidence-based medicine was coined, Nightingale was using data
and statistics to save lives. In the 1800s, Florence could
gather data by hand and take notes, today we have unfathomable
amounts of data and it's only
continuing to explode. Recent studies show
that the amount of data created in the three years
that's going to end in 2024, will be more than all the data
created in the past 30.

And it's not just growing in volume,
it's also getting more diverse. Organizations are storing
and analyzing data from all kinds of sources, including data
from industrial equipment, log data from websites,
financial risk simulations, and genomic research. If we could become like Florence and find the patterns
and the insights within the data, the data will revolutionize
how we think about solving problems and will transform every field,
every business, and all of our lives. It will lead to more breakthroughs
in areas like healthcare, manufacturing,
and sustainability. It will drive waves of innovation
and entirely new businesses. But working with data is tricky. Your data is not static, and it's not used for
just a single purpose. And every organization
has different data sources, different analytics needs, and
different governance requirements.

So a modern data strategy understands
that data is dynamic, that data goes on a journey, and that you need the best tools
for each stop along that journey. Having a few capabilities
or being able to handle a few stops on the journey
just doesn't do the job. While every data journey
is unique, there are some critical components
that are shared by all. For example, you need a database
to store and process the data
that powers the application. You need to unite all the data stored
in different places in data lakes. Of course, you can't have
a data journey that doesn't include
the analytics tools that you need to develop insights
and understand the trends in all of this data.

More and more machine learning
and artificial intelligence are part of the data journey. So you need the tools to build
models, to make predictions, and to add intelligence
to your applications. Finally, for any of this to work,
you need to make sure that you have the right security
and governance in place, so you can put data
in the hands of people at all levels of your organization.

You need to have complete control
over where your data sits, who has access to it, and what can be done
with it at every step. AWS knows how important
this is to every customer. Not just those in
regulated industries, or those complying with
privacy regulations like GDPR, but every single customer. And each of the steps
in this journey is critical. You need the right capabilities
in each area or you'll end up making painful
compromises in performance and cost, the insights you can glean,
or the speed at which you can move.

And that's why AWS continues
to be focused on building out all of the critical capabilities
you need at each step. And that's why we continue to be
the best place for your entire data journey. Since virtually all applications
have a database, let's start our journey there. So for a long time, the relational
database was the only game in town. And that worked pretty well
when applications stored gigabytes, and occasionally terabytes, of data. But applications have changed
a lot over the years. And today's applications
routinely deal with terabytes, often petabytes,
and sometimes even exabytes of data. And today's applications
are serving also exponentially more
customers in some cases, hundreds of millions of people
around the world. These applications can peak
at millions of requests per second with very low latency. So there's just no way that
one database works for all of these applications.

Which is why people are now tailoring
their database choices to the needs of their application. Sure, you can still go down
the one-database road, and the old guard vendors
would love it. But this puts a drag on performance
on scalability, and on cost. One size fits all doesn't really. That's why we've invested so much to bring you the broadest selection
of databases that you'll find.

On the relational side,
we have Amazon RDS, which supports five different
relational engines. Amazon Aurora, which is built
from the ground up to give you the performance
and availability of commercial-grade relational databases at 1/10
of the cost of other providers. Aurora continues to be
the fastest-growing service in AWS history. We also have eight purpose-built
non-relational databases, so you can choose whichever fits
the needs of your application. If your application is global,
and needs to scale to support tens of millions of reads and writes
per second, with millisecond latency, you can use our key-value
database Amazon DynamoDB. If you need a graph database, for applications with connected
datasets, use Amazon Neptune. If you need a blazing-fast
Redis-compatible in-memory database that can process more than
13 trillion requests per day, you use our newest purpose-built
database, Amazon MemoryDB for Redis.
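To make the key-value idea concrete, here is a toy, in-memory illustration of the access pattern such databases are built around (this is not DynamoDB's implementation, just the shape of workload it optimizes): every read and write addresses a single item by its full key, so a hash index answers it in constant time regardless of how many items the table holds.

```python
class ToyKeyValueStore:
    """Toy model of the key-value access pattern: one item per key."""

    def __init__(self):
        self._items = {}                # hash index: key -> item

    def put_item(self, key, item):
        self._items[key] = item         # O(1) regardless of item count

    def get_item(self, key):
        return self._items.get(key)     # O(1) point lookup by full key

store = ToyKeyValueStore()
store.put_item(("user#42", "profile"), {"name": "Ada", "plan": "pro"})
assert store.get_item(("user#42", "profile"))["plan"] == "pro"
```

Because every operation is a point lookup, the real service can spread keys across many servers by hashing them, which is what lets it sustain tens of millions of requests per second.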

Customers need all of these
powerful tools, not just one or even two, to deliver the experience
their end users demand. Some people will try and tell you
that you don't need all these databases. They say, "Oh no, I have this
one database that can take care of it
all for you." But if you want the right tool
for the right job that gives you differentiated
performance, productivity, and the customer experience
you need, you want the right
purpose-built database for that job. So that's the database stop. But not all data lands here first. Much of the data
finds its way to a data lake. Data lakes bring together
structured data from databases and data warehouses
with unstructured data from an unlimited number
of locations and make it all available for people
to run analytics or machine learning. Today customers have hundreds
of thousands of data lakes on AWS taking advantage of Amazon S3's
unmatched security and reliability, deep set of features,
and cost-effective performance. To make it easy to get your data lake
up and running in just days, we created AWS Lake Formation.

Lake Formation helps you collect
and catalog data from databases
and object storage, move the data
into your Amazon S3 data lake, clean and classify your data
using machine learning algorithms and then secure access
to the sensitive data. Lake Formation gives you a single
place to enforce access controls, and operate it the table
and column level for that so that all the users and services can access your data
in the right way. This is incredibly important, especially when you want to give
multiple teams applications and tools access to your data. Take for example sales data. Not everyone needs access to all
information nor should they have it. You might want account
managers in France to see only French accounts, or the marketing team to see only
accounts with marketing activity. You probably want your finance team
to have access to everything.

To give customized access
to slices of data, traditionally you've had to create
and manage multiple copies of the data, keep all the copies in sync
and manage complex data pipelines. It's a lot of heavy lifting. So you asked us for a more targeted and direct way to govern access
to your data lakes. And to eliminate this heavy lifting and provide you with
that precision over access, I'm really happy to announce
the general availability of row and cell-level security
for Lake Formation.

Now you can enforce access controls
for individual rows and cells. Lake Formation automatically
filters data and reveals only the data permitted
by your policy to authorized users. So if you take a sales data example,
instead of creating multiple tables,
each containing data for particular sales teams and countries, you just define a set of policies that provide access to specific rows
for specific users without having to duplicate data
or build data pipelines. It puts the right data in the hands
of the right people and only the right people. Now another challenge
with data lakes is that the data
they house isn't static. Often multiple systems
are feeding in data, and more and more data is being added
and removed rapidly, which makes it hard to manage conflicts
and errors and to make sure that queries
return consistent, up-to-date results.

This is another problem that customers
manage by building data pipelines. But many customers still
run updates to their data lakes, maybe only once a day,
or at best every few hours, but everyone wants to get
to real-time. To make this easier today we're
announcing the general availability of transactions for governed
tables in Lake Formation. Now you can create a new type
of table, a governed table
that supports ACID transactions.
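As a toy illustration of what those ACID guarantees mean in practice (this models the observable behavior, not Lake Formation's implementation): readers keep seeing the last committed state while a writer stages changes, and a commit publishes all of the writer's changes atomically.

```python
class ToyGovernedTable:
    """Toy snapshot-versioned table: commits publish changes atomically."""

    def __init__(self, rows=None):
        self._committed = list(rows or [])

    def begin(self):
        # a transaction works on a private copy (its snapshot)
        return list(self._committed)

    def commit(self, txn_rows):
        # every change in the transaction becomes visible at once
        self._committed = list(txn_rows)

    def read(self):
        return list(self._committed)

table = ToyGovernedTable([{"id": 1, "qty": 10}])
txn = table.begin()
txn.append({"id": 2, "qty": 5})                  # staged, not yet visible
assert table.read() == [{"id": 1, "qty": 10}]    # readers see old snapshot
table.commit(txn)                                # atomic publish
assert table.read() == [{"id": 1, "qty": 10}, {"id": 2, "qty": 5}]
```

In the real service this versioning is what removes the need for hand-rolled error handling and batched update windows.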

So as data is added
or changed in S3. Lake Formation automatically
manages conflicts and errors to give all users
a consistent view of the data. This eliminates the need for custom
error handling code or for batching updates. So now multiple sources and data
pipelines can keep updating data in real-time while users
are querying data instead of having to wait
for data to be updated in batches. Both of these new capabilities
make it a lot easier to set up, to govern,
and to manage your data lake, and we're really happy
to make them available today. So now you're ready to analyze
your data. And to help you do this, AWS has the broadest set
of analytics services with the most capabilities
within each service so you can choose the right tool
for the right job.

If you want to process vast amounts
of unstructured data using popular open-source distributed
frameworks like Spark, Hive, or Presto, you use EMR, which supports more
of these frameworks than any other top cloud provider. If you want to quickly search
and analyze large amounts of log data to monitor
the health of your production systems and troubleshoot problems,
you use Amazon OpenSearch Service, which is the successor
to our Elasticsearch service. It supports OpenSearch 1.1, a community-driven open-source fork
of Elasticsearch and Kibana, and is available under
the Apache License. OpenSearch has been
downloaded millions of times since we released it just this July. And if you want to do real-time
processing of streaming data, you use Kinesis or Managed
Streaming for Apache Kafka (MSK). And if you have structured data, where you need super-fast
query results, Redshift is the fastest cloud data
warehouse with up to three times
better price-performance than any other cloud data warehouse. Now you may again be thinking, "Do I
really need all of these services?" Yes.

Each of these analytic services
is optimized to give you the best performance
and cost for each analytics job. The difference between using
the right one, and another one can be profound. And customers of all sizes
across many industries are using each of these services to handle large amounts
of data and complex analytics. One of the reasons the cloud
is so attractive for analytics is that AWS services
like these automate many of the manual
and time-consuming tasks (we've always called it "the muck")
involved in running analytics, all with high performance,
so you can get to insights quickly. But there are some people who want
these benefits without having to touch
any infrastructure at all. They don't want to tune
and optimize clusters, and they don't need access to all
the knobs and dials of the servers. Others don't want to have
to deal with forecasting the infrastructure capacity
that their applications need. We already eliminated the need
for infrastructure and capacity management
with Athena and Glue. We asked ourselves whether we could
take infrastructure completely out of the equation
for our other analytics services. So we went to work.

And I'm really excited
to announce new serverless and on-demand options for not one, not two, not three,
but four analytics services. And that's Redshift, EMR, MSK,
and Kinesis Data Streams. So with these options, you don't
have to configure, scale, or manage clusters or servers, and you don't have to worry
about provisioning capacity. Fire them up and the services
automatically scale in seconds when busy,
and scale back down when not. You pay only when the service
is in use. These new serverless
and on-demand options are great for workloads where you
don't want to have to plan capacity. Maybe it's a new application or one
that's growing unpredictably.

For example, a website
selling tickets to a major event. This is big. So we've got four new
analytics services that not only deliver incredible
performance and capabilities but now give you the option
to take your analytics serverless. Now, analytics helps us
understand what's going on today. Machine learning allows us to predict
what will happen in the future and to build intelligence
into systems and applications. One customer who used ML to drive
business outcomes and customer satisfaction
at very challenging times is United Airlines. Yeah.
Thank you United. I'm so happy to bring United
onto the stage to tell us
more about their efforts. I'd like to introduce Linda Jojo,
of Technology, and the Chief Digital Officer
for United Airlines, Linda. Thank you, Adam,
and good morning everybody. You know what, United our
shared purpose is connecting people
and uniting the world. Back in 2019,
we operated 1.7 million flights,
to over 300 destinations worldwide.

Now, even with this Thanksgiving
weekend's travel at nearly 2019 levels, it's most certainly
not 2019 anymore. There's no doubt that
the last 22 months have had a tremendous impact
on the airline and travel industry. But I'm here today, to tell you
a little bit about how we at United have managed that journey. About our philosophy,
that inclusion propels innovation. And how innovation was
significantly accelerated due to our work with AWS. Specifically, AWS enabled us
to increase the speed of innovation
during a crisis. It helped us variablize
our cost structure and put us on a path
to replace aging legacy platforms. In 2019, we knew that
our legacy platforms were becoming costly to operate
and we were really concerned that they weren't going to be able
to scale as the airline grew. So first, we debated a single cloud
or multi-cloud strategy.

During all of our discussions, we kept coming back
to the importance of resilience, because any glitch
in the smallest of systems has the potential
to cause a flight delay. And within minutes that could
become a Twitter storm, or worse, hit the news cycle. So that meant a single cloud
provider. And the team quickly concluded that
the quality and breadth of products and the continued pace
of innovation meant that AWS
was the only real choice for us. So in February of 2020, we had
a top-to-top meeting in Chicago. We kicked off our relationship
and we learned about best practices. The COVID-19 pandemic hit
a few weeks later. To say that it changed our focus
would be an understatement. Let me give you a perspective.

There was one day in April of 2020, where we had fewer passengers
than pilots. We stopped all projects and it
also became painfully obvious, not only could our current platforms
not scale up, they couldn't scale down either. Now, like you, the United
digital technology team found themselves
suddenly working from home. But instead of a two-pizza team,
we had a one-screen team, made up of no more people than
the number of video squares you can fit on your computer monitor.

So yes, Adam, we took this idea
directly from AWS. Now, let me tell you about one
of the first big ideas that popped out
of one of these teams. It's a product called
the Travel Ready Center. It runs on our digital channels. As travel restrictions increased, it was really confusing
for our customers, and our team knew we could do better
when international flying restarted. The team built a machine learning
model to address the travel chaos. They used Amazon S3 to pull in COVID
test forms, categorized them
with Amazon DynamoDB, and then ran the forms through Amazon
Textract and Amazon SageMaker. Now, these models are continuing
to improve but we've automatically validated
two-thirds of all documents and over 75 percent of all COVID test
forms for over 4 million customers. Yeah. I'm telling you what,
customers love it.

They get their boarding passes
before they get to the airport, they zip right through
the airport lobby. They don't stop to get
their documents checked. Our gate agents, they're ecstatic. Travel requirements
continue to change. The forms vary by country. It's complicated, it's time-consuming
and incredibly complex. So we solved this quickly using
Amazon SageMaker. And it was significantly better than
our own internally developed models. Now, as far as I know, United is
the only airline doing these checks completely within our mobile app. Now, while the Travel Ready Center
was underway, other teams saw that refactoring
and moving workloads to AWS would save cash since our airline was
running at such a scaled-down rate.

We used that time to take
a little bit more risk. As a result, many of the technology
used by our employees, and much of what our core
customer-facing technology run on over 100 applications
and all, they now run on AWS, up from less than a handful
when the pandemic started. This was not a lift and shift. These applications have a more
intuitive user experience. They have better operational
instrumentation and better security. Our frontline teams love
these new tools. These tools make it easier
to assist our customers.

They scale up and they scale down
with passenger demand. And it's definitely one
of the reasons that our customer Net Promoter Score
is up over 30 points since the start of the pandemic. That means our chief customer officer
is happy and so is our CFO. I mean how often does that happen? Now, much of what has propelled
our success are the native AWS products
on the screen behind me, actually all over this room. They're enabling us to rapidly
transform our legacy platforms. But equally important to our
decision to standardize on AWS is the importance of having
a diverse, inclusive team. My team is non-traditional
by airline standards, and maybe even by
the standards of a technology team. Fifty percent of my direct
reports are women. Yeah. Sixty percent of our leadership team
is diverse. And it's not just that, many of our
teams come from the airline industry, but just as many come
from other industries as well. So what I want to leave you with
is an Amazon success story and it's a good one. But I also want to remind you
about the importance of working in an inclusive way. The results, particularly
in a crisis, speak for themselves.

It's why we believe so strongly
that inclusion propels innovation. And we're doing that together
with AWS. Thank you. [music playing] Thank you so much, Linda. I mean, what a great example
of how machine learning capabilities are driving innovation
in such impactful ways. It's such an important time
for all of us. And obviously,
for United Airlines as well. We want more customers to be able
to fully unlock the potential of machine learning. If you aren't thinking about ML
as part of your journey, you're going to want to get moving.

And we can help. Because as with other pieces
of the data journey, again, we provide the broadest
and the deepest set of machine learning and AI services. For expert machine learning
practitioners, we have optimized versions of the most popular
deep learning frameworks, including PyTorch,
MXNet and TensorFlow, that offer the best performance
in the cloud. And you saw earlier what we're doing
with chips and GPU instances to enable the best price performance
for ML training in the cloud. And then we have SageMaker, which makes it easy
for everyday developers to build, train, tune
and deploy machine learning models. Since we launched SageMaker in 2017, we've added 150 capabilities and
features and we are not slowing down.

If you want to see for yourself,
I encourage you to tune into Swami's keynote tomorrow,
it's going to be action packed. Speaking of action, tens of
thousands of customers are using SageMaker today to train models
with billions of parameters, and to make hundreds of billions
of predictions every month. And then for developers who don't
want to build train models who just want to add
intelligent capabilities like image and video recognition, speech, and natural language
understanding, personalization, or highly accurate enterprise search,
we have a range of AI services that make all this
as easy as an API call.

Everything we've talked about
so far, databases, data lakes, analytics,
and machine learning are required elements
of any modern data strategy, which is why we built
powerful capabilities in each. And they become even better
and even more powerful when you can combine them
and do things like easily query and move data
between your data stores, data lakes, analytics,
and machine learning tools. To help with this, we built
capabilities like Federated Query in Athena and Redshift. With Federated Query,
you can write a single SQL query that combines data
from multiple systems like RDS, Aurora or your S3 data lake, and it sends those results
back to you on the fly. For example, you could write a query
in Athena that combines gaming player
profiles from Aurora MySQL with in-game actions from DynamoDB to get a picture of what users in
specific zip codes took what actions. We also continue to build out
direct integrations between the various services.
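The gaming example above is essentially a join across two different data stores. Here is a minimal sketch of that join in plain Python, with dicts standing in for the Aurora MySQL rows and DynamoDB items; the table shapes and field names are invented for illustration, and a real federated query would of course express this in SQL from Athena.

```python
# Illustrative sketch: combining player profiles (Aurora MySQL stand-in)
# with in-game actions (DynamoDB stand-in), grouped by zip code.
# All record shapes and names are invented for this example.

aurora_profiles = [
    {"player_id": 1, "zip": "98101"},
    {"player_id": 2, "zip": "98101"},
    {"player_id": 3, "zip": "10001"},
]
dynamodb_actions = [
    {"player_id": 1, "action": "purchase"},
    {"player_id": 2, "action": "purchase"},
    {"player_id": 3, "action": "login"},
]

def actions_by_zip(profiles, actions):
    # Join on player_id, then count actions per (zip, action) pair.
    zip_of = {p["player_id"]: p["zip"] for p in profiles}
    counts = {}
    for a in actions:
        key = (zip_of[a["player_id"]], a["action"])
        counts[key] = counts.get(key, 0) + 1
    return counts

result = actions_by_zip(aurora_profiles, dynamodb_actions)
print(result)
```

The value of the federated version is that the two sides stay in their own stores; the query engine does this join for you on the fly instead of you first copying the data into one place.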

For example, today you can access
SageMaker from Redshift, Aurora or Neptune. This means that a Redshift user
could write a SQL statement that called SageMaker behind
the scenes to automatically build, train and optimize and bring a model
back to Redshift as a function. So you can benefit from SageMaker's
best-in-class ML without ever leaving Redshift. We're also enabling integration
with a lot of AWS partners. For example, earlier this year, we announced an important expansion
of our relationship with Salesforce, where you can now automatically
see Salesforce data inside of AWS
and AWS data inside of Salesforce. It's really powerful and just
an incredibly useful integration. The key point, again, is that we know
your data is on a journey, all stops along the journey matter.

You can't skip any of them
and at each step, you have to have
the right capabilities, and we're focused on continuing
to build out all the tools you need for the entire journey. Now, anytime you're combining and
sharing data from different sources, data governance
is absolutely critical. In fact, the ability to know
where your data is, what form it's in,
and who has access to it, is absolutely vital
for a modern data strategy. Having the ability to govern who can and cannot do various things
with which data is the very thing which will set data free
to travel its cloud journey.

Part of governance is having
very strict control over where your data sits,
and with AWS, you control your data, full stop. If you put your data in Germany,
it stays in Germany, if you put it in Japan,
it stays in Japan. And if you put it in Oregon,
it stays in Oregon. And of course, you also need control
over who has permission to access and use which data. Earlier we talked about granular
access control in Lake Formation. And when it comes to data lakes,
Amazon S3 provides the most comprehensive security
and auditability of any storage service, giving you complete control of
buckets, objects and access points. Another big piece is data encryption, we highly recommend
that you encrypt your data, we provide a lot of capabilities
to make it really easy to do that. Most AWS services have built
in encryption capabilities, and you have the option to actually
control your own encryption keys if you want to.

Now we've come to a very exciting
point in this story, in this journey. In Florence's story, this is the part where she used the rose diagrams
that she created to help others see. She helped those who weren't nurses, hadn't seen the conditions
in the hospital, or didn't have the skills
to work with the raw data. The next point of the data journey is to enable everyone in your company
or organization to act, which means they need to be able
to find, analyze and understand that data.

Today, customers are using
Amazon QuickSight, our ML-powered business intelligence
service, to easily create, publish and embed interactive data
visualizations and dashboards. And now they're using it to ask
questions of the data using plain language. We launched QuickSight Q,
a couple of months ago, and it makes the data
even more useful. With Q, any business user can just
ask a question using natural language, like what are the top
10 products by sales in 2021? And Q will return the answer in the form
of a visualization in seconds. No SQL to write, no dashboards
to create, no need to wait on a busy data
science team to help, you just ask questions, receive
a visualized answer, and move on.
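Under a question like "what are the top 10 products by sales in 2021?" sits a simple aggregate-filter-sort. The toy below shows only that underlying shape, nothing about how QuickSight Q actually parses natural language; the dataset and function name are invented for illustration.

```python
# Toy illustration of the query behind "top N products by sales in YEAR":
# filter by year, total per product, sort descending, take the top N.
# Data and names are invented; this is not QuickSight Q's implementation.

sales = [
    {"product": "widget", "year": 2021, "amount": 500},
    {"product": "gadget", "year": 2021, "amount": 900},
    {"product": "widget", "year": 2021, "amount": 700},
    {"product": "gizmo",  "year": 2020, "amount": 999},  # wrong year, excluded
]

def top_products_by_sales(rows, year, n):
    totals = {}
    for r in rows:
        if r["year"] == year:
            totals[r["product"]] = totals.get(r["product"], 0) + r["amount"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_products_by_sales(sales, 2021, 10))
# [('widget', 1200), ('gadget', 900)]
```

What Q adds on top of this is exactly what the talk emphasizes: the business user never writes the aggregation; they just ask the question and get the visualization back.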

Q is unique in that it returns
the answers both with a high degree of accuracy, but also does not require extensive
preparation of the data before Q can be used. It's really the best of both worlds. QuickSight gives business unit
users easier access to data and to analysis,
but we've also seen the same users want to create their own ML models
to move from descriptive analytics, where they're simply
summarizing insights of data, to make predictions
about future outcomes.

They're asking for more
powerful capabilities, so they can quickly make
more precise predictions and apply machine learning
to a range of problems like reducing churn, detecting fraud, forecasting sales
or optimizing inventory. But most analysts don't have
the skills to do this today. If you look back a few years,
in fact, only a very small
number of experts in machine learning actually had
the skills to do this. So we built Amazon SageMaker as a
way to democratize machine learning. And the momentum we're seeing
around SageMaker is really exciting. So we asked ourselves whether there
was something we could do to further democratize ML, by helping an entirely
new group of users with no ML experience,
no data science experience, who aren't even developers,
to be able to do machine learning.

And today, I'm excited to announce
a new capability of SageMaker to enable business users and analysts
to generate highly accurate machine learning predictions using a visual
point and click interface with no coding required,
Amazon SageMaker Canvas. [applause] Now business users and analysts
can use Canvas to generate highly accurate
predictions using an intuitive, easy-to-use interface without writing
code and with no ML experience required. Canvas uses terminology
and visualization that are already
familiar to analysts and complements the data analysis
tools that they are already using. With Canvas you can browse
and access petabytes of data from both cloud
and on-premise data sources such as Amazon S3, Redshift, RDS
and your own local files. Canvas uses powerful
automated machine learning technology to create models that
enable business users and analysts to create predictions of the same
high quality as data scientists.

Once the models are created,
users can easily publish results, explain and interpret models
and share dashboards and those models with
other analysts and collaborate
and enrich insights. With Canvas we’re making it
even easier to prepare and gather data for machine learning, to train models
faster and expand machine learning to an even
broader audience. It’s really going to enable
a whole new group of users to leverage their data and to use ML
to create new business insights. So we’ve just talked a lot about data and all of the amazing
things happening with it. One thing that is clear is that
building a successful data strategy requires a full end-to-end
set of solutions for storing, securing, analyzing
and driving insights and predictions. And while we’ve delivered a lot
to achieve this vision, there remains a lot more innovation
on the horizon. Just think about
Florence Nightingale, can you imagine what she would
think if she were alive today? When she began designing new ways
of modeling data to understand basic health,
data was an afterthought.

Today, it’s a way of life
and it touches everything we do. Just as Florence used a lamp so that she could see into
the darkness, we all use data to help us see
more clearly what lies ahead and we’re doing more with data
than we ever thought was possible. So when I first referenced
pathfinders today, I said that they possessed
certain traits. Well some are constantly
in search of a better way to do things like Hank Luisetti, and others have the ability
to see what others don’t, like Florence Nightingale. Yet other pathfinders work
with the intention of giving those
who come after them the tools that they need to forge
their own paths forward. And I’d like to tell you about
a pathfinder who did just that. The year is 1946, World War II
has just ended and Roscoe Brown is watching his application
for a commercial pilot job being literally torn up in his face. He’s told, “We don’t
hire black pilots.” Unfortunately, this is a scene
that’s played out way too many times for way too
many black men and women.

Roscoe, he should have been
a shoo-in for that job. He wasn’t just a pilot, he was one of the first
African American pilots ever to fly in the US military
as one of the Tuskegee Airmen. The Tuskegee Airmen flew over 15,000
combat missions during World War II
destroying hundreds of enemy planes and demolishing over 1,000 rail cars
and transport vehicles. Roscoe flew 68 combat missions
and was credited as the first pilot to shoot down one of Germany’s new
state-of-the-art fighter jets. Roscoe was awarded
the Congressional Gold Medal and the Distinguished
Flying Cross for his bravery
and skill during the war. So on that day in 1946, Roscoe was already a decorated
military veteran and a pathfinder. What he had accomplished
as a member of the Tuskegee Airmen
for generations to come. But Roscoe wasn’t done pathfinding. When racial discrimination unjustly
ended Roscoe’s piloting career, he set out on a new path
to help generations of black men and women become pathfinders
in their own right. Here’s just a taste
of what Roscoe did. He earned a Masters
and PhD in Education and became a professor
at New York University, where he educated thousands
during a 25-year tenure.

He founded the 100 Black Men group, to mentor and give financial aid
to young black men. He became president of
the Bronx Community College, a school with a predominantly
minority student body
admissions policy. He created programs there
to help students enter well-paying professions
like healthcare and technology. He established the Bronx
Educational Opportunity Center, which has helped more than
35,000 people receive diplomas, enter college or find a job. And he produced
an Emmy Award-winning TV program on black leadership,
equality and justice and those are just a few
of Roscoe’s contributions.

Before he passed in 2016,
Roscoe was awarded both the NAAPC Freedom Award
and the Congressional Award for service
to the African American community. As a black leader, community
activist, educator and volunteer, Roscoe wasn’t just breaking down
barriers by being the first, he wasn’t just giving
his time to help others and he wasn’t just teaching. Through all of these efforts, he was giving to so many
who came after him, the tools that they needed
to forge their own paths and create their own
opportunities to grow, to succeed and to become
pathfinders in their own right. Roscoe touched pathfinders
who led in healthcare, in technology,
in education and more. And I’m honored today to have Roscoe’s great-grandson here in
the audience with us, Corey O’Brien. [applause] Thank you so much for being
with us here Corey, you’ve got to be so proud of your
great grandfather’s achievements and I’m sure
he’s just as proud of you. And I’m really personally happy
that you could join us to help honor
and learn from his memory. Thank you again. [applause] As a leader, I’m incredibly inspired
by Roscoe’s story and his commitment
to providing others the skills that they needed to reach
their full potential and achieve their dreams.

And it’s in this spirit
that AWS is committed to train 29 million people
in cloud skills by the year 2025. As companies are moving
so quickly to adopt the cloud and to transform their businesses, there’s an exploding need
for digital and cloud skills. Since the pandemic, 85% of workers
now feel that they need more
technical knowledge to do their jobs. I mean that’s a big number. We have an incredible opportunity
to train millions of people, close the technical skills gap and to enable many,
many more people to be pathfinders. So as part of this commitment,
we’ve launched new programs like AWS Skill Builder, which is
a digital learning experience providing access to over 500 FREE
on-demand courses in 16 languages.

And we’re putting a lot of these
courses up with one-click access, making learning cloud skills
as easy as buying a pair of shoes. We also have the AWS re/Start program
which is a free, intensive 12-week program
that prepares people with little, in many cases, no technology
experience for careers in cloud computing and connect them
with potential employers. We’ve seen fitness instructors, folks working at fast
food restaurants become cloud engineers,
it’s honestly inspiring. And we’re now more than tripling
the number of locations where re/Start is available
to cover over 95 cities in over 38 countries
by the end of this year. [applause] So even as we add more powerful
compute capabilities and build more comprehensive
offerings around data, we also need to develop solutions
that are purpose-built to address specific issues,
that help more people use the cloud to find a path through
problems and opportunities.

These solutions can be horizontal
or function-specific, or target a specific use case. An example of this is Amazon Connect,
our contact center in the cloud. So, any folks using Connect out there? I’m not surprised, because Connect
is actually one of the fastest growing services in AWS history,
with tens of thousands of customers and over 10 million contact
center interactions every day. With Connect, customers can have
a contact center in the cloud up and running in minutes,
that literally can scale to millions and millions of their customers. Now other solutions that we’re
building help solve problems
in specific industry verticals and over the past couple of years
we’ve built abstractions or higher level services that make it
even easier and even more accessible for people to consume the cloud
and to interact with AWS across a wide range of industries
from healthcare, where we’ve launched targeted
services like Amazon HealthLake, to financial services or manufacturing
or automotive and many more.

For example, financial services
leaders like Capital One, JP Morgan Chase, Robinhood, Stripe,
Barclays and of course Nasdaq are innovating and reinventing
new customer experiences on AWS. Data is the lifeblood of
the financial services industry, but usually, as elsewhere,
the data is siloed and hard to access and use,
and the industry is heavily regulated so that compliance
at each step is a must. So out of our work
with financial customers and partners
came Amazon FinSpace, which is a purpose-built
analytics service that reduces the time it takes
financial institutions to find, prepare and analyze financial data,
down from months to minutes while enforcing data access controls
and compliance requirements. Now at AWS, we’ve been lucky
to have worked for years with some of the financial industry’s
leading pathfinders.

For example, we’ve been working
with Goldman Sachs for over a decade
as they have innovated and introduced new experiences
like Marcus, their consumer-facing digital
banking business. Recently, we realized together, that there was a really
big opportunity to help other financial services
companies also leap forward in data management and analytics. And so today, I’m really,
really excited to announce a collaboration with Goldman Sachs to launch the Goldman Sachs
Financial Cloud for Data. [cheering and applause] The Goldman Sachs Financial Cloud
for Data combines Goldman’s decades
of investment, data and analytics experience with AWS’s
industry-leading cloud capabilities. And together, we’ll reduce the time
and the developer resources that investment firms spend managing
infrastructure and wrangling data and provide them with advanced
analytics capabilities. With minimal setup, clients can now analyze a range
of financial data sets at scale, using the latest
quantitative techniques that Goldman Sachs uses
to analyze markets in real time and facilitate
millions of trades per day. We’re really excited that
this collaboration is going to put exciting new technology
in the hands of more funds, asset managers
and other institutional clients to power innovation throughout
the financial services industry.

Yet another area where data is
driving incredible transformation is the industrial sector. AWS customers are transforming
their manufacturing operations using data and machine learning to revolutionize
supply chain management, to improve quality
and to deliver smart products. For example, Volkswagen brings
11 million cars per year to market and adds 200 million parts
each day into its factories. Working with AWS, they’re actually
uniting over 120 factory sites, and data from over 30,000 suppliers into a single architecture
in the cloud, a single one, with productivity
expected to increase by 30%.

It’s incredibly exciting
to see companies who have been innovating
in the world, many in the physical world, some for decades, some
for more than a century, continuing
to forge new paths and transforming
within the new digital world. So I’m really excited to bring out
one of these companies to share their story.

3M. 3M is a company
we’re all familiar with. I mean, who doesn’t use
a Post-It note? But did you know, they’re also one of the most diversified
industrial companies in the US, selling more than 55,000 products
across 26 business lines. So to tell you more,
please help me welcome, Shaun Braun, Senior Vice President
of Digital Transformation at 3M, to the stage. Welcome Shaun. [music playing] Thank you, Adam.
Good morning, everybody. So as Adam said,
many of you know 3M best for Post-It notes and Scotch Tape, but what you may not know
is we also help planes fly, keep frontline workers safe,
enable the future of communication and connect healthcare
systems globally.

3M is much more than
its 60,000 products, 95,000 employees
and 125,000 patents. It is built on one simple idea. Take innovation from one business and apply it in new and innovative
ways to create breakthroughs. We have been a pathfinder
of innovation for over a century. Our purpose is to unlock
the power of people, ideas and science
to improve lives everywhere. Simply put, that’s 3M. Today, you’re never more than
3 meters away from a 3M innovation and as we think about tomorrow, we are advancing our materials
science leadership in bold new ways
with digital science. And if you’re wondering, yes,
that’s where you all come in. When you’re on a journey like we are, it helps to have a guide
with stamina, smarts and strength. Digital transformation requires
all of those, and given the uniqueness of 3M’s
high-speed manufacturing, globally connected supply chain,
scaled enterprise IT systems, and molecular- and physics-based R&D, a trusted advisor
that’s fluent in all of these areas and knows how to connect them is key,
and that’s what AWS does for 3M.

They are helping us become
a digital company of the future. We began our cloud journey
like many of you. We had to move away
from our aging data centers. But most importantly we knew
we needed a strong foundation to deliver on our digital
transformation vision. For us, building this foundation
required moving away from our past legacy environment. We transitioned over
2,000 applications, 9,000 virtual machines,
a global SAP instance, 45 petabytes of data,
all while closing a large data center, and moving 61% of our tier-1
applications in one year. Simply put, that was a lot to do. It was the strong collaboration
and amazing teams that 3M and AWS brought together that delivered
these results on time and on budget. Most important though for 3M, this unlocked the potential
of data-driven insights. It collapsed the disparate
data sources we had in applications into digital building blocks
on a cloud environment. And so from this strong-base
platform, we took the next step
in our journey with AWS, driving digital impact
into our products and operations.

Today, I’d like to share
three examples with you. First, for 3M manufacturing, it is a key competitive
differentiator for us, but it’s also very complex. With over 200 plants
across 37 countries, we solve customer problems
in every part of the world by leveraging over
51 technology platforms. With this complexity and our need
for agility and scale, we migrated our key manufacturing
sites to AWS over a year. From that, it enabled us
to leverage AWS services to develop and deploy a system
to track raw material flow all the way to finished goods. By leveraging Neptune Graph
Database and API Gateway backed by Lambda,
3M teams can now complete these searches from raw material
all the way down to plant name in seconds instead of days. This obviously, has been
very valuable to better navigate
the raw material shortages and the disruptions that we’ve all
seen across our global supply chains. As a result,
our manufacturing is smarter.
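The raw-material-to-plant search Shaun describes is, at heart, a graph traversal. Here is a minimal sketch of that idea, with a plain Python adjacency list standing in for the Neptune graph database; every node name is invented for illustration, and the real system obviously runs Gremlin or SPARQL queries over far larger graphs.

```python
# Illustrative sketch: tracing a raw material lot through intermediate
# batches and SKUs down to the plants that used it, as a breadth-first
# traversal over a directed graph. All node names are invented.
from collections import deque

EDGES = {
    "resin_lot_42": ["film_batch_7"],
    "film_batch_7": ["filter_sku_9"],
    "filter_sku_9": ["plant_minneapolis", "plant_singapore"],
}

def reachable_plants(start):
    plants, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node.startswith("plant_"):
            plants.append(node)
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(plants)

print(reachable_plants("resin_lot_42"))
```

Modeled this way, "which plants received material from this lot?" is answered by one traversal, which is why moving the data into a graph store turned a days-long search into seconds.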

Second, clean air. Clean air has never been
more important. 3M Filtrete is the most trusted brand
of indoor air cleaning filters on the market. By leveraging AWS services we combined material science
with digital science. We deployed AWS Lambda for advanced features
like Amazon Wi-Fi Simple Setup, Smart Reorder, and Smart Home Alexa
Multi-Capability Skills. We use Amazon CloudWatch
to conduct complex usage analysis and to detect
and protect our services from issues and cyber threats. As a result,
our products are smarter. And finally, third, healthcare. Healthcare has experienced
a dramatic shift to data-driven care
over the last year. We leverage AWS to power many of
our most critical platforms across our healthcare
IT solutions portfolio, including our flagship product
360 Encompass.

EC2 Compute, S3, CloudWatch, CloudTrail monitoring
give us flexibility, performance and scalability
across our operations and security monitoring beyond our traditional
data center deployment. But most importantly, for our
customers, we have reduced our deployment time
from weeks down to hours, enabling us to deploy multiple
AWS regions across the globe. As a result,
our healthcare systems are smarter. Becoming smarter means
learning something new. At 3M that means transitioning from
material science to digital science, where we become as well known
for our digital products as we are for our physical products. That takes a different mindset
and a collection of capabilities. To make the most of future trends
that shape the world, we are building
those capabilities now.

3M needed to define digital. It’s a very fuzzy word. A lot of people use it
in very broad manners. We defined it in four specific
outcome areas. Digital customer, focusing on
a seamless customer experience. Digital products, delivering
differentiated products and services across connected products
and services globally. Digital operations,
optimizing planning, quality, logistics
and finally, digital enterprise, generating corporate
functional efficiency. The next phase of the digital
transformation with AWS spans across our operations
and product portfolio. Together with AWS, we have built
a digital marketplace that gives 3M builders access
to tools, code building blocks, and machine
learning algorithms to cross share. This allows our teams to quickly
develop digital products and solutions for our customers
without having to reinvent the wheel. As an example, we're implementing
foundational elements like a data-driven
everything strategy, which encompasses projects
like our AI/ML workbench on the SageMaker suite of services. This provides tool sets for data
scientists, developers and analysts, and our model hub application to support
high-performance computing jobs. In our digital transformation
journey, we have learned how important it is to build a strong foundation
of accessible tech services, build a digital mindset
across the organization, and to celebrate
digital successes.

Each success builds on the one
before it, and the next thing you know
you have a digital transformation that will become real to all of
the employees across the organization, and for 3M this means
expanding our portfolio, our digital products for customers
all over the world, and finding new ways to reach
our vision of improving every life. AWS is enabling us
to innovate at scale and at a different
velocity. For over a century 3M has been
a pathfinder creating breakthrough technologies
in healthcare, safety, industrial, consumer, electronics,
and transportation. We're now excited to transform
3M from a company with household-name
material products to a company with iconic
material and digital products. We're doing that together
with AWS. 3M leads in material science,
3M leads in digital science, 3M is science applied to life. Thank you. [applause and music] Thank you, Shaun.

The work that 3M is doing
is such a great example of how the cloud’s
really helping manufacturers reimagine their businesses,
improve supply chain operations, and create
all new revenue streams. It’s just a fascinating story. And you can see from 3M’s story
that just like in financial services, industrial companies
are transforming. And they're looking
for purpose-built solutions that help them use
the troves of data that they can now collect to improve
operations and speed up innovation. That's why last year
we launched AWS for Industrial which brings together
AWS services with solutions from hundreds of AWS partners, with expertise across every area
of industrial operations. AWS for Industrial includes
five purpose-built AWS services to make it easy
for industrial customers to use machine learning
in their production processes. For example, Amazon Monitron
is a system that uses ML to detect abnormal conditions
in industrial equipment.

All you have to do is mount
sensors on your equipment, install the app on a smartphone
and start sending the data to AWS through a gateway device. Monitron applies pre-built ML models
to identify potential issues and send that information
to you in the mobile app. And again, no coding and no
ML experience required. In fact, we use Monitron in Amazon
fulfillment centers to prevent problems
like broken conveyor belts, so that every Amazon package
arrives to your front door on time. This is one example of how
industrial settings are being revolutionized. The opportunities are huge
if customers can figure out how to collect and synthesize
all the data flying in.
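The Monitron pattern just described, watch a stream of sensor readings and flag the abnormal ones, can be illustrated with a very simple detector. This is only a sketch of the idea using a rolling mean-and-deviation threshold; Monitron's actual pre-built models are not public, and the readings, window size, and threshold below are invented for the example.

```python
# Illustrative sketch (not Monitron's actual models): flag a reading as
# abnormal when it deviates sharply from the recent history of the signal.
import statistics

def detect_anomalies(readings, window=5, z_threshold=3.0):
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        std = statistics.stdev(history)
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            flagged.append(i)
    return flagged

# Simulated vibration signal from a conveyor bearing, with a spike at index 6.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 9.0, 1.0]
print(detect_anomalies(vibration))
```

The operational win is the one the keynote describes: the anomaly is surfaced to a technician's phone before the belt actually breaks, instead of being found after the fact.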

Some customers have begun using data
to build digital twins of equipment, of production lines or factories. A digital twin is a virtual
representation of a physical object, a process or a service. It mimics the object
that it represents. You could have a digital twin
of a jet engine, or of the production line
where a jet engine is assembled, or of the factory floor
that houses the production line. In the past, when you wanted
to test a new product, you had to build a physical unit
in order to do so.

Now you can build a digital twin,
and run thousands of tests to quickly model behavior
and improve operations. We believe that digital twins
have the potential to change the game for any company
with industrial processes, except, today it's difficult
to create them. You have to bring together data
from many sources, figure out how to map
the relationships between all those sources, and combine the data
with the 3D modeling to create the virtual representation. Plus, you have to keep it up to date
when things change, which is all the time. Doing this today means teams
of specialized developers and a lot of heavy lifting. We wanted to make
digital twins available to as many companies as possible. That's why we're really pleased to be
announcing AWS IoT TwinMaker, a new service to make it easier
for developers to create digital twins
of real time systems like buildings, factories, industrial equipment,
and production lines.

[applause] Again, getting started is easy. Developers connect IoT TwinMaker
to data sources like equipment sensors, video feeds,
or business applications. If you're already using AWS, many of those connections
are built right in. TwinMaker automatically creates
a knowledge graph that understands and maps out the relationships
between the data sources. So, it's kept up to date in real time
as the data changes, then you import your 3D models and TwinMaker
creates the digital twin, and you're now ready
to use it to understand and improve your physical system.

Customers are really going
to innovate faster and operate more efficiently
with this new capability. We're also collaborating
with Siemens to include TwinMaker along with 60
other AWS services in their industrial
solutions offerings. Working together will make it
easier for companies of any size to use digital twins
in their business. So far, I've talked about
how purpose-built services are allowing new types of users
like financial analysts and manufacturing facility operators
to interact with AWS. There's one more industry
that I'm excited to talk about as it speeds into the new world
of the cloud, of data, and of machine learning.
And that's the automotive industry. So the cloud is fundamentally
changing this industry too, including how vehicles are designed
and manufactured, the features they offer,
and how we drive. And it's all happening
at Formula One speeds, as customers like Toyota, BMW, Volvo,
and Honda are designing vehicles that are infused
with software connected by sensors, and systems generating unheard
of amounts of data.

We will see cars that are
intelligent and autonomous, energy efficient
and inexpensive to maintain. However, the advanced
vehicle sensors on one car can generate up to two terabytes
of data every hour, two terabytes. Now multiply that
by an ever-increasing number of vehicle makes
data in different formats. It's easy to understand
why it's necessary to build
custom data collection systems. But building these systems
is difficult and time consuming. That's why today we're announcing
AWS IoT FleetWise, which is a new service
that makes it easier and more cost effective
for automakers to collect, to transform, and to transfer vehicle data
to the cloud in near real time. [applause] Automakers
can use IoT FleetWise to easily collect and standardize
vast amounts of data from millions of vehicles. IoT FleetWise can then
send this data for processing
in near real time by applying intelligent
filtering capabilities to sift through petabytes
of connected vehicle data, and to extract
only what's needed, and that drastically reduces
the volume of data being transferred. Once the data is in AWS, automakers
can perform remote diagnostics, they can analyze fleet health
to prevent safety issues, or improve autonomous
driving systems.
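
The filtering idea described above, extracting only what's needed before transferring anything, can be sketched with a toy condition-based filter. This is an illustration of the concept, not the IoT FleetWise API; the signal name and the over-temperature rule are invented for the example.

```python
# Toy condition-based filter: keep only the vehicle signal samples
# that match a campaign-style rule, so far less data crosses the
# wire. Concept sketch only; this is NOT the AWS IoT FleetWise API.

def filter_samples(samples, signal, threshold):
    """Return only the samples where `signal` exceeds `threshold`."""
    return [s for s in samples if s.get(signal, 0) > threshold]

# Simulated telemetry: one sample per second for a minute, with a
# brief over-temperature spike every 20 seconds.
samples = [
    {"vehicle": "vin-001", "t": t,
     "brake_temp_c": 80 + (40 if t % 20 == 0 else 0)}
    for t in range(60)
]

# Only the over-temperature events are transferred.
hot = filter_samples(samples, "brake_temp_c", 100)
print(f"{len(hot)} of {len(samples)} samples transferred")  # 3 of 60
```

Scaling the same ratio from sixty samples to petabytes of fleet data is what makes evaluating the condition at the edge, before transfer, worthwhile.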

What all of these capabilities,
purpose-built
for specific industries or specific business
functions or use cases,
have in common is that they put the power of the AWS
to build more of these, be it abstractions on top of
our existing foundational services, collaborating with industry
leaders on new offerings, or building brand new applications.

And we hope that these capabilities
will make it possible for many more people to become
pathfinders in their companies, and their industries,
and their communities. We've talked a lot
about pathfinders today,
and all the incredible work
that you are doing, and the unstoppable cloud movement that each
and every one of us is a part of. And while so many companies
the unmatched performance, it's apparent that there's
something even bigger at play. I'm talking about agility, the ability to be curious
and experiment while failing fast, and putting data at the center
of your organization. Ultimately,
what the cloud and AWS offer is the opportunity
to truly transform. CEOs and other leaders
tell me all the time that the move to the cloud
is reshaping their culture. It's allowing them to harness
their data in new ways, and drive better decisions.

It's freeing up resources
and moving obstacles out of the way to make innovation
faster, easier, and better. We've heard so many examples
of that today. Yet, despite everything we've done
over the past 15 years, we're still only getting started
with this transformation promise. In the future, the cloud will empower
even more people to innovate, create new businesses and solve
the world's most pressing challenges. All of us will have the opportunity
to find a better way or to see what others didn't,
and along the way, we'll find the opportunity to enable
others to forge their own paths. With that as our destination,
our promise is that we will continue to keep our customers' needs
at the center of everything we do. We’ll continue making AWS
more powerful with foundational capabilities, more intelligent with data,
and ML insights, and easier to use for targeted
functional and industry use cases.

We’ll continue to improve
performance and security, drive down cost
and enable more innovation. And we will continue to push
the edge of the cloud to put it into the hands of anyone
who wants to use it. While we say the cloud has become
a movement, it's actually even more than that. The cloud is an opportunity
to reimagine everything. It provides a pathway
to true transformation. I just want to give a very big
thank you to all of you. And of course, to the customers
who joined me on stage today for being here with us. And a big shout out to the AWS team who works so passionately
every day on your behalf. I really look forward to being
with you this week as we take in all
that re:Invent 2021 has to offer. Please be safe
and enjoy the conference. Thank you. [applause] [music playing].
