How Google Built the Pixel 2 Camera

Over the years, an extremely wide range of cameras has been designed: cameras built specifically to photograph space or the inside of the human body, cameras that help drivers see, and, of course, cameras that capture our daily lives. Today most of us take most of our photos and videos with our phone cameras, which let us do amazing things like shoot 360-degree photos, record 4K video, and meet new friends. Ever since I got to try the camera on last year's Google Pixel, I've been curious: how do these images come out of a camera smaller than a fingernail? What is happening inside it that I can't see? So when the Pixel 2 launched recently, I thought it was the perfect opportunity to go around Google, meet the people who developed the phone's camera, and learn as much as I could. Are you ready for this, Noodles? There will be no photos of turtles, but I promise there will be other cool pictures. Let's begin.

Camera hardware

The main challenge in building a phone camera is size, because we want phones that are light and thin. We basically have a grape-sized space to fit the camera into. From the outside of the phone you can only see the lens, but the Pixel 2 actually contains a stack of six lens elements. Some of them have a strange W-like shape, because they are designed to correct optical aberrations while confining the image to a very small area. Also new this year is optical image stabilization: there is an actual physical assembly wrapped around the lens, with motors that can move the lens along several axes. Focusing moves the lens elements in and out, while optical image stabilization moves them up, down, left, and right.

You can see it moving; it compensates for the motion of your hands. Just one millimeter behind the lenses sits the sensor, the digital camera's equivalent of a strip of film. It is covered with light-sensitive cavities, also known as pixels, that capture light and turn it into an electrical signal. This year the image sensor has a resolution of 12 megapixels, but each pixel is divided into a left half and a right half, so in a sense there are really 24 million sub-pixels. We'll talk more about this later, but the important thing is that it gives the sensor new capabilities related to depth of field and autofocus.

Image processing

Can you give a brief overview of what happens when you take a picture? Surprisingly, we can use a piece of silicon to take a picture, but you would not want to look at the image that comes straight off the sensor: it is dark, green, and has bad pixels stuck in it. Even before any computational photography, a lot of processing is required to turn that raw image into a good final image. This kind of processing happens in every digital camera, though each camera does things a little differently; on the Pixel 2 there are roughly 30 to 40 steps. For me, the first step was the most interesting one. The sensor has a checkerboard-style color filter made up of red, green, and blue pixels. Instead of every pixel detecting the full spectrum of colors, each one senses only red, only green, or only blue. Twice as much green light is collected, because our eyes are more sensitive to green.

So the red we see here has to be combined with the green we see here and the blue we see here to produce a color image; this process is called demosaicing. After that, the image's gamma, white balance, resolution, sharpness, and much more are adjusted. All of this has traditionally been work done by hardware, meaning processing blocks dedicated to exactly this task. But as cameras turn more towards computational photography, the processing increasingly becomes the job of software. "Computational photography" can mean many things, but it basically refers to advanced algorithms that enhance image processing. The Pixel 2 supports two main computational features, HDR+ and Portrait mode.
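To make the demosaicing idea concrete, here is a minimal sketch, assuming an RGGB Bayer layout and nothing about the Pixel 2's real pipeline, that bilinearly fills in the two missing colors at every pixel. The function name and the kernel are illustrative choices, not Google's implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Rough bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    mask = np.zeros((h, w, 3), dtype=np.float32)

    # Scatter each raw sample into its own color plane and remember
    # which positions actually hold a measurement.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # red
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # green
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # green
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # blue

    # Fill the gaps in each plane by averaging the neighboring samples.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]], dtype=np.float32)
    for c in range(3):
        num = convolve2d(rgb[:, :, c], kernel, mode="same")
        den = convolve2d(mask[:, :, c], kernel, mode="same")
        rgb[:, :, c] = num / np.maximum(den, 1e-6)
    return rgb
```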

When we decided to create HDR+, we wanted an algorithm that could take a small sensor and make it perform like a large one. That means excellent performance in low light and a wide dynamic range, so we can capture both very dark and very bright details in the photo. To achieve this on a phone camera, every photo you take is not a single image but a merge of up to ten frames, all of them underexposed to preserve detail in both the dark and the bright parts of the scene.
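A toy numerical illustration, with made-up numbers rather than anything from the Pixel team, of why merging a burst of underexposed frames acts like a bigger sensor: averaging N frames with independent noise cuts the noise by roughly the square root of N, which is what lets the shadows be brightened without falling apart.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = 0.1                                                    # a dim scene value
frames = scene + rng.normal(0, 0.05, size=(10, 1_000_000))     # 10 noisy frames

single = frames[0]
merged = frames.mean(axis=0)

print("noise of one frame:", single.std())   # ~0.05
print("noise after merge :", merged.std())   # ~0.05 / sqrt(10) ≈ 0.016
```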

But HDR+ doesn't just average all of these frames, because hands can move and things in the scene can change. So we look at each tile of the image and ask: has this tile moved relative to the corresponding tile in another frame? Can we shift it a little and make it match? If we can't work out where a tile went, we simply ignore it in that frame. We are very careful about avoiding "ghosts". I like that "ghost" is a technical term. Yes, it means a double image. After the ghosts have been found and removed, there is an aesthetic decision about how the dark and bright portions of the image are combined. If you took a photo in very dim light, we could make that shot look pretty good by merging a bunch of frames, but should we make it look as bright as if it were taken in broad daylight? If we lift all the dark shadows and keep the highlights bright, we end up with a surreal, cartoon-like image, so we have to decide how far to push it. Okay, stop.
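Here is a heavily simplified sketch of a tile-based merge with ghost rejection, the general idea described above. The real HDR+ pipeline aligns tiles by searching for the best shift rather than simply discarding mismatches, so treat the tile size, threshold, and function below as assumptions made for illustration.

```python
import numpy as np

def merge_burst(frames, tile=16, threshold=0.05):
    """Average a burst of grayscale frames tile by tile, skipping tiles
    that differ too much from the reference frame ("ghosts").

    frames: list of H x W float arrays; frames[0] is the reference.
    This illustrates the idea only; it is not the HDR+ algorithm.
    """
    ref = frames[0]
    out = np.zeros_like(ref, dtype=float)
    h, w = ref.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ref_tile = ref[y:y+tile, x:x+tile]
            stack = [ref_tile]
            for f in frames[1:]:
                cand = f[y:y+tile, x:x+tile]
                # Keep the tile only if it matches the reference closely;
                # otherwise assume motion and ignore it for this frame.
                if np.mean(np.abs(cand - ref_tile)) < threshold:
                    stack.append(cand)
            out[y:y+tile, x:x+tile] = np.mean(stack, axis=0)
    return out
```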

Do you see how the focus here is on Mark while the background is blurred? This is called shallow depth of field, and it is normally achieved by shooting with a fast lens at a wide aperture. Portrait mode is a new feature on the Pixel 2 that recreates this look, but of course things are a little trickier on a phone: the lens is very small and the aperture is very small, so when you take a regular picture on the phone, pretty much everything comes out in focus. To work around this, Portrait mode uses a combination of machine learning and depth estimation. Instead of just looking at each pixel as a pixel, we are trying to understand what it is. Is it a person, is it [incomprehensible], what exactly is this pixel? The team trained a neural network on nearly a million examples of people, people wearing hats, carrying ice cream cones, posing with their friends and with their dogs, so it could identify which pixels belong to a person and which belong to the background.

This allows the algorithm to create a mask: everything inside the mask must stay sharp. Then the question becomes how blurry things should be outside the mask. When we chose the hardware, we knew we would be getting a dual-pixel sensor, where each pixel is actually divided into two sub-pixels. It is similar to a person's two eyes, which get two slightly different views of the world, except here the views come through the left and right sides of the tiny lens. That difference in perspective is smaller than the tip of a pencil, but it is enough to generate a rough depth map. We then estimate how much blur to apply to each pixel depending on how far away we estimate it to be.
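As a rough sketch of how a person mask and a depth estimate might be combined into the final rendering, the toy compositor below keeps masked pixels sharp and blends everything else with a blurred copy, with the blend growing with estimated distance behind the subject. The helper names and the Gaussian blur are assumptions; the Pixel's actual synthetic defocus is considerably more sophisticated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_portrait(image, person_mask, depth, subject_depth, max_sigma=8.0):
    """Toy 'portrait mode' compositor (illustration only, not Google's method).

    image:       H x W x 3 float array
    person_mask: H x W bool array, True where the segmentation says "person"
    depth:       H x W float array, larger = farther from the camera
    """
    # Effective blur grows with distance behind the subject, capped at max_sigma.
    behind = np.clip(depth - subject_depth, 0.0, None)
    sigma = max_sigma * behind / (behind.max() + 1e-6)

    # Precompute one strongly blurred copy and blend it in per pixel.
    blurred = np.stack(
        [gaussian_filter(image[:, :, c], sigma=max_sigma) for c in range(3)],
        axis=-1,
    )
    weight = (sigma / max_sigma)[..., None]   # 0 = sharp, 1 = fully blurred
    weight[person_mask] = 0.0                 # keep the person sharp
    return (1.0 - weight) * image + weight * blurred
```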

So even if we take a picture of something other than a person, Portrait mode can still use the depth map to give us that blurred-background look. For selfie lovers, Portrait mode is also available on the front camera.

Testing and tuning

Before preparing this episode, I didn't realize how much the cameras on our phones are tested and tuned. There is an old saying in engineering: "If you don't test something, it doesn't work." The quality of the camera depends largely on how well you build a set of tests that let you evaluate its performance. Camera tuning is a mix of art and physics, with thousands of parameters to be set.
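To give a flavor of what tuning against automated tests might look like, here is a hypothetical sweep over two interacting parameters scored by a stand-in quality metric. The parameter names, the metric, and the interaction are all invented for illustration; this is not Google's tuning harness.

```python
import itertools

def image_quality_score(sharpening, noise_reduction):
    """Stand-in for an automated lab measurement (hypothetical formula).

    In a real lab this score would come from shooting test charts and
    measuring sharpness, noise, color accuracy, and so on.  Here the two
    parameters interact: noise reduction softens detail, while sharpening
    recovers detail but also amplifies the remaining noise.
    """
    detail = sharpening * (1.0 - 0.6 * noise_reduction)
    noise = (1.0 - noise_reduction) * (1.0 + 0.5 * sharpening)
    return detail - noise

# Sweep both parameters over a grid and keep the best-scoring combination.
best = max(
    itertools.product([x / 10 for x in range(11)], repeat=2),
    key=lambda p: image_quality_score(*p),
)
print("best (sharpening, noise_reduction):", best)
```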

The problem is that they all interact with one another. You make one change and then have to figure out which ten other things are affected by it and need to change as well. That is why dedicated labs test the camera through a set of automated tests, checking things like auto white balance, autofocus, color accuracy, sharpness, and more. What would the consequences be if this kind of testing weren't possible? Without it, it would take us several weeks to get a single data set, and we couldn't iterate the way good engineering requires. One of my favorite setups is a robotic motion platform called a hexapod, which is used to test video stabilization. We can program in different motion profiles, so we can give it a slow, gentle wave or ask it to shake at a very fast speed. This year, both optical and electronic image stabilization are used when shooting video. The first corrects small movements like hand shake, while the second corrects larger movements. It works by looking at the video frames and comparing them against gyroscope measurements. The gyroscope tells you whether you moved in this direction or that direction, and we use that to determine whether a movement was unintentional; if it was, we take that movement and cancel it out.
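A minimal sketch of the electronic-stabilization idea, assuming a one-dimensional camera path for simplicity: low-pass filter the gyroscope-derived motion to get a virtual steady path, then shift each frame by the difference. Real EIS deals with 3-D rotations and rolling shutter; the names and numbers here are illustrative only.

```python
import numpy as np

def stabilize_offsets(gyro_path, smoothing=0.9):
    """Compute per-frame correction offsets from a 1-D camera motion track.

    gyro_path: per-frame camera position (e.g. integrated gyro yaw),
               expressed in pixels of apparent image motion.
    Returns the shift to apply to each frame so the video follows the
    smoothed path instead of the shaky one.  Illustration only.
    """
    smoothed = np.empty_like(gyro_path, dtype=float)
    smoothed[0] = gyro_path[0]
    for i in range(1, len(gyro_path)):
        # Exponential moving average = virtual "steady" camera path.
        smoothed[i] = smoothing * smoothed[i - 1] + (1 - smoothing) * gyro_path[i]
    return smoothed - gyro_path   # shift each frame by this much

# Example: a shaky pan -- the corrections cancel the jitter but keep the pan.
shaky = np.cumsum(np.random.default_rng(0).normal(0.5, 2.0, size=120))
print(stabilize_offsets(shaky)[:5])
```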

So many operations happen inside a phone camera like the one on my Pixel 2 that I could easily have made a video just about stabilization, autofocus, or any other single feature. While preparing this video I learned a lot of different things. For autofocus in the dark, the Pixel 2 has a small infrared laser. The rear camera weighs 0.003 lbs, which is roughly the weight of a paper clip. I realize that I am only beginning to explore how the phone's camera works. It is an amazingly complex process, and as you may have noticed during this episode, which shows some of the photos taken with the Pixel 2, it earned the best smartphone camera score in DxO's tests.

And if you want to see more, you should watch this video that a friend and I shot with a Pixel 2. And that's the end of the episode, bye! Noodles is going to take a nap, and you'll have to go watch another video. Bye!
