Within just a few years, smartphones have become very powerful, and one of the areas that has benefited most is the camera. You can now take pro-level pictures and shoot Ultra HD videos on a device that fits in your pocket, at a fraction of the cost of a high-end digital single-lens reflex (DSLR) camera.
The most significant facilitator of these advancements is not lenses or sensors but AI, working alongside powerful processors.
The smartphone industry was taken by storm when Google launched its first smartphone after ending the Nexus line of phones. The phone was called the Pixel, a name that signaled a camera-centric smartphone. The Pixel camera scored 89 on DxOMark, a company that benchmarks lenses and cameras. It was the highest-rated smartphone camera at the time, beating competitors like the iPhone 7.
So how did Google achieve this magic using a single lens when most of the competition that year had two or even three cameras? It was all thanks to computational photography, which, in simple terms, refers to the use of software to achieve results that would otherwise require dedicated camera hardware.
While computational photography has existed in DSLRs for years, Google had the edge over its competitors when it came to software. Using the Pixel Visual Core, an image processor co-developed with Intel, Google was able to bake AI into the phone by applying machine learning, which uses algorithms to teach the software to accurately distinguish photos taken across different landscapes, lighting conditions, and subjects.
By knowing the lighting conditions and whether a subject is present in the frame, the software can automatically adjust camera settings such as exposure, focus, blur, color, and white balance to give the best results.
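The idea of a scene classifier driving camera settings can be sketched in a few lines. This is a hypothetical illustration: the scene labels, preset values, and the `auto_settings` function are all invented for the example and are not Google's actual pipeline.

```python
# Hypothetical sketch: a scene classifier's label selects per-scene
# camera presets, with a small adjustment when a face is detected.
# All labels and numbers below are illustrative, not real camera values.

SCENE_PRESETS = {
    "night":    {"exposure_ms": 200, "iso": 1600, "white_balance_k": 3200},
    "daylight": {"exposure_ms": 2,   "iso": 100,  "white_balance_k": 5500},
    "portrait": {"exposure_ms": 8,   "iso": 200,  "white_balance_k": 5000},
}

def auto_settings(scene_label, face_detected=False):
    """Pick camera settings from a (mock) scene classifier output."""
    # Fall back to a daylight preset for unknown scene labels.
    preset = dict(SCENE_PRESETS.get(scene_label, SCENE_PRESETS["daylight"]))
    if face_detected:
        # Bias metering toward the subject when a face is in the frame.
        preset["metering"] = "face-weighted"
    return preset
```

In a real phone, the classifier would be a neural network running on the image signal processor, and the "presets" would be continuous adjustments rather than a lookup table.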
Electronic image stabilization
Image stabilization means exactly that: stabilizing an image or a video shot on a camera. The traditional type is optical, where parts of the camera lens move independently of the smartphone to counteract the shaking of your hands, which would otherwise result in blurry photos.
The other type is electronic image stabilization (EIS), which does the same thing using software and motion sensors such as gyroscopes and accelerometers. EIS is another area where artificial intelligence algorithms shine, giving you sharp photos and smooth videos even when shooting while running or in a moving vehicle.
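A common building block of EIS is separating intended camera motion (panning) from unwanted jitter: the measured per-frame motion is smoothed, and only the residual jitter is cancelled by shifting or cropping each frame. The sketch below, assuming a simple one-axis pixel-shift model and a moving-average smoother, shows the idea; real EIS pipelines use gyroscope data, rolling-shutter correction, and far more sophisticated filters.

```python
import numpy as np

def smooth_trajectory(shifts, window=5):
    """Moving-average estimate of the intended camera motion (pixels)."""
    kernel = np.ones(window) / window
    return np.convolve(shifts, kernel, mode="same")

def stabilize(shifts, window=5):
    """Return the per-frame correction: measured motion minus the
    smoothed (intended) motion, i.e. the jitter to cancel by
    translating/cropping each frame."""
    return np.asarray(shifts, dtype=float) - smooth_trajectory(shifts, window)
```

A steady pan (a linear trend in `shifts`) yields near-zero corrections in the interior, so deliberate motion is preserved while hand shake is removed.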
While with most smartphones you can still get better results with a gimbal, in more recent devices the AI has become good enough that EIS footage is hard to distinguish from video shot on a gimbal.
Merging of frames
Apple and Google have found ways to make their smartphones capture a rapid burst of frames when you press the shutter button and then use AI to merge those frames into a single image.
The Pixel 3, for example, produces its best night shots by merging multiple frames taken with longer exposures and using machine learning to calculate the right color balance, so subjects and objects keep natural-looking colors at night.
Artificial intelligence has arguably had the most significant impact on smartphone photography in recent years, but it is not without flaws. Color adjustments and sharpening can get too aggressive at times, and you may find yourself doing some post-processing to get the colors just right.