Google is a company that has been influential in the journey of the smartphone. After acquiring Android in 2005, the company led the way in software and licensed its software out to a plethora of devices. Today, almost every phone, with the exception of the iPhone and Huawei/Honor devices, runs on Android, usually accentuated with a custom skin of the manufacturer’s own.

However, if you are into tech or know this community well enough, you might also know that the brand has its own line of smartphones, named the Pixel. The Pixel is meant to showcase what Google thinks Android should be, and it is in many ways a showpiece of the company’s latest advancements in both software and hardware.

While the software side has been spotless, the hardware seems to be going through an ageing problem. Again, if you are familiar with Google and the Pixel, you may ask: “But Shloke, isn’t the Pixel the best point-and-shoot smartphone camera?” Well, yes, but the reason for writing this article is that while Google is making incremental updates to its cameras, competitors such as Apple, Samsung and even Huawei are making gigantic leaps, and some of them are within arm’s reach of the Pixel.

Before we get into why the Pixel needs an upgrade, we ought to discuss the Pixel way of taking images and the software behind it, since, in the case of the Pixel, the software plays a monumental role, in contrast to the hardware.

How Does a Pixel Camera Work?

Pixel 4a 5G and Pixel 5

Google relies mostly on software for the magic behind its cameras. It pulls different kinds of software trickery in different scenarios, so why not list out some of them.

The most basic feature in Google’s playbook is known as computational photography. While going in-depth would result in a podcast-length script, we do need to discuss what computational photography is.

To put it briefly, computational photography is digital processing used to get more out of your camera hardware. A common example is improving the colour and lighting of an image during processing to pull more detail out of dark areas or under-lit scenes.
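To make that concrete, here is a toy sketch in Python (using NumPy). It is purely illustrative and not Google’s actual pipeline; the function name and the gamma value are my own. It shows the simplest form of the idea: lifting shadows in software to recover detail from an under-lit capture.

```python
import numpy as np

def lift_shadows(image, gamma=0.6):
    """Toy computational-photography step: brighten dark regions.

    image: float array with values in [0, 1], shape (H, W, 3).
    A gamma below 1 lifts dark values far more than bright ones,
    pulling detail out of under-lit areas without blowing highlights.
    """
    image = np.clip(image, 0.0, 1.0)
    return np.power(image, gamma)

# A very dark pixel (0.05) is lifted to roughly 0.17,
# while a bright pixel (0.9) barely moves (about 0.94).
dark_scene = np.array([[[0.05, 0.05, 0.05], [0.9, 0.9, 0.9]]])
print(lift_shadows(dark_scene))
```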

Computational photography plays a major role in the Pixel, considering how most brands have moved on to larger sensors with more megapixels, while Google has stuck to its tried-and-tested 12.2-megapixel Sony sensor. The smaller the sensor, the more computational photography is required for better results.

Another way Google brings in better detail is through a mode it has dubbed HDR+. Instead of combining shots taken at dark, ordinary and bright exposures, as traditional HDR does, it captures a larger number of dark, underexposed frames. Merging these allows it to build up the correct exposure, and a common result of using HDR+ is that skies appear blue instead of glowing or looking washed out.

Night Sight is another of Google’s most famous tricks, and it uses the same underlying technology as HDR+: picking a steady main image and layering multiple frames on top of it to build a single bright, well-exposed picture.
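To illustrate the frame-stacking idea behind HDR+ and Night Sight, here is a rough, hypothetical Python/NumPy sketch. It is nothing like Google’s real pipeline, which aligns frames and merges raw sensor data among many other things; it only shows why averaging several underexposed frames lets you brighten the result without amplifying noise.

```python
import numpy as np

def merge_underexposed_frames(frames, gain=4.0):
    """Toy version of the HDR+/Night Sight frame-stacking idea.

    frames: list of float arrays in [0, 1], all deliberately underexposed
            so that highlights such as the sky are not blown out.
    Averaging the stack suppresses random sensor noise (roughly by
    1/sqrt(N)), which then lets us apply a large digital gain without
    the result turning into a grainy mess.
    """
    stack = np.stack(frames, axis=0)   # shape: (N, H, W, 3)
    averaged = stack.mean(axis=0)
    return np.clip(averaged * gain, 0.0, 1.0)

# Simulate eight noisy, underexposed captures of the same dark scene.
rng = np.random.default_rng(0)
true_scene = np.full((4, 4, 3), 0.1)
frames = [true_scene + rng.normal(0.0, 0.03, true_scene.shape) for _ in range(8)]
merged = merge_underexposed_frames(frames)
print(merged.mean())   # close to 0.4, and much cleaner than boosting a single frame
```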

The final piece that makes Google’s cameras good is something that Google calls the Pixel Neural Core.

What is the Pixel Neural Core?

The Neural Core takes a different approach to processing. Instead of using a traditional CPU to handle these computational tasks, Google offloads them to a dedicated machine learning core.

It resembles DSPs (Digital Signal Processors) and GPUs in terms of how it is used, but in this case the chip is optimised for the specific complex mathematical operations used by machine learning algorithms.

Additionally, the Neural Core builds dedicated arithmetic logic units, or ALUs, into hardware. These handle certain instructions quickly and with reduced power usage, instead of placing the load on the CPU, where the same work would take up multiple CPU cycles. The chip comprises hundreds of ALUs across multiple cores, with shared local memory and a microprocessor that oversees task scheduling.
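As a loose analogy, and not the Neural Core’s actual programming model, the Python sketch below contrasts doing a million multiply-accumulates one element at a time (the general-purpose CPU way, one instruction per step) with dispatching the same work as a single wide operation (closer in spirit to handing it to hundreds of ALUs at once).

```python
import time
import numpy as np

# Loose analogy for why dedicated ALUs help: the same multiply-accumulate
# workload done one element at a time (a general-purpose CPU burning
# cycles per step) versus dispatched as one wide, parallel operation
# (hundreds of ALUs chewing through the data at once).
weights = np.random.rand(1_000_000)
pixels = np.random.rand(1_000_000)

start = time.perf_counter()
acc = 0.0
for w, p in zip(weights, pixels):          # element-by-element path
    acc += w * p
loop_time = time.perf_counter() - start

start = time.perf_counter()
acc_wide = float(np.dot(weights, pixels))  # one wide, vectorised operation
wide_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  wide op: {wide_time:.5f}s  "
      f"same answer: {np.isclose(acc, acc_wide)}")
```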

Having discussed what the Neural Core is and how it works, a question might arise in your mind: “What is the use case of the Neural Core?” Well, if you were wondering about this, worry not, as we will be discussing exactly that.

On the photography side, the Neural Core helps with the dual exposure controls, the new astrophotography mode, live HDR+ previews and Night Sight. These features are far easier to use, and taking such shots takes far less time, thanks to the Neural Core.

In addition to this, white balance in certain situations and multiple exposure controls are handled better because of the Neural Core.

Having read all this, you might be thinking about how good the Neural Core is, and yes, it certainly does help out a lot. However, Google removed it with the Pixel 5 and 4a series, meaning that, as of now, no new Pixel sports a Neural Core or any similar feature. While that is not immediately troubling, it does not bode well for the brand.

How is the Competition Doing?

Pixel 4a 5G, Samsung Note 20 Ultra and iPhone 12 Pro Max

One of the most important reasons this article exists is the current state of the competition. Three years ago, around the launch of the iPhone X and the Samsung Galaxy S8, the Pixel 2 XL was one of their main competitors. The device had its fair share of issues, which subjected it to criticism and scepticism. All those issues aside, the Pixel 2 and 2 XL had one trump card that beat the competition by a huge margin: the camera. In 2017, Samsung was doing adequately well, while the iPhone X had focus issues and some other camera problems, which did not go down well with reviewers. Video capabilities were still better on the iPhone, but, in terms of regular pictures, the Pixel took the cake.

Three years down the line, the competition has caught up. The iPhone, with Deep Fusion, Night Mode and ProRAW, can now compete with the Pixel and even beat it in some instances. Samsung has taken a different approach, with larger sensors, which bode well for clarity but hamper focusing. That has its own implications, but, as a whole, Samsung too is within arm’s reach of the Pixel, and its Night Mode performs well.

This is why I am writing this article. Three years ago, the Pixel could afford to make mistakes, but in today’s day and age it needs to tread carefully: Apple has upped the ante in photographic capability with the iPhone 12 Pro Max, and Samsung delivers amazing clarity and detail thanks to its 108-megapixel sensors. Video capabilities are better on both competitors, and video is one of the Pixel’s most well-known shortcomings.

What can Google Do to fix this?

Google Pixel

Google once mentioned that it does not upgrade from the 12.2-megapixel primary Sony sensor because it is well optimised with its software. While this is not wrong, after more than three years on the same sensor the ageing shows, and one can easily make out the difference between a modern sensor and Google’s these days.

Sticking to something tried and tested is not wrong, but Google seems to be playing it too safe. While the company does not need to make major, radical changes like Samsung, trying out a new sensor is not the worst idea. It is well known that more megapixels do not automatically result in better pictures, but the added resolution can be quite helpful in certain scenarios.

Some comparisons against other mid-range competitors have shown that while the Pixel 4a, which uses the same sensor as the Pixel 2, 3, 4 and 5, can output amazing pictures, its clarity and dynamic range leave something to be desired. Samsung’s mid-rangers, in this regard, are capable of doing really well.

While the Pixel is certainly still on top when it comes to taking photographs, the lead it holds over the competition is very slim, and if the company does not make changes soon, one of its rivals might cross that thin line and become the best camera smartphone.