Are Hidden Rear Cameras a Plus?

Introduction

As January rolled around, all eyes were on CES, an event that many a consumer electronics company has used to introduce its latest and greatest to the world.

Here, products are lauded for a number of reasons, including:

  • Taking recent innovations mainstream
  • Making technology financially accessible
  • Using technology in an innovative manner
  • Reaching new levels of performance
  • Making existing products smart

On the third of January, days before the actual show, an article by WIRED caught my eye: OnePlus was about to debut a phone with a disappearing rear camera at the show, and news outlets all over the world were picking up on it.

WIRED’s article on OnePlus’ innovation (WIRED)

As a photography enthusiast who happens to know a little about electrochromic glass, I found this decision strange. Phones these days are generally competent enough to run their respective OSes without getting in the way of the user experience, so imaging prowess has risen to the top of the priority list to feed our voracious appetite for consuming media. With camera manufacturers increasing aperture sizes in the battle to welcome every available photon, it made no sense to me that OnePlus, a company with roots in affordable, high-performance devices, would compromise imaging performance for what I guessed was aesthetic appeal.

A Primer on Imaging

For the unacquainted, megapixel count is far from the only specification that matters when it comes to the theoretical performance of an imaging system.

The Exposure Triangle (Petapixel)

While contentious, the Exposure Triangle gives a good introduction to the three main controllable parameters that determine the exposure (how bright the image is). Of the three, shutter speed is perhaps the easiest to make sense of. By changing how long the sensor is exposed to the scene, the photographer varies the total amount of light that reaches it. However, long exposures do not play well with fast-moving subjects, as the scene changes while the sensor is being exposed, resulting in motion blur. One way of shortening the exposure while preserving the brightness of the image would be to increase the ISO. A higher ISO raises the gain, or sensitivity, of the sensor. Multiplying the intensity (brightness) of every pixel may sound like a great idea until one realises that the noise, or unwanted signal, in the image is amplified along with it. Noise, or grain, is introduced by environmental factors like heat, and is made worse by raising the ISO. That leaves us with aperture.
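
To make the relationship concrete, here is a minimal sketch of the usual exposure arithmetic, assuming captured light scales linearly with exposure time, inversely with the square of the f-number, and that ISO simply scales the recorded signal. The function names are my own, for illustration only.

```python
import math

def relative_exposure(shutter_s, iso, f_number):
    """Relative brightness recorded, up to a constant factor: light gathered
    scales with exposure time and inversely with the square of the f-number,
    and ISO then scales the recorded signal (noise included)."""
    return shutter_s * (iso / 100) / f_number ** 2

def stops_between(a, b):
    """Difference between two settings, in photographic stops (powers of two)."""
    return math.log2(relative_exposure(*a) / relative_exposure(*b))

# Halving the exposure time while doubling the ISO keeps the image just as
# bright (0 stops difference); the price is paid in noise, not darkness.
base   = (1/30, 100, 1.8)   # (shutter in seconds, ISO, f-number)
faster = (1/60, 200, 1.8)
print(stops_between(faster, base))  # -> 0.0
```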

Pinhole Model (Wikiwand)

As some of us might remember from physics classes in primary school, the basic principle of a camera involves light rays passing through a small opening, or aperture, before being projected upside down onto the image sensor (although this experiment was likely done with a box in school). For the image to be entirely in focus, the pinhole would have to have a diameter approaching zero. However, this limits the amount of light that eventually reaches the sensor. To let more light through, a larger aperture can be used, with the tradeoff that more of the image will appear out of focus. This defocus is widely known as background blur, and its aesthetic quality is what photographers call bokeh.
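
The focus side of that tradeoff can be sketched with a standard depth-of-field approximation; the 50 mm lens, 2 m subject distance, and 0.03 mm circle of confusion below are generic illustrative assumptions, not figures for any particular phone camera.

```python
def depth_of_field_m(f_number, focal_length_mm, subject_dist_m, coc_mm=0.03):
    """Approximate total depth of field in metres, using the close-focus
    approximation DoF ~= 2 * N * c * s^2 / f^2, which holds when the subject
    is much nearer than the hyperfocal distance. coc_mm is the circle of
    confusion, defaulting to the common full-frame convention of 0.03 mm."""
    s_mm = subject_dist_m * 1000
    dof_mm = 2 * f_number * coc_mm * s_mm ** 2 / focal_length_mm ** 2
    return dof_mm / 1000

# Opening up from f/2.8 to f/1.5 on a 50 mm lens focused at 2 m roughly
# halves the zone that appears acceptably sharp.
print(round(depth_of_field_m(2.8, 50, 2), 2))  # ~0.27 m
print(round(depth_of_field_m(1.5, 50, 2), 2))  # ~0.14 m
```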

Mobile Imaging

Over the years, imaging tech on mobile phones has improved greatly. The megapixel war has resulted in Xiaomi releasing a phone with a 108-megapixel camera. However, one key measure of performance in a modern smartphone camera is its ability to take photos in low-light conditions, which is a difficult task, as the primer above should make clear. To do so, manufacturers duke it out on two fronts: increasing aperture sizes and performing algorithmic enhancements (like Google's Night Sight).
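
The exact Night Sight pipeline is proprietary (it also aligns and merges frames intelligently), but the statistical idea behind burst-based night modes can be shown with a toy example: averaging N frames of a static scene reduces random noise by roughly a factor of √N. The scene and noise figures below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.2)   # a dim, flat grey scene
sigma = 0.05                       # per-frame noise level, made up for illustration

def noisy_frame():
    return scene + rng.normal(0, sigma, scene.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(np.std(single - scene))   # ~0.05
print(np.std(stacked - scene))  # ~0.0125, i.e. 16 frames cut noise by about 4x
```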

Focusing on hardware, we can see how aperture sizes have increased significantly, with the f/2.8 aperture of the HTC-built Nexus One (2010) eclipsed by today's offerings, which go as wide as the f/1.5 found in the camera of Samsung's Galaxy S10 Plus. With all other factors held constant, that translates to an image roughly 3.5 times as bright, close to two full stops.
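
For anyone who wants to check that figure, a quick sketch of the arithmetic (light gathered scales with aperture area, i.e. inversely with the square of the f-number):

```python
import math

def light_ratio(n_old, n_new):
    """How much more light an f/n_new aperture admits than an f/n_old one,
    all else held constant: light scales with aperture area, which goes as
    the inverse square of the f-number."""
    return (n_old / n_new) ** 2

ratio = light_ratio(2.8, 1.5)
print(round(ratio, 2))             # 3.48, roughly 3.5x the light
print(round(math.log2(ratio), 2))  # 1.8, close to two stops
```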

Electrochromic Glass

To anyone who has meddled with electrochromic glass, this decision by OnePlus would come off as strange. Apart from the latency when switching between transparent and opaque states, the glass never becomes fully clear either. My encounters with electrochromic glass in real-world projects do not give me any more confidence.

Electrochromic Glass on Singapore’s LRT System

On its debut, the electrochromic glass on Singapore's LRT system seemed like an ingenious idea. Instead of installing hideous privacy screens wherever the tracks were deemed too close to housing, the glass in the carriages could simply switch between transparent and opaque at the flick of a switch. Switching time was not critical, and full transparency or opacity was not a requirement. However, it became evident as time wore on that this solution was difficult to maintain. The failure rate was high: panels first lost the ability to change states fully, then became entirely unresponsive.

Change of Transmittance of Electrochromic Glass

A quick search yielded this graph, which confirms my initial suspicion that electrochromic glass may very well remain a novelty when it comes to applications that demand maximum optical performance. For one, switching times are far from instantaneous. Even with the switching time of under one second claimed by OnePlus (we will have to check the fine print on that claim), it could still prove a sticking point for users accustomed to the increasingly snappy camera launch times on today's smartphones. After all, those fractions of a second can be the difference between making and missing a shot.

Edmund Optics Glass Transmission

Even after the glass clears up, we are left with other issues. A maximum transmittance of under 80% is atrocious: as the chart shows, even the worst optical glass comfortably transmits over 95% of light, with some types getting close to 100%. All the work that has gone into maximising aperture sizes would be for nought with a piece of glass in front of the sensor taking away more than 20% of the incoming light.
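
To put that loss in photographic terms, here is a rough sketch; the 78% figure is my own eyeballed reading of the graph rather than a measured value, and f/1.5 is borrowed from earlier purely for illustration.

```python
import math

def stops_lost(transmittance):
    """Light lost to a layer in front of the lens, in photographic stops."""
    return math.log2(1 / transmittance)

def effective_f_number(f_number, transmittance):
    """T-stop-style equivalent: the f-number that would gather the same
    amount of light through perfectly clear glass."""
    return f_number / math.sqrt(transmittance)

# ~78% peak transmittance is a rough reading of the graph, not a measured
# figure, and f/1.5 is used purely as an illustrative lens.
print(round(stops_lost(0.78), 2))               # ~0.36 stops lost
print(round(effective_f_number(1.5, 0.78), 2))  # f/1.5 behaves more like f/1.7
```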

Stranger still is how the transmittance of electrochromic glass varies across the visible spectrum. If this graph is anything to go by, reds and blues, the colours at opposite ends of the visible spectrum, will come out considerably darker than everything in between. The effect might not be on full display to the consumer, since some degree of software correction and colour science could keep images from looking strangely tinted, yet such corrections can only be pushed so far before we run into problems similar to those we face when bumping up the ISO.
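
To see why software correction has its limits, here is a toy model with entirely hypothetical per-channel transmittance numbers: boosting the dimmer red and blue channels back to neutral also boosts their noise, which is the same penalty we pay when raising the ISO.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-channel transmittance of the dimming layer (R, G, B),
# loosely shaped like a curve that sags at the red and blue ends.
transmittance = np.array([0.65, 0.78, 0.68])

# A flat grey patch as recorded behind the glass, with a little sensor noise.
true_grey = 0.5
patch = true_grey * transmittance + rng.normal(0, 0.01, size=(10000, 3))

# Software "colour correction": rescale each channel so grey reads as grey.
gains = 1 / transmittance
corrected = patch * gains

print(corrected.mean(axis=0))  # ~[0.5, 0.5, 0.5]: the colour cast is gone
print(corrected.std(axis=0))   # but noise in R and B is amplified by their gains
```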

OnePlus Concept One

A look at the OnePlus Concept One

As the video shows, the transition time between dark and clear looks significant. However, it would be difficult to assess the other potential problems without a physical unit of the phone and a close look at the photos it produces.

WIRED also points out issues such as cost and longevity that could plague the device given the current state of the technology. On the design and UX front, the disappearing camera could pose problems of its own, as Gadi Amit highlights in the article. Certainly, this disappearing design will not sit well with the Bauhaus school.

Conclusion

Now that we are nearing the end of a spec war (we are really not going to be able to display a 100 MP image on our smartphone screens, or to perceive that much resolution even if the technology were available), companies are looking for novel features to put their products in the limelight. While some truly improve functionality, others seem to be using technology for its own sake.

For now, one can only hope that this implementation spurs progress in electrochromic glass, so that it becomes less of a novelty and lends itself to more functional applications, such as in architecture.