Apple is exploring new image sensor technology that could deliver up to 20 stops of dynamic range, surpassing the ARRI ALEXA 35 and approaching that of the human eye. Here’s what that means.
### 1,048,576:1
A recently published patent, titled “Image Sensor With Stacked Pixels Having High Dynamic Range And Low Noise” and first spotted by *Y.M.Cinema Magazine*, reveals Apple’s ambitions for a next-level sensor that rivals the dynamic range of modern professional cinema cameras.
The patent describes a stacked sensor design that aims to achieve 20 stops of dynamic range, meaning the ratio between the brightest and darkest light values that can be recorded at the same time without losing detail. It’s measured in “stops,” with each stop representing a doubling or halving of light.
Thus, a 20-stop dynamic range translates to a contrast ratio of 2^20, or 1,048,576:1, with no highlight or shadow loss in a single image.
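Since each stop doubles the light, the contrast ratio is simply a power of two. Here’s that math as a quick Swift sketch (the function name is ours, purely for illustration):

```swift
/// Each stop doubles the captured light, so a sensor's contrast ratio
/// is simply 2 raised to its number of stops of dynamic range.
func contrastRatio(stops: Int) -> Int {
    1 << stops // same as 2^stops
}

print(contrastRatio(stops: 12)) // 4096 -> 4,096:1 (roughly CineD's iPhone 15 Pro Max figure)
print(contrastRatio(stops: 17)) // 131072 -> 131,072:1 (the ARRI ALEXA 35's advertised 17 stops)
print(contrastRatio(stops: 20)) // 1048576 -> 1,048,576:1 (the figure targeted by Apple's patent)
```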
### A complex measurement
For context, although there is no official dynamic range specification for the iPhone 16 Pro Max sensor, here’s *CineD’s* detailed estimate for the iPhone 15 Pro Max’s 24mm camera, based on three distinct methods: a waveform test (“how many stops can be discerned above the noise floor”), IMATEST (“signal to noise ratio for every stop”), and a latitude test (“the ability of a camera to maintain colors and details when over- or underexposed”).
The findings:
> “The waveform indicates approximately 11 stops above the noise floor. Speaking of which, the noise floor is nearly nonexistent – everything is exceptionally clean, suggesting substantial internal noise reduction (there’s no way to turn this ‘off’).”
And
> At ISO55: “We achieve 12 stops of dynamic range in the iPhone 15 Pro (Max) for a signal to noise ratio (SNR) of 1, and the same 12 stops for a signal to noise ratio of 2. Also for the ‘slope based DR’. This suggests ‘excessive’ noise processing for IMATEST to yield a substantial measurement. It also becomes evident in the lowest diagram where ‘Noise (% of max pixel)’ is illustrated. Noise metrics for the shadow stops are incredibly low.”
> At ISO1200: “IMATEST calculates (higher) 13.4 stops at SNR = 2 and 13.4 stops at SNR = 1.”
> At ISO55: “We obtain 5 stops of exposure latitude (3 above to 2 under). This is, in fact, 2 if not 3 stops lower than the present array of consumer APS-C or full-frame cameras. When compared to the previously mentioned ARRI Alexa Mini LF, it reflects 5 stops less exposure latitude. Moreover, juxtaposed with the Alexa 35, the variance is an even seven stops.”
Meanwhile, most estimates place the instantaneous dynamic range of the human eye at 10 to 14 stops, extending to 20 to 30 stops once the pupil and retina have time to adjust.
### A dual-layer strategy that could create new product categories
While Apple has long relied on Sony for its camera sensors, this patent suggests the company may be developing something even more ambitious in-house, starting at the silicon level.
According to the patent, Apple’s design merges two layers:
– **A sensor die**, which captures light via photodiodes and specialized analog circuitry
– **A logic die**, where the processing occurs, incorporating built-in noise reduction
As noted by *Y.M.Cinema Magazine*, this stacked design isn’t entirely novel in the industry. Sony reportedly employs something comparable. However, Apple’s methodology introduces a few unique elements:
– **First**, it features a mechanism called LOFIC (Lateral Overflow Integration Capacitor), which lets each pixel store light at three distinct charge levels depending on scene brightness.
– **Second**, each pixel gets its own current memory circuit, which measures and cancels thermal noise in real time, eliminating the need for post-processing cleanup (a conceptual sketch of this idea follows below). Notably, Apple pulls this off with a simpler three-transistor (3T) pixel design, as opposed to the more complex, less noise-prone 4T.
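To make that second mechanism more concrete, here’s a toy model in Swift of per-pixel noise cancellation, conceptually similar to the correlated double sampling used in many sensors. To be clear, the patent describes an analog circuit on the die, not software, and every name and value below is illustrative:

```swift
/// Toy model of per-pixel thermal noise cancellation. The patent's current
/// memory circuit does this in analog hardware; this is only a conceptual sketch.
struct StackedPixel {
    let signal: Double        // charge accumulated from incoming light
    let thermalNoise: Double  // random offset present in both readouts

    /// Reset-level read: contains only the noise offset.
    func readReset() -> Double { thermalNoise }

    /// Exposed read: the light signal plus the same noise offset.
    func readExposed() -> Double { signal + thermalNoise }
}

let pixel = StackedPixel(signal: 12.0, thermalNoise: 0.25)

// Storing the reset sample and subtracting it cancels the noise before the
// value ever leaves the die, so no cleanup pass is needed afterwards.
let cleanValue = pixel.readExposed() - pixel.readReset()
print(cleanValue) // 12.0
```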
As this *Reddit* discussion explains, by stacking the sensor on top of a logic chip, Apple effectively gives each pixel its own shutter and denoises the image before it ever leaves the die.
### What implications does this have for products?
Should this sensor make it into a commercial product, it could let Apple outpace not only its smartphone rivals but also professional camera makers like Sony, Canon, and RED in certain key metrics.
Factor in the Neural Engine and the other capabilities enabled by Apple’s tight hardware-software integration, and it wouldn’t be far-fetched to envision entirely new product categories built around a sensor like this.