Camera sensor design has always been a balancing act between resolution and performance. Yet today, sensors continue to improve regardless. How and why is this, and can it continue?
It’s probably been twenty years since manufacturers started playing the numbers game with cameras. Back around 2000, when high-definition video cameras began replacing 35mm film, we knew more resolution was required, and the scramble to deliver it has led all the way up to today’s 8K cameras. All that’s great, but most people are aware that we can’t just keep packing more pixels onto a sensor without sacrificing something else.
It’s a simple enough problem. Put more pixels (well, photosites) on a camera sensor and they have to be smaller, so each one sees fewer photons. That means less sensitivity or, if we crank up the gain to compensate, more noise. It also means poorer dynamic range, as smaller photosites hold fewer electrons. We can make the sensor bigger, which is a popular response to the problem because everyone knows bigger is better, but any focus puller will tell you that comes with downsides of its own: for equivalent framing and aperture, a bigger sensor means shallower depth of field and harder focus pulling.
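The photon side of this is easy to put numbers on. A photosite collects photons in proportion to its area, and photon arrival is a Poisson process, so shot-noise SNR is the square root of the photon count. A minimal sketch, with an illustrative (not real-world) exposure figure:

```python
import math

def photons_collected(pitch_um, photons_per_um2):
    """Mean photons a square photosite of the given pitch collects."""
    return photons_per_um2 * pitch_um ** 2

def shot_noise_snr_db(n_photons):
    """Photon arrival is Poisson, so SNR = N / sqrt(N) = sqrt(N)."""
    return 20 * math.log10(math.sqrt(n_photons))

# Same exposure, two pixel pitches: halving the pitch quarters the area.
exposure = 100  # photons per square micron; an illustrative figure
big = photons_collected(6.0, exposure)    # 3600 photons
small = photons_collected(3.0, exposure)  # 900 photons
print(shot_noise_snr_db(big) - shot_noise_snr_db(small))  # ~6 dB worse
```

Halving the pitch quarters the photon count, which costs about 6 dB of shot-noise SNR before any electronics are even considered.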
So, all else being equal, a higher resolution sensor is a worse sensor. The push, then, is to stop all else being equal: to improve the underlying performance of the sensor so that we can get away with more photosites per inch. Doing that requires fundamental advances in how sensors are made, and that’s not off-the-shelf tech.
Some approaches which were once new are now common. A back-illuminated sensor sounds strange, but it makes more sense when we realise that the “front” of a conventional sensor is largely covered in wiring and other electronics. That layer was thin, very thin, so light could get through to the photosites, though much was lost on the way. In a back-illuminated design the chip is effectively flipped, with the photodiodes facing the light, the wiring behind them, and the silicon substrate thinned down until it is almost transparent. That innovation, which we should probably credit mainly to Sony, bought us perhaps two-thirds of a stop.
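Two-thirds of a stop is easy to convert into a linear figure, since a stop is a doubling of light:

```python
# A "stop" is a doubling of light, so a gain of two-thirds of a stop
# converts to a linear factor of 2 raised to that power.
stops_gained = 2 / 3
light_factor = 2 ** stops_gained
print(f"{light_factor:.2f}x the light")  # roughly 1.59x, i.e. ~59% more
```

In other words, back-illumination delivers roughly 59% more light to the photodiodes for the same exposure.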
One conceptually simple idea is to improve fill factor, the proportion of the face of the sensor that’s actually made up of photosites. It sounds obvious, but the reality is that CMOS sensors, as opposed to CCD sensors, mainly exist so that extra electronics can be integrated into the sensor itself. Things that existed as support electronics in CCD cameras are built into the sensor in modern ones. That’s great; it’s hugely convenient, but in basic designs it means that some of the front of the sensor is taken up with – well – things that aren’t photosites. Making microchips with smaller and smaller features is a big part of making faster parts for computers, and making the support electronics smaller has likewise allowed for improved fill factor.
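The geometry behind this is worth a quick sketch. Assume, purely for illustration, a square photosite whose support electronics occupy a fixed-width strip along two edges; shrinking that strip raises the light-sensitive fraction sharply:

```python
def fill_factor(pitch_um, dead_border_um):
    """Fraction of a square photosite that is light-sensitive, assuming a
    fixed-width strip of support electronics along two edges (a simplified,
    hypothetical geometry, not any real sensor layout)."""
    active = (pitch_um - dead_border_um) ** 2
    return active / pitch_um ** 2

# Shrinking the electronics strip on a 5 um pitch from 2 um to 1 um:
print(fill_factor(5.0, 2.0))  # 0.36
print(fill_factor(5.0, 1.0))  # 0.64
```

Because the dead area is a fixed overhead, it hurts small photosites proportionally more, which is exactly why finer fabrication processes matter as pixel counts climb.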
There is another trick that’s improved fill factor, as well as a few other things: layering. Building sensors in layers means that at least some of the support electronics can go behind the light sensitive parts. Until fairly recently, it was always necessary to have some electronics on the front, though, which wasn’t quite ideal. Modern sensors use photodiodes to detect light, but the best manufacturing processes for photodiodes can’t be used to make the other electronics that exist on a sensor, things like the signal amplifiers and analogue-to-digital converters. That means that for a long time, the design was a compromise: a process that could make reasonable photodiodes and reasonable support electronics.
Ideally, what we’d like to do is completely separate the layers, so we can have excellent photodiodes, excellent support electronics, and better fill factor. Problem is, that requires one connection between the layers for every single photosite on the sensor, which is a lot of connections. That’s a capability that’s only just being worked out, but it promises much.
And it matters, because it’s not a great idea to try to make 8K Super35 sensors if we want them to be quiet. It’s also not a great idea to make all cameras into monsters with 8-perf full frame sensors, because the lenses become impossibly huge and expensive, or else small, cheap and slower, which puts us back where we started: winding up the gain to compensate. In the long term, we’re reliant on fundamental advances to stop the whole thing becoming a circular problem.