Wednesday, August 12, 2020

16K UHD – This is why ultra high resolution video is important

Resolutions higher than 8K will come at some point, and they are important. But how we get there might not be as obvious as it seems.


Ultra high resolution video always sparks ‘robust’ debate, usually with a 50/50 split between those who support it and those who don’t. Resolutions above 8K are quantifiably important, but the road to getting there is a tricky one. Will 16K UHD even be practically possible?

The debate over high resolutions is as predictable as the sun rising. The biggest objection raised against them is usually rhetorical. “What advantage do they give? The human eye can’t even see 4K!”

There are a few issues with statements like this. Quantifying the resolution that the human eye sees is a complex business at the best of times, because different people have different visual acuity. A fighter pilot with better than 20/20 vision is clearly going to get more benefit from a high resolution screen than the average Joe on the street.

RED’s Monstro VV. RED has been making 8K cameras since long before it became fashionable. Image: RED Digital Cinema.

Why did we move to HD?

Before we get onto resolutions like 16K UHD, it’s worth rewinding a little to look at why we moved from the standard definition systems that dominated until the mid-2000s to high definition.

High definition had in fact been around for a long time in various analogue forms, the best known of which was NHK’s MUSE system, which most of the world came to know as “Hi-Vision”. But there were earlier variations too, such as the French 819 line system, which was effectively 736i in practice.

But it was the Japanese Hi-Vision system that really tackled the main impetus behind developing true high definition: television was the poor relation to cinema. Screen sizes were limited, and as soon as screens got bigger the structure of the picture became all too obvious. High definition was a way of allowing bigger screens and closer viewing distances, to better replicate a cinematic experience.

With analogue systems the advantages are clear to see, particularly because the visible line structure was a major obstacle to watching on a larger screen. Digital screens, however, brought their own challenges.

An NHK Super High Vision 8K camera is prepared at the BBC for use at the 2012 Olympics. Image: BBC.

Why move beyond HD?

The drive to move beyond HD is not just about selling more televisions. That is most definitely one motivation, but it certainly isn’t the whole story.

It might seem like you can’t perceive much difference between HD and 4K, but with good eyesight you can, even at a distance. I tried this with a colleague using a large 4K screen. We watched from three screen heights away, the standardised optimum viewing distance for getting the best out of HD, and compared the same footage in both HD and 4K.

On casual viewing the differences were not noticeable, but when you really looked you could see fine detail in the 4K image that wasn’t in the HD one. It was an aerial shot of New York City, and in the HD version pedestrians disappeared into a vague mush compared with the 4K version. The important thing was that once we had noticed these differences, we couldn’t unsee them.
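As a rough sanity check on those viewing distances, here is a back-of-the-envelope sketch (assuming 20/20 vision resolves roughly one arcminute of detail, which is only a ballpark figure) of the angle a single pixel subtends at three picture heights:

```python
import math

# Back-of-the-envelope check: what angle does one pixel row subtend at a given
# viewing distance, and is it above or below the ~1 arcminute detail limit
# usually quoted for 20/20 vision? The acuity figure is a rough assumption.

def pixel_angle_arcmin(rows, distance_in_picture_heights):
    """Angle subtended by a single pixel row, in arcminutes."""
    pixel_fraction = 1.0 / rows                        # one pixel as a fraction of picture height
    angle_rad = math.atan(pixel_fraction / distance_in_picture_heights)
    return math.degrees(angle_rad) * 60.0

for label, rows in [("HD (1080 rows)", 1080), ("4K UHD (2160 rows)", 2160), ("8K UHD (4320 rows)", 4320)]:
    a = pixel_angle_arcmin(rows, 3.0)                  # three picture heights away
    verdict = "at or above" if a >= 1.0 else "below"
    print(f"{label}: {a:.2f} arcmin per pixel ({verdict} the ~1 arcmin limit)")
```

At three picture heights an HD pixel sits right on that limit, which is why it became the standard HD viewing distance; 4K and 8K pixels fall below it, so whether the extra detail registers depends on how good your eyes actually are.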

Of course this wasn’t a scientific test, but the human eye perceives resolution in an unconscious way, and higher resolution is not always about visible detail.

Consider very high frequency edges such as a single human hair, or the edge of a bright white roof line against a clear blue sky. These are extremely testing for the resolution of a camera or screen, and it is in these sorts of scenarios that the resolution limits become exposed.

Even 4K has a limit to the ‘fineness’ of line it can reproduce. So in fact one of the big advantages of truly ultra high resolutions is ‘smoothness’: the ability to reproduce the finest, sharpest lines without any aliasing, whilst still retaining fine textures and subtlety of colour. 16K UHD would most certainly achieve this.
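To illustrate what that smoothness means, here is a small toy sketch (my own illustration, not anything from a real camera pipeline) that samples a sharp, slightly tilted edge once on a coarse grid and once through a 4x denser grid that is averaged back down. The denser capture turns hard staircase jumps into graded values, which is the anti-aliasing benefit extra capture resolution buys even when the delivery resolution stays the same:

```python
import numpy as np

# Toy illustration: sample a sharp, slightly tilted edge directly on a coarse
# grid (hard 0/1 jaggies), then on a 4x supersampled grid averaged back down
# (fractional edge values, i.e. a smoother, anti-aliased edge).

def edge_coverage(width, height, slope=0.05, offset=0.5, supersample=1):
    """Fraction of each output pixel lying above the line y = slope*x + offset*height."""
    w, h = width * supersample, height * supersample
    ys, xs = np.mgrid[0:h, 0:w] + 0.5                       # sample at pixel centres
    above = (ys > slope * xs + offset * h).astype(float)
    if supersample > 1:
        # Average each supersample x supersample block back down to one output pixel.
        above = above.reshape(height, supersample, width, supersample).mean(axis=(1, 3))
    return above

direct = edge_coverage(16, 8)                   # values are only 0 or 1: visible steps
smooth = edge_coverage(16, 8, supersample=4)    # graded values along the edge

print("edge values, direct sampling:", np.unique(direct))
print("edge values, 4x oversampled :", np.round(np.unique(smooth), 3))
```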

You might not think you are seeing much extra, but once you get to the sort of resolutions where pixels are truly not visible at all, you will see a quality to the picture that you might not be able to put your finger on, but it will be there.

It’s the same as seeing an image taken with a large format Hasselblad versus your DSLR.

The Hasselblad X1D II 50C. Larger format images, whether stills or video, offer tangible advantages. Image: Hasselblad.

Super UHD is about more than just your TV

But to focus on whether you can see extra detail or not actually misses a much, much wider picture. It’s not all about you.

By this I mean that ultra high definition imagery has many more uses beyond you watching the latest Netflix series or Hollywood blockbuster. Video has a much wider application than that.

In the filmmaking world, ultra high definition cameras will see a lot of use in the VFX industry. More data means more scope for manipulating the image.

360 cameras desperately need much higher resolution chips. Currently most of them max out at around 1080p for the final display output, because what you actually watch from a 360 camera is a crop from a much larger image. 16K capture would make genuinely UHD 360 cameras possible.
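Some rough arithmetic makes the point, assuming an equirectangular capture and a 90 degree viewport (the viewport angle here is purely an illustrative assumption):

```python
# Why 360 cameras are so hungry for resolution: the viewer only ever sees a
# crop of the full sphere, so delivered resolution is a fraction of captured
# resolution. The 90 degree viewport is an assumed figure for illustration.

CAPTURE_WIDTHS = {"4K": 3840, "8K": 7680, "16K": 15360}
VIEWPORT_DEGREES = 90

for label, width in CAPTURE_WIDTHS.items():
    crop_width = width * VIEWPORT_DEGREES // 360
    print(f"{label} equirectangular capture -> ~{crop_width} px across a {VIEWPORT_DEGREES} degree view")
```

An 8K capture delivers roughly HD to the viewer, while a 16K capture is what it takes to put a genuine UHD window in front of them.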

Then there’s live broadcasting. A company called AZilPix has developed an 8K camera system for live broadcast, which allows multiple virtual camera operators to create totally separate angles from a single camera. The quality is exceptionally high, and it saves a huge amount of cost by not requiring hugely complicated multi-camera setups. This is good for broadcasters and opens up live multi-camera streams to lower budget operators as well.

Needless to say, a 16K UHD camera on such a system opens up even more possibilities, and dare I say it, is essential for its advancement, particularly if the eventual final output needs to be 4K or above.
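As a sketch of the general idea (not AZilPix’s actual implementation), it is easy to count how many non-overlapping delivery-resolution crops a single frame could in principle supply:

```python
# Sketch of the virtual-camera idea: how many non-overlapping crops at a given
# delivery resolution fit inside one captured frame? This is the general
# principle only, not any specific vendor's pipeline.

def virtual_crops(frame_w, frame_h, out_w, out_h):
    return (frame_w // out_w) * (frame_h // out_h)

frames = {"8K (7680x4320)": (7680, 4320), "16K (15360x8640)": (15360, 8640)}
outputs = {"1080p": (1920, 1080), "4K UHD": (3840, 2160)}

for frame_label, (fw, fh) in frames.items():
    for out_label, (ow, oh) in outputs.items():
        n = virtual_crops(fw, fh, ow, oh)
        print(f"{frame_label}: up to {n} non-overlapping {out_label} virtual cameras")
```

An 8K frame can only supply a handful of clean 4K crops, which is why a 16K source becomes important as soon as the delivery format moves beyond HD.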

Canon’s EOS R5 will be the first video capable 8K camera of its type and size. Image: Canon.

The limits of physics

Going to these sorts of resolutions isn’t without its challenges. Data bandwidth is clearly one obstacle, but the other is that there are only so many photosites you can cram onto a wafer of silicon before performance drops off significantly.
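To put some rough numbers on that, here is a quick sketch of the photosite pitch a 15360 x 8640 (roughly 133 megapixel) 16K sensor would have at a few nominal sensor widths; the widths are approximate figures used purely for illustration:

```python
# Rough photosite pitch for a 15360 x 8640 (16K, ~133 MP) sensor at a few
# nominal sensor widths. The widths are approximate, illustrative figures.

PIXELS_ACROSS = 15360

SENSOR_WIDTHS_MM = {
    "large smartphone sensor": 9.8,
    "Super 35":                24.9,
    "full frame / VV":         36.0,
    "65mm-style large format": 50.0,
}

for label, width_mm in SENSOR_WIDTHS_MM.items():
    pitch_um = width_mm * 1000.0 / PIXELS_ACROSS
    print(f"{label:24s}: ~{pitch_um:.2f} micron photosite pitch")
```

At Super 35 size the photosites come out at smartphone-like dimensions, which is exactly where the noise and dynamic range trade-offs start to bite.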

Chip manufacturers do somehow manage to keep pulling rabbits out of the hat, such as with the new generation of stacked sensors, one of which has given us a 108MP sensor in a smartphone. Surely putting 16K UHD onto a larger sensor should be easy in that case?

The problem is that, in order to maintain a high end level of quality, the sensor itself needs to become a lot larger, which then translates into much larger lenses, and so on. So what’s the solution?

The fact is that the move to truly high resolutions may not come from the chip manufacturers. There are two other ways it could be done.

The first is computational video. We mentioned in a previous article how chiplets are helping to advance the speed and power of processors by splitting a processor up into smaller ‘chiplets’. The same principle can be applied to video and photography.

Your latest smartphone uses computational photography all the time to bring you visual results that really shouldn’t be possible with such a small lens and sensor. By combining several smaller sensors and lenses it’s possible to computationally produce a higher resolution image.

Currently there is a smartphone app that can create massively high resolution images by analysing the very subtle movement in a handheld shot to calculate the extra detail. Some of the latest mirrorless stills cameras have a function that does this by a different method: instead of relying on a handheld shot, the camera uses the in-body image stabilisation system to micro-shift the sensor and calculate a higher resolution image.

These methods rely on the subject staying perfectly still. But as computation gets faster, the possibilities for gaining higher resolutions through computational methods increase.
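Here is a minimal shift-and-add sketch of the multi-frame idea (a toy model, not any camera maker’s actual pipeline), in which four low resolution frames, each offset by half a pixel, are interleaved back onto a denser grid:

```python
import numpy as np

# Toy shift-and-add super-resolution: four low-res frames, each sampled with a
# known half-pixel offset, are interleaved back onto a grid with twice the
# sampling density. A real pipeline must estimate the shifts and handle motion.

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                     # stand-in for the fine-detail scene

# Capture: each frame samples every second scene point, offset by 0 or 1 sample.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = {off: scene[off[0]::2, off[1]::2] for off in offsets}   # four 32x32 frames

# Reconstruction: interleave the frames back onto the dense grid.
recon = np.zeros_like(scene)
for (dy, dx), frame in frames.items():
    recon[dy::2, dx::2] = frame

# Baseline: nearest-neighbour upscale of a single low-res frame.
naive = np.repeat(np.repeat(frames[(0, 0)], 2, axis=0), 2, axis=1)

print("RMS error, single-frame upscale :", np.sqrt(np.mean((naive - scene) ** 2)))
print("RMS error, four-frame shift-add :", np.sqrt(np.mean((recon - scene) ** 2)))
```

In this toy case the reconstruction is exact because the offsets are known and nothing moves between frames; in a real camera the shifts have to be estimated, or produced mechanically by the stabilisation system, and any movement in the scene breaks the model.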

Virtual high resolution

Then there’s virtual high resolution using AI predictive methods. These methods are already used now to help create enlarged still images with varying degrees of success. The results are getting better over time, but they rely on the AI system recognising objects and creating detail that wasn’t present in the original image.

This can be problematic if you are starting with a very low resolution image. But the problems become much easier to deal with if you feed such a system a much higher resolution image.
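As a very rough sketch of what such a workflow can look like, OpenCV’s contrib dnn_superres module wraps several pretrained super resolution networks. The model file name and the input path below are assumptions: the EDSR weights have to be downloaded separately, and any of the other supported models would be used in the same way:

```python
import cv2

# Sketch of a learned (predictive) upscale versus plain bicubic interpolation
# using OpenCV's dnn_superres module. "EDSR_x4.pb" and "frame_4k.png" are
# assumed local files: the pretrained weights must be downloaded separately.

image = cv2.imread("frame_4k.png")

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")                     # assumed path to the pretrained model
sr.setModel("edsr", 4)                         # model name and upscale factor

predicted = sr.upsample(image)                 # learned upscale that synthesises detail
bicubic = cv2.resize(image, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("upscaled_predicted.png", predicted)
cv2.imwrite("upscaled_bicubic.png", bicubic)
```

The quality of the predicted result depends heavily on how much genuine detail the input already contains, which is the point made above: the better the source, the better the guesswork.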

In fact some televisions are already starting to use AI predictive methods in their upscaling, such as Samsung’s system. TechRadar published a good breakdown of the challenges of upscaling, as well as Samsung’s approach, and it shows some real world examples of the differences between the 4K original image and the upscaled 8K version, so it’s worth reading to see how stark the differences are.

Summary

Artificially created high resolution will fall short when it comes to applications like legal deposition video or surveying, so it isn’t really a substitute for genuine capture. However, the important thing to take away from any discussion about resolutions above 4K or 8K is that the advantages of, and the need for, them go well beyond the idea of manufacturers wanting you to buy a new TV!
