So this year at NAB, we continue to see more 4K acquisition tools being developed, but nearly all content is distributed and viewed by its audience at 1080p or below, and will be for the foreseeable future. So do we really need 4K, if we aren’t finishing to a 4K DI? Having 4K resolution available offers some interesting workflow options, but people need to avoid getting caught up in the output resolution alone. Almost all 4K cameras are single-sensor devices, with a Bayer pattern to differentiate colors. While a good demosaicing algorithm can interpolate detail at full resolution, it will never reach the level of a three-chip camera. Instead, if you treat each 2×2 block of photosites as a single pixel, you only end up with half the resolution on paper, but you get the best of both worlds: a camera with a single large sensor, which better simulates the optical response of traditional film, and dedicated sensor locations for each color. This is the principle behind the Canon C300 having a QuadHD sensor but only recording full-color 1080p. A more ideal solution is to record the RAW single-channel QuadHD or 4K and treat it as half that resolution, like the “1/2Res-Fast” decoding option in Red’s software. There are ways to encode that data over 3G-SDI that use the alpha channel space to carry the extra green data, effectively giving you 8:4:4 color in a sense, measured from that half-res perspective. Red has enabled this workflow for a while, but Canon’s new tools force this route. While it happens invisibly to the user, understanding the underlying ideas allows users to better leverage these capabilities.
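The 2×2-block idea above can be sketched in a few lines of NumPy. This is an illustrative toy, not Red’s or Canon’s actual decode pipeline, and it assumes a hypothetical RGGB photosite layout:

```python
import numpy as np

# Hypothetical 4K (4096x2160) Bayer mosaic: one brightness value per
# photosite. Assumed layout for each 2x2 block:  R G
#                                                G B
mosaic = np.random.rand(2160, 4096).astype(np.float32)

# Treat each 2x2 block as a single full-color pixel (half resolution
# in each dimension) instead of demosaicing to full resolution.
r  = mosaic[0::2, 0::2]   # red photosites
g1 = mosaic[0::2, 1::2]   # first green photosite
g2 = mosaic[1::2, 0::2]   # second green photosite
b  = mosaic[1::2, 1::2]   # blue photosites

# Average the two greens (this second green sample is the "extra"
# data an alpha channel could carry over 3G-SDI).
half_res = np.stack([r, (g1 + g2) / 2, b], axis=-1)

print(half_res.shape)  # (1080, 2048, 3): true RGB at half resolution
```

No interpolation happens here at all, which is why this kind of half-resolution decode is both fast and free of demosaicing artifacts.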
Now the other major divide in 4K production is 4096 True4K versus 3840 QuadHD. A display will have one native resolution or the other, but a True4K display can show QuadHD pixel-for-pixel with pillar-boxing. A QuadHD display can show True4K pixel-for-pixel if it is center-cut, ignoring the 128 pixels on each side, which seems less than ideal at first glance. But if you shoot with that in mind, the extra pixels just give you framing flexibility. 99% of deliverables are 1080p or below, even on productions that are currently shot at 4K. So if you frame your 4K for a QuadHD center-cut, and then treat your resulting RAW footage as 2K at 8:4:4, you get a high-quality image with lower demands on your system for native playback. That gives you the freedom to make minor framing adjustments without a full demosaic, and if you want to zoom in past that point for certain shots, demosaicing the RAW 4K gives you twice that level of zoom before you start to lose resolution. Shooting QuadHD, your options are similar, if you treat the image as 1080p at 8:4:4 and only demosaic the shots you need to reframe.
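The arithmetic behind the center-cut and half-res numbers is simple enough to sanity-check:

```python
# Center-cut and half-resolution arithmetic for the two 4K flavors
# (pure arithmetic, no camera-specific assumptions).
true4k = (4096, 2160)   # "True4K" / DCI 4K
quadhd = (3840, 2160)   # QuadHD / UHD

# A QuadHD display shows True4K pixel-for-pixel by center-cutting,
# ignoring this many pixels on each side:
side_crop = (true4k[0] - quadhd[0]) // 2
print(side_crop)                         # 128

# Treating the RAW as half resolution in each dimension gives:
print(true4k[0] // 2, true4k[1] // 2)    # 2048 1080 -> "2K" at 8:4:4
print(quadhd[0] // 2, quadhd[1] // 2)    # 1920 1080 -> 1080p at 8:4:4
```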
If your output deliverable is 1080p, there is very little practical benefit to monitoring 4K in post, although being able to shoot and process at 4K gives you more options, and possibly better quality. The increase in quality from processing at 4K only takes effect if you are zooming in on the image or doing VFX. In almost every other case it makes no difference, so there seems to be no advantage to processing the full 4K instead of a half-resolution decode. Back when all of this was “burned in” to the file during a transcode prior to editing, this approach would have thrown away all of the flexibility offered by shooting 4K. Now that editing systems can decode 4K RAW on the fly (especially at 1/2Res), you still maintain the flexibility of having the 4K data available if needed.
So what does all of this mean from a practical perspective? If you are delivering 1080p for the foreseeable future, but want to take advantage of the new 4K shooting options, here are some ideas. If you are able to record 4K RAW, shoot with a wider lens than necessary, framing for a QuadHD center-cut if you can. Import your footage and edit in a 1080p timeline, but do not scale to frame size; instead scale to exactly 50%. If possible, set your decode resolution to 1/2. Use any extra image outside the 1080p frame to tweak the exact framing of your shot. One issue with this workflow, at least in the current implementation of Premiere, is that setting your 4K footage to 1/2 resolution decode also affects any native 1080p footage, significantly lowering its quality.
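As a rough sketch of the reframing slack this setup leaves, assuming a 4096-wide DCI 4K source scaled to exactly 50% in a 1920×1080 timeline:

```python
# How much reframing slack a 50% scale of 4K RAW leaves in a 1080p
# timeline (simple arithmetic, assuming a 4096x2160 source).
source   = (4096, 2160)
timeline = (1920, 1080)
scale    = 0.5

scaled = (int(source[0] * scale), int(source[1] * scale))
print(scaled)                            # (2048, 1080)

slack_x = (scaled[0] - timeline[0]) // 2
slack_y = (scaled[1] - timeline[1]) // 2
print(slack_x, slack_y)                  # 64 0
```

In other words, you can slide the frame up to 64 pixels left or right without any rescaling; a QuadHD source scaled the same way lands exactly on 1920×1080 with no slack.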
The same principles hold true, if not more so, when shooting above 4K with the new Red cameras. There is zero image quality increase from oversampling beyond twice your target resolution, so shoot with the intent of center-cutting to QuadHD by framing much wider. The only exception would be shots you want the option to zoom into in post, but that is really the same approach from a certain perspective. (Treat your target zoom level as the QuadHD framing point, if possible.)
Now this only applies to shooting 4K RAW, but that includes most 4K cameras currently available, and will likely include the new Canon C500 and the Sony NEX-FS700 once true RAW recording options become available. I predict that most deliverables will remain 1080p or 2K for many years to come, even as more 4K acquisition tools hit the market, so this approach should remain relevant for quite a while.