Video & Data Levels Demystified!


There can be a lot of confusion around the use of video or data levels when recording with most cameras, including Sony's, and it usually rears its ugly head during post production, resulting in shifted levels, clipped signals, and a mess of confusion that can leave you scratching your head and asking:

what happened to my levels?!?

On an A7S thread on the DVXUSER community forum, a user noted that the internally recorded waveform differed from the externally recorded waveform over HDMI and thought something was drastically wrong. In this case the cause wasn't really an issue at all, just the difference between YCbCr 4:2:2 digital video and RGB data. The camera records YCbCr 4:2:2 in Rec.709 space, but the external recorder appeared to be using full RGB data levels, or otherwise scaling the output.

But that observation brings up some really good questions and potential issues when it comes to the post-production workflow:

To simplify the discussion: most cameras, including the Sony F3, F5, F55, A7S, and FS7, record video signals in Rec.709 space, which is YCbCr 16-235. Very few digital cameras actually record in a 100% true RGB container using data levels. That said, some external recorders may support it, though it typically also depends on the codec. And although certain codecs such as ProRes 4444 support a "true" RGB color space, ARRI states that in the case of the Alexa they always record to ProRes using "legal range".

Back to XAVC, however: whether you choose S-Log or a hypergamma, it's recorded using standard Rec.709 space, which is "video levels" 16-235. Then there are also "extended" or "illegal" levels to consider, which Sony calls "full range" in their 2014 S-Log3 white paper, but technically speaking these are still Rec.709. In other words, it doesn't suddenly become "true" RGB data levels just because blacks drop below 16 or superwhites go above 235.
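To put some numbers to it, here is a minimal sketch of the nominal scaling between legal "video levels" and full range, using the 8-bit luma ranges (the 10-bit equivalents are 64-940 legal and 0-1023 full). This is just the standard math, not how any particular camera or recorder implements it:

```python
# Nominal 8-bit luma ranges (assumed values for illustration only)
LEGAL_BLACK, LEGAL_WHITE = 16, 235   # "video levels" per Rec.709
FULL_BLACK, FULL_WHITE = 0, 255      # "data levels" / full range

def legal_to_full(code: float) -> float:
    """Scale a legal-range (16-235) luma code value onto the full 0-255 range."""
    return (code - LEGAL_BLACK) * (FULL_WHITE - FULL_BLACK) / (LEGAL_WHITE - LEGAL_BLACK)

def full_to_legal(code: float) -> float:
    """Scale a full-range (0-255) luma code value into the legal 16-235 range."""
    return code * (LEGAL_WHITE - LEGAL_BLACK) / (FULL_WHITE - FULL_BLACK) + LEGAL_BLACK

if __name__ == "__main__":
    # Legal black/white land exactly on full-range black/white after scaling...
    print(legal_to_full(16), legal_to_full(235))   # 0.0  255.0
    # ...while a "superwhite" above 235 scales past 255; it is still Rec.709, just extended.
    print(legal_to_full(250))                      # ~272.5
```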

Practical Example

If you take an XAVC clip into DaVinci Resolve, right-click it in the media pool, select CLIP ATTRIBUTES, and switch it to VIDEO LEVELS, you will see the waveform expand and fill more of the scope. If you switch it back to DATA LEVELS, the whole waveform shrinks: blacks are lifted and highlights are lowered, giving the effect of reduced contrast.
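As a rough illustration of why the DATA LEVELS interpretation looks lower in contrast, here is a toy sketch of where an 8-bit code value lands on a 0-100% scope under each interpretation. This is only the nominal arithmetic, not Resolve's actual internals:

```python
def scope_percent(code: int, interpret_as: str) -> float:
    """Approximate position of an 8-bit code value on a 0-100% scope,
    depending on how the clip's levels are interpreted."""
    if interpret_as == "video":      # 16-235 is rescaled so 16 -> 0% and 235 -> 100%
        return (code - 16) / (235 - 16) * 100.0
    if interpret_as == "data":       # no rescaling: 0 -> 0% and 255 -> 100%
        return code / 255.0 * 100.0
    raise ValueError(interpret_as)

# A legal-range recording (blacks at 16, whites at 235), interpreted both ways:
for code in (16, 235):
    print(code,
          round(scope_percent(code, "video"), 1),   # 0.0 and 100.0: fills the scope
          round(scope_percent(code, "data"), 1))    # ~6.3 and ~92.2: lifted blacks, lowered whites
```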

Example:
Below is a high-contrast scene I shot using HG7 on the Sony F55 in XAVC HD. I'm using a hypergamma in this example instead of S-Log to help illustrate the effect better:

This is how it appeared on my monitor and how I exposed the shot:

Here it is in DaVinci Resolve with the clip attributes set to DATA LEVELS. The contrast is reduced: blacks move up the scale and whites come down.

If I go back to clip attributes and select VIDEO LEVELS, the image looks normal again. (Note the highlights, however.)

Then if I use the gain wheel in Resolve to reduce the image gain, even though the highlights first appeared to be clipped, you can see they are easily restored.

It's important to note that if you are working with a "full range" (aka "extended" or "illegal") signal in DaVinci Resolve and you've set your project up for "video levels", the signal looking clipped on the waveform monitor does not mean any of it is actually clipped or lost! All you need to do is use the gain, offset, or shadows wheels to bring the waveform back into a workable range.
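Conceptually, the gain wheel is just scaling those code values back down, and because nothing has been thrown away, the clipped-looking samples come back. Here is a toy sketch of that idea, assuming the whole extended signal survives into the grade (the numbers are illustrative, not Resolve's internal math):

```python
def apply_gain(code: float, gain: float) -> float:
    """Scale a code value around black. Nothing is clamped here, which is the point:
    a sample above legal white is still present and comes back down with gain < 1."""
    return (code - 16) * gain + 16

superwhite = 250                      # an "extended"/"illegal" sample above 235
print(apply_gain(superwhite, 0.9))    # ~226.6, now back inside 16-235 and visible on the scope
```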

That's great for DaVinci Resolve, but it's also where things can get ugly depending on how your NLE interprets the footage you ingest. If, for example, you were to bring that same high-contrast XAVC scene shot with a hypergamma into FCP X, how would FCP X treat it? Sony's S-Log3 white paper calls the signal "full range", but this should not be confused with DATA LEVELS. Here's what happens when I bring some XAVC HG footage into FCP X and then try to reduce the brightness:

Initially the waveform is clipped and occupies the entire scope. If I apply some correction or exposure settings and try to bring the highlights down, they don't come back; they are clipped for good. Now what? At the time I did this testing I found a plugin for an older version of FCP that would trick FCP in how it handled these extended video signals and allow you to bring the levels back into range. Is this a bug, then, or just a quirk in the way some NLEs handle footage? In DaVinci it's no big deal because you can pick the levels of your project and make sure your overall grade conforms to them before exporting, but if you are going straight to an NLE you could run into problems. It's something to be aware of, especially if you suddenly find yourself dealing with images that have contrast added or removed that you weren't expecting.
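The difference comes down to whether the application clamps the extended range on ingest or keeps it around for the grade. A toy model of the two behaviours (not FCP X's or Resolve's actual code, just an illustration of the consequence):

```python
def ingest(code: float, clamp_on_ingest: bool) -> float:
    """Hypothetical ingest step: some apps clamp everything above legal white (235)."""
    return min(code, 235.0) if clamp_on_ingest else code

superwhite = 250.0
for clamp in (False, True):
    kept = ingest(superwhite, clamp)
    recovered = (kept - 16) * 0.9 + 16      # same gain reduction as in the sketch above
    print(clamp, round(recovered, 1))
    # False -> ~226.6: the superwhite detail comes back
    # True  -> ~213.1: identical to plain legal white, the detail above 235 is gone for good
```

Either behaviour can be a legitimate design choice; the trouble only starts when you assume one and your software does the other.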

I hope this has proved insightful for some. If anyone has had a different experience in post production, I'd definitely welcome comments to discuss it further. It's quite a fascinating topic, and I have certainly spent a long time trying to wrap my own head around it.

References:

http://www.arri.com/camera/alexa/cameras/camera_details.html?product=11&subsection=prores_codec_overview&cHash=0c78ed8542829bd4f376544398e43fc5

https://www.apple.com/final-cut-pro/docs/Apple_ProRes_White_Paper.pdf

http://www.arri.com/camera/alexa/learn/alexa_faq/  (Section 11)

  1. Rafael (10-30-2014)

    Does DaVinci Resolve still not accept XAVC-S sound? Is it better to transcode to ProRes or grade the internal file? I really want to work with DaVinci, but transcoding worries me because of the quality loss. Sorry for the English.

    • Dennis Hingsberg (11-03-2014)

      Hi Rafael, although I only use Resolve for grading, I do see sound on the Resolve timeline from the F5/F55 MXF files, and when I scrub them I hear the sound no problem. I've also used Resolve to transcode to ProRes or H.264 with sound and have never had any issues. Hope this helps.

  2. Joseph Moore (10-30-2014)

    I just recently posted a similar article about how the GH4 handles luminance levels. Unfortunately, there is no "demystifying" this topic since it's a bona fide mess. 🙂

    • Dennis Hingsberg (11-03-2014)

      Thanks for posting your link, Joseph. I know this subject is confusing to many and only gets worse once you get into the post end of it. Hopefully our articles help make things clearer. :S

  3. Alexander (10-30-2014)

    Joseph, yes, it is a mess. Sometimes you get data levels on video codecs... When you are unsure, always choose Data.
    If unsure, record some pure white and pure black objects and see if the blacks are at the right level (they should be at 0 if your camera's pedestal is correct). If you ever find you can recover highlight information from "outside" the waveform scope, then you're in the wrong level setting!

    • Dennis Hingsberg (11-03-2014)

      Alex, thanks for pointing out that even though the ProRes 4444 codec can support full RGB color space, the Alexa in fact only records legal levels to it. This was a miss on my part.

      On a technical note, you are not really getting data levels on video codecs. Extended video levels in Rec.709 space may use code values 4-1019 (1016 code values), versus a true RGB container using "data levels", which is 0-1023 (1024 code values). Although they are close enough to interchange in post, in DaVinci Resolve "data levels" actually refers to that full 0-1023 RGB range, which most post houses would use for CG or other visual effects. Formats such as DPX or EXR make use of the full RGB range. Overall, it's confusion that comes from similar yet different terms being used for video signals in general and in post.
