There can be a lot of confusion around the use of video or data levels when recording with most cameras, including Sony's, and it usually rears its ugly head during post production, resulting in everything from shifted levels to clipped signals and a general mess that can leave you scratching your head, asking:
what happened to my levels?!?
On an A7S thread on the DVXUSER community forum, a user noted that the internal recording's waveform differed from the externally recorded waveform over HDMI and thought something was drastically wrong. In that case the cause was not really a fault at all, just the difference between YCbCr 422 digital video and RGB data. The camera records YCbCr 422 in REC709 space, but the external recorder appeared to be using full RGB data levels, or otherwise scaling the output.
But that observation raises some really good questions and potential issues in the post production workflow:
To simplify the discussion: most cameras, including the Sony F3, F5, F55, A7S and FS7, record video signals in REC709 space, which is YCbCr 16-235. Very few digital cameras actually record in a 100% true RGB container using data levels. Some external recorders may support it, but that also typically depends on the codec. And although certain codecs such as ProRes 4444 support a “true” RGB color space, ARRI states that in the case of the Alexa they always record to ProRes using “legal range”.
Back to XAVC, however: whether you choose slog or a hypergamma, it’s recorded using standard REC709 space, which is “video levels” 16-235. Then there are also “extended” or “illegal” levels to consider, which Sony calls “full range” in their 2014 slog3 white paper, but technically speaking these are still REC709. That is, it doesn’t become “true” RGB data levels all of a sudden just because blacks drop below 16 or superwhites go above 235.
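To make the two encodings concrete, here is a minimal sketch of the standard 8-bit quantization math being described: “video levels” put black at code 16 and 100% white at code 235, while “full range” uses the whole 0-255 scale. This follows the generic narrow-range formula (code = 16 + 219 × Y), not any specific camera’s implementation; the function names are my own.

```python
# Hedged sketch of REC709-style 8-bit quantization. Illustrative only,
# not any camera's actual pipeline.

def quantize_video_levels(y: float) -> int:
    """Map normalized luma Y (0.0 = black, 1.0 = 100% white) to an
    8-bit "video levels" code. Values above 1.0 land in the
    superwhite region (236-255) without ever leaving YCbCr space."""
    return min(255, max(0, round(16 + 219 * y)))

def quantize_full_range(y: float) -> int:
    """Map the same luma to "full range" codes 0-255."""
    return min(255, max(0, round(255 * y)))

print(quantize_video_levels(0.0))   # 16  -> legal black
print(quantize_video_levels(1.0))   # 235 -> legal white
print(quantize_video_levels(1.09))  # 255 -> top of the superwhites
print(quantize_full_range(1.0))     # 255
```

The point of the sketch is that a superwhite is just a code above 235 in the same container; nothing about it has turned into RGB data levels.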
If you were to take an XAVC clip into DaVinci Resolve, right-click the clip in the media pool, select CLIP ATTRIBUTES and switch it to VIDEO LEVELS, you would see the waveform expand to fill more of the scope. Switch it back to DATA LEVELS and the whole waveform shrinks: blacks are lifted and highlights lowered, which has the effect of reduced contrast.
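That expand/shrink behavior is just the two normalizations at work. The sketch below, assuming the generic narrow-range vs full-range math rather than Resolve’s internal code, shows why a true video-levels clip interpreted as data levels ends up with lifted blacks and lowered whites:

```python
# Sketch of what the VIDEO LEVELS / DATA LEVELS interpretation changes:
# how an 8-bit code is normalized for the scopes. Generic math, not
# Resolve's actual implementation.

def normalize_video_levels(code: int) -> float:
    """Interpret the clip as video levels: 16 -> 0%, 235 -> 100%.
    Superwhites above 235 come out above 1.0 (they look clipped on a
    video-levels scope but are still there)."""
    return (code - 16) / 219

def normalize_data_levels(code: int) -> float:
    """Interpret the clip as data levels: 0 -> 0%, 255 -> 100%."""
    return code / 255

# A true video-levels clip wrongly interpreted as data levels:
print(normalize_data_levels(16))    # ~0.063 -> black is lifted
print(normalize_data_levels(235))   # ~0.922 -> white is lowered
# The same codes interpreted correctly as video levels:
print(normalize_video_levels(16))   # 0.0
print(normalize_video_levels(235))  # 1.0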
Below is a high contrast scene I shot using HG7 on the Sony F55 camera in XAVC HD. I’m using HG’s in this example instead of slog to help illustrate the effect better:
This is how it appeared in my monitor and how I exposed the shot:
Looking in DaVinci Resolve with the clip attributes set to DATA LEVELS, the contrast is reduced: blacks come up the scale and whites come down.
If I go back to clip attributes and select VIDEO LEVELS, the image looks normal again. (Note the highlights, however.)
Then if I use the gain wheel in Resolve to reduce the image gain, you can see that even though the highlights first appeared clipped, they are easily restored.
It’s important to note that if you are working with a “full range” (aka “extended” or “illegal”) signal in DaVinci Resolve and you’ve set your project up for “video levels”, the fact that the signal looks clipped on the waveform monitor does not mean any of it is actually clipped or lost! All you need to do is use the gain, offset or shadows wheels to bring the waveform back into a good workable range.
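Numerically, the reason those highlights survive is that an extended code simply normalizes to a value above 1.0, and a gain below 1.0 pulls it back under legal white. A minimal sketch, assuming the same generic video-levels normalization as before (not Resolve’s internal processing):

```python
# Why an "over 100%" waveform isn't necessarily clipped: extended codes
# survive as values above 1.0, so reducing gain recovers them.
# Illustrative math only.

def normalize_video_levels(code: int) -> float:
    """16 -> 0.0, 235 -> 1.0; superwhites exceed 1.0."""
    return (code - 16) / 219

superwhite = normalize_video_levels(250)  # ~1.068, above legal white
print(superwhite)

recovered = superwhite * 0.90             # reduce gain in the grade
print(recovered)                          # ~0.96, back under 100%
```

As long as the app keeps those above-1.0 values around internally, a simple gain move brings the detail back onto the scope.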
That’s great for DaVinci Resolve, but this is also where things can get ugly, depending on how your NLE software interprets the footage you ingest. If, for example, you were to bring that same high contrast XAVC scene shot with the HG gamma into FCP X, how would FCP X treat it? Sony’s white paper on slog3 calls the signal “full range”, but this should not be confused with “DATA LEVELS”. Here’s what happens when I bring some XAVC HG footage into FCP X and then try to reduce the brightness:
Initially the waveform is clipped and occupies the entire scope. If I apply correction / exposure settings and try to reduce the highlights, they don’t come back: they are clipped for good. Now what? At the time I did this testing I found a plugin for an older version of FCP that would trick FCP in how it handled these extended video signals and allow you to pull the levels back into range. Is this a bug, then, or just a quirk of how some NLEs handle footage? In DaVinci it’s no big deal, because you can pick the “levels” of your project and just make sure your overall grade conforms to them before exporting, but if you are going straight into an NLE you could run into problems. It’s something to be aware of, especially if you suddenly find yourself dealing with images that have contrast added or removed that you weren’t expecting.
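The difference between the two behaviors comes down to where the clamp happens. This hedged sketch contrasts an app that preserves extended values until output with one that clamps on ingest, which is consistent with the FCP X behavior described above (the functions are hypothetical, not either app’s actual code):

```python
# Clamp-at-output vs clamp-on-ingest. Hypothetical processing order,
# illustrating the idea rather than any specific app's pipeline.

def grade_preserving(value: float, gain: float) -> float:
    """Keep super-unity values through the grade; clamp only at the end."""
    return round(min(1.0, value * gain), 3)

def grade_clamping(value: float, gain: float) -> float:
    """Clamp on ingest first: anything above 1.0 is gone for good."""
    return round(min(1.0, value) * gain, 3)

superwhite = 1.07  # an extended highlight, a bit above legal white
print(grade_preserving(superwhite, 0.9))  # 0.963 -> detail recovered
print(grade_clamping(superwhite, 0.9))    # 0.9   -> a flat, clipped white
```

In the clamping case every superwhite collapses to the same value before the gain is applied, so no amount of correction afterwards can separate them again.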
I sure hope this has proved insightful for some. If anyone has had a different experience in post production, I’d definitely welcome comments to discuss it further. It’s quite a fascinating topic, and I certainly spent a long time trying to wrap my own head around it.
http://www.arri.com/camera/alexa/learn/alexa_faq/ (Section 11)