Shooting a Feature Film on the Canon 5D

The Canon 5D Mark II was the first DSLR to offer HD video capture worth considering as a replacement for film.  Its full-frame sensor, full resolution 1080p recording, and high quality 40Mbps H.264 compression differentiated it from all competitors.  I have experimented with many of the other DSLR options on the market, but most of the projects I have worked on over the last year have been shot with the Canon 5D, so the majority of my experience and workflow expertise is with that particular camera, and that is what I will try to share here.  The workflow has improved greatly as the tools have matured over the course of the last year.  The most glaringly obvious issue was that the 5D only shot 30fps, but that was acceptable for certain workflows, especially when the 5D was the only camera on a project.

A much larger issue was that the camera did not give the user manual control over certain important settings while in video mode, including aperture, shutter speed, and ISO.  The settings could not be dialed in directly, but whatever values the auto-exposure system arrived at could be locked for the duration of the next shot.  With three variables all changing at once, it was nearly impossible to trick the camera's auto-exposure system into giving you the settings you wanted with any consistency.  The easiest setting to override was aperture, since that is controlled in the lens.  By preventing the camera from communicating with the lens, the automatic feature could be disabled, but with no electronic communication to the lens, the aperture had to be set physically.  Older manual Nikon Nikkor lenses, which have physical aperture rings, were the only ones that adapted easily to the 5D.  Once the aperture was set, the standard practice was to point the camera at lighter or darker areas until the auto-exposure system arrived at the desired settings, and then lock it.  This process had to be repeated for every take or shot, since stopping recording put the camera back into full auto.  Regardless, many people used this method of manipulating the camera to get the results they wanted for the first few months after its release, and I worked on a number of commercial projects that did.  Canon was not especially excited about promoting the use of Nikon glass over its own lenses, so this was one of the first issues they fixed.  The 1.1.0 firmware update solved the problem by allowing the user to manually set the aperture, shutter speed, and ISO, and keep them consistent from shot to shot.

Once the lens issue was dealt with, we were left with a selection of H.264 encoded MOV files.  H.264 is a processing intensive format that does not play back or edit very well.  While QuickTime would play the files, it clipped the blacks and whites at incorrect levels: the values from 16 to 235 were being stretched to 0 and 255 on decode, clipping anything the camera recorded outside that range and lowering the effective dynamic range.  This was caused by QuickTime incorrectly interpreting one of the header fields in the file.  The workaround was to use CoreAVC to decode the files when converting them into a different, and ideally more edit friendly, compression format.  Shortly after this workaround was developed, Apple released a QuickTime update (7.6) that fixed this particular issue entirely.
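
To make the problem concrete, here is a rough numpy sketch of what that faulty decode does to full-range values; this is my own illustration, not QuickTime's actual code.  Anything the camera recorded below 16 or above 235 gets crushed or clipped when the 16-235 range is stretched to 0-255.

    import numpy as np

    def video_range_stretch(frame_8bit):
        """Remap assuming legal video range (16-235), as the faulty decode did."""
        stretched = (frame_8bit.astype(np.float32) - 16.0) * (255.0 / (235.0 - 16.0))
        return np.clip(stretched, 0, 255).astype(np.uint8)

    # Shadow and highlight detail recorded by the camera outside 16-235 is lost:
    samples = np.array([4, 16, 128, 235, 250], dtype=np.uint8)
    print(video_range_stretch(samples))  # -> [  0   0 130 255 255]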

Beyond the clipping issue, there are other tricks to maximize the dynamic range of the 5D.  The picture style controls how the camera converts the 14-bit RAW still into an 8-bit JPEG, and the same picture style settings are applied to the 8-bit recorded video.  This lets you squeeze the maximum detail out of the available 8 bits of color depth.  On the first few 5D projects I worked on, we used a custom picture profile that I got from Stu Maschwitz's ProLost blog, High Gamma 5.  We ran a number of comparison tests, and while High Gamma 5 gave us a wider total dynamic range, for our feature film we eventually decided to use Neutral, one of the default Canon presets.  Neutral gave us a file that was closer to the final look we were going for, and with only 8 bits of color depth, burning in your look, at least to a degree, should result in better picture quality at the end of the day.
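
Here is a rough numeric illustration of that tradeoff; the curves are made up for the example and are not Canon's actual picture style math.  Applying a contrast stretch after a flat 8-bit encode leaves fewer distinct tones, and therefore more banding, than quantizing with the look already applied.

    import numpy as np

    scene = np.linspace(0.0, 1.0, 4096)              # idealized scene tones

    # Flat profile: record a wide range with compressed contrast (made-up curve),
    # then stretch the look back on in post.
    flat = np.round(scene * 153 + 25)                # only uses 8-bit codes 25-178
    graded_later = np.round(np.clip((flat - 25) / 153, 0, 1) * 255)

    # Same look applied in camera, before quantizing to 8 bits.
    burned_in = np.round(scene * 255)

    print(len(np.unique(graded_later)))              # 154 distinct tones
    print(len(np.unique(burned_in)))                 # 256 distinct tones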

Every file the camera records is named MVI_####.mov, with an auto-incrementing number and no real override options.  That keeps things simple on tiny single-camera projects, since each file has a unique name.  On larger projects, and ones that use more than one camera (we usually have 15), file management takes more work to keep things straight throughout the post production process.  Our solution was to rename each MOV file with a unique 8-digit identifier as the new filename, and store the key to the original card and filename in a database.  This gives each clip a consistent name throughout the process, which can show up on EDLs as a tape name or clip name as desired, without formats that truncate names after the 8th character destroying their uniqueness.  By the time we are done, we usually have a source MOV, an Avid MXF, and an online Cineform AVI, all with the same content and file name.
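
Here is a minimal sketch of that ingest step, assuming a simple SQLite table as the database and random 8-character IDs; the function name and ID scheme are placeholders for illustration, not our actual tool.

    import os
    import shutil
    import sqlite3
    import uuid

    def ingest_card(card_path, card_label, dest_path, db_path="clip_key.db"):
        """Copy MOVs off a card under unique 8-character names and log the key."""
        db = sqlite3.connect(db_path)
        db.execute("""CREATE TABLE IF NOT EXISTS clips
                      (clip_id TEXT PRIMARY KEY, card TEXT, original_name TEXT)""")
        for name in sorted(os.listdir(card_path)):
            if not name.upper().endswith(".MOV"):
                continue
            clip_id = uuid.uuid4().hex[:8].upper()   # unique identifier for the clip
            db.execute("INSERT INTO clips VALUES (?, ?, ?)", (clip_id, card_label, name))
            shutil.copy2(os.path.join(card_path, name),
                         os.path.join(dest_path, clip_id + ".mov"))
        db.commit()
        db.close()

    # e.g. ingest_card("/Volumes/CF_001/DCIM/100CANON", "CF_001", "/media/source_movs")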

Next up was the framerate problem: 30p.  The first few projects I did with the 5D were posted at 29.97, so the issue was solved with a simple reinterpretation of the framerate when converting from the source H.264 into an editing codec, and tweaking the audio 0.1% to match.  Unfortunately 29.97 footage doesn't intercut with film very well, and won't print back for theatrical masters either, so sometimes a 24p workflow is required.  For 24p projects, the conversion is much more complicated, involving motion compensated frame blending.  After extensive testing we concluded that this was best done with the RE:Vision Effects Twixtor plugin for After Effects, or with Optical Flow in Compressor on OS X.  Having a PC centered workflow, I favor the AE based solution.  With render times of around an hour per minute of source footage, it is impractical to convert all of the source footage on large projects, which necessitates an offline edit.  Since we don't have timecode and keycode, relinking for the online requires a bit more creativity.  We have found some interesting options unique to Premiere Pro CS4, related to the way it links EDLs to existing source footage, that make this much simpler than our first tedious tests, which involved manually rebuilding projects at 24p in Premiere Pro CS3.  CS4 can convert the TC-In on an EDL to a frame-counted in-point of an existing media file, which makes onlining 5D footage a relatively simple automatic process after a few find-and-replace edits to the EDL (.mov to .avi in our case).  In the future, it looks like Canon is going to support 24p recording on all of their DSLR offerings, so all of these 30p workarounds will soon be obsolete.
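
For illustration, here is a small sketch of the EDL fix-up side of that process, assuming a non-drop-frame 30fps EDL; the helper names are hypothetical, and the actual relinking happens inside Premiere Pro CS4.

    def tc_to_frames(tc, fps=30):
        """Convert an HH:MM:SS:FF timecode to an absolute frame count."""
        hh, mm, ss, ff = (int(x) for x in tc.split(":"))
        return ((hh * 60 + mm) * 60 + ss) * fps + ff

    def retarget_edl(edl_text):
        """Point the EDL at the onlined AVI copies instead of the offline MOVs."""
        return edl_text.replace(".mov", ".avi").replace(".MOV", ".AVI")

    print(tc_to_frames("01:00:10:15"))   # 108315 frames into the source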

Although they hold up in rough environments much better than most other electronics, Canon DSLRs do have their weaknesses.  I have operated a 5D in temperatures of 20 below zero, and in the desert at over 120 degrees Fahrenheit.  While we had no issues in the cold, where solid state recording has a huge advantage over tape, there are some issues at higher temperatures.  The camera sensor is a large piece of silicon that generates a lot of heat on its own, and when that is combined with a high external temperature, in the worst cases the camera shuts itself off.  You probably have to be over 150 degrees to reach that point, such as leaving the camera in a black metal box in direct sunlight for an extended period of time, but we have seen it happen.  A more frequent problem, and one that is harder to detect, is that as the sensor begins to overheat, there will be much more video noise in the recorded picture, especially in the darks.  This is most likely thermal noise, since the sensor's dark current increases as the chip heats up.  It has only been a problem for us when shooting with the same camera for many hours in a hot environment, and our solution is usually just to swap the camera body for one that has not been used in a while.  That obviously requires having multiple cameras on set, which isn't always an option on lower budget projects.

The last issue, which we are still finding new ways to deal with, is rolling shutter.  With a large format CMOS sensor, DSLRs are subject to rolling shutter, meaning the top and bottom of the frame are not sampled at the same instant.  I have spent the last few months working on a project that put the 5D into some of the most intense situations.  As a fairly lightweight device, it is subject to more jitter and shake than a larger camera with more inertia, and when the camera is moving, the rolling shutter leaves the recorded picture slightly geometrically skewed, depending on the direction of the motion.  We also shoot high speed objects, like helicopter rotor blades, which are known to cause strange artifacts in certain instances.  So far we have been lucky with that, and haven't found any of those types of issues in our footage.

The type of rolling shutter artifact we are struggling with the most is gunfire muzzle flashes, especially at night.  In the dark, the flash blows out the imager, but it does not last as long as even a single frame.  So with the rolling shutter, the top half of a frame will be totally blown out while the bottom part looks normal, because the flash had subsided by the time that part of the chip was sampled, or vice versa.  Setting the shutter speed slower than the frame rate causes the flash to corrupt more of the frame, or more frames, and setting it faster narrows the flash into a distinct horizontal band in the footage; neither is desirable.  One thing we have found that helps is setting the shutter on the 5D to 1/30th.  (We usually set it to 1/50 to get motion blur similar to film shot with a 180 degree shutter.)  With the 30p framerate, the flash then either affects an entire frame, or matching parts of two subsequent frames: the bottom part of one frame and the corresponding area at the top of the next.  Stitching those two parts together gives us an entire overexposed frame, which can be hand cut back into footage that has been brought from 30p to 24p by manually selecting frames.  It remains to be seen whether this solution can be scaled practically to our entire movie.  The best way to avoid the issue is to avoid recording gunfire at close range in very dark environments.  The farther you are from the muzzle flash, and the more ambient light there is, the less it will flare out your camera, minimizing the degree of the resulting rolling shutter artifact.
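
Here is a rough sketch of that stitch, assuming frames loaded as H x W x 3 numpy arrays and a simple brightness threshold to find where the flash starts; in practice we pick the split point by eye, and the function name and threshold are just placeholders.

    import numpy as np

    def stitch_flash_frame(frame_a, frame_b, threshold=230):
        """frame_a has the flash in its lower rows, frame_b in its upper rows."""
        row_brightness = frame_a.mean(axis=(1, 2))   # average brightness of each row
        flash_rows = np.where(row_brightness > threshold)[0]
        split = flash_rows[0] if flash_rows.size else frame_a.shape[0] // 2
        stitched = frame_a.copy()
        stitched[:split] = frame_b[:split]           # blown-out top comes from the next frame
        return stitched                              # blown-out bottom is already in frame_a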

That should convey some of the challenges we have faced in using DSLRs for filmmaking, especially on large scale projects, but it is by no means an exhaustive list.  As the tools evolve to suit the cameras, and the cameras evolve to suit the tools, many of these issues will become much easier to solve and require fewer workarounds.  The H.264 decoding issue was solved by a new release of QuickTime, and manual lens control by a new firmware release from Canon.  The 30p conversion process is the next issue I see becoming a thing of the past, if Canon can get a 24p recording option onto the 5D.  I am looking forward to that day, but in the meantime I have 2TB of 30p footage, divided into 5,000 shots, to cut into a 24p film, so I have a lot of work ahead of me.

9 thoughts on “Shooting a Feature Film on the Canon 5D”

  1. Richard

    Hi,

    Interesting post; good insights.

    After an extensive period of researching cameras (I was considering an EX-3 with a Letus 35mm adapter), I decided to go DSLR all-the-way. I now own both a 5D-ii and a 7D, with proper 35mm glass and stabilisation gear from Zacuto. I’m in love with the shallow DOF and the versatility of the camera (timelapses).

    I too wrote a piece on working with HD video on ReelSEO: http://www.reelseo.com/hd-video-dslr-camera/

    Look forward to seeing the final result of your movie. Where/when is it going to be released?

    Richard

  2. bcj.

    Thanks for taking the time to share your experience with the 5DmkII. Regarding “converting the video files into a different, and ideally more edit friendly, compression format,” would you be able to share what format you chose to convert to? Do you experience any color shift as a result of the transcode? If so, is it significant enough to address, or do you just live with it?

    Thanks for your time,

    bcj.

  3. McCarthyTech (Post author)

    Depends on the project: DNxHD36 for the last two big ones; Cineform, ProRes, or Matrox for previous ones. The color shift is irrelevant because it’s an offline. We relink to the source footage for the online. With the online we usually see a gamma shift, but we account for it in color correction.

  4. Frank Arch

    Mike,
    HELP!
    I’m still trying to put together the best possible recipe for my 5Dmk2 stock shots to bring them up to their highest possible quality for a feature film project. And even though I’m not using Avid or a PC (Mac and PPro CS5…sorry…), I’d love to hear your suggestions… In this quest to find the best transcode software and the best “post” recipe, here’s what I’m testing tomorrow on the SpeedGrade DI at the Technicolor lab we have here in town:

    -I took my native CF card stock shots and dumped them onto an external drive.
    -I renamed and categorized it all.
    -I used the Cineform NeoHD, NeoScene and 5DtoRGB software to transcode the H264 files into higher quality files (CF422 FilmScan1, switching ON the deinterlace and Limit the YUV, 5DtoRGB PR4444, PRHQ etc…)
    -I simply laid all the files down on a timeline in Premiere Pro CS5, and then exported the sequence as DPX files.
    -I will then go see the colorist with my hard drive and look at it on the SpeedGrade DI, hoping he’ll be clear on what transcodes he prefers…

    Now, I’m trying to find the best way to optimise those files for a 35mm blowup… Can anything be done to bring those h264 files “cheated” up to higher quality files? Is there another transcode software you would recommend? Am I doing anything wrong here?

    Thanks for your help, Mike.

    Francois.

  5. McCarthyTech (Post author)

    Import the native source H264 MOVs into Premiere CS5 on your Mac, and export your series of clips directly to a DPX sequence. This will give you the maximum possible quality. The color will be different than the Cineform conversions, because that process uses Quicktime, but the colorspace directly from CS5 is “correct” and will be higher overall quality. That should also stretch the 0-255 values of the camera into the 10bit space of the DPX without squeezing them into 16-235 (8bit video space) at any point. You could render into Cineform instead of DPX and the result would be the same. The important step is to have CS5 decoding the source H264 MOV.
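
    As a quick numeric illustration (made-up numbers, not Adobe’s actual pipeline), here is why skipping the 16-235 step matters:

        import numpy as np

        full_range_8bit = np.arange(256)

        # Straight to 10-bit: every one of the 256 camera codes stays distinct.
        direct_10bit = np.round(full_range_8bit * (1023 / 255)).astype(int)

        # Detour through 16-235 video range at 8 bits, then expand to 10-bit.
        squeezed = np.round(full_range_8bit * (219 / 255) + 16).astype(int)
        via_video_range = np.round((squeezed - 16) * (1023 / 219)).astype(int)

        print(len(np.unique(direct_10bit)))      # 256 distinct codes survive
        print(len(np.unique(via_video_range)))   # 220 -- codes were merged by the 8-bit squeeze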

  6. Frank Arch

    Thanks for your help, Mike. And I did put the native file on that DPX test run. I’ll be looking at it with the colourist on Friday. But I’m curious… Am I limited by my tools?… So Mike, if you were me, and you could go in any direction and had the money for whatever route… would that still be the best way to optimize those H264 files? Or is there a better recipe available?…

    Francois.

  7. McCarthyTech (Post author)

    Adobe’s H264 importer in CS5 was custom designed for Canon DSLR files, and is the only application I am aware of that properly decodes Canon’s non-standard colorspace (full range with a 601 color matrix). So as far as I am aware, CS5 is the best solution money can buy for decoding DSLR files, at least for right now.

