Editing DSLR Footage in Avid

Avid Media Composer works on a very different paradigm than either Premiere or Final Cut. This makes the application more stable than its competitors on larger projects with lots of source footage, but it involves more steps in the workflow to get to your final product. (The recent announcements about Avid’s new features in the upcoming version 5.0 will offer dramatically different options. New AMA support will allow you to work with DSLR footage and other Quicktime files in a similar fashion to how they are handled by Premiere and Final Cut. These new workflow options will be examined in a separate post once the new version is publicly released.) While Avid is capable of online-quality work, it is most popular as an offline editing program.

For the Navy SEAL movie, I developed a workflow that allows us to intercut the 30p footage from the Canon 5D with 24p footage that we shot on film. It generates EDL sequences that can be accurately re-linked to the 30p footage after it has been processed to 24p in a motion-compensated frame rate conversion. This works because of the way Avid generates new DNxHD intermediate files of your media upon import. These new files match the project frame rate of 24p by dropping the extra frames from the original 30p MOV files. That is usable for editorial, and allows you to generate a proper 24p EDL. Premiere Pro CS4 can re-link to existing tapeless media from an EDL by frame counting based on EDL timecode. (This only works if editorial sees each clip as starting at 00:00:00:00.) That lets you re-link to the original source footage, assuming it’s 24p. (With the 1D and 7D, it is.) The 30p footage from the 5D will not re-link, since the frame count is different at that frame rate. But we want 24p footage anyway, and not just for syncing purposes during the online re-link. Converting the 30p footage to 24p with Twixtor allows those exported clips to be properly linked to the EDL from Avid, within one frame.
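As a rough illustration (my own, not part of the original workflow), the frame-count mismatch can be seen by converting the same non-drop-frame timecode to an absolute frame count at both rates:

```python
# Sketch: why an EDL event from a 24p project will not line up against
# the original 30p source file when re-linking by frame count.
def timecode_to_frames(tc, fps):
    """Convert a non-drop-frame HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# A cut point ten seconds into a clip that editorial saw starting at 00:00:00:00:
edl_in = "00:00:10:00"

frames_at_24 = timecode_to_frames(edl_in, 24)  # position in the 24p intermediate
frames_at_30 = timecode_to_frames(edl_in, 30)  # position in the 30p original

print(frames_at_24, frames_at_30)  # 240 vs 300: same timecode, different frames
```

Once the 30p original has been converted to 24p, the frame counts agree again, which is why the Twixtored clips link back to the Avid EDL within a frame.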

Twixtor is a plug-in from RE:Vision Effects that allows you to change the frame rate of your footage through motion-compensated frame blending. While it can be used to add frames for slow-motion effects, I have found that I get much better results when removing frames, such as when dropping from 30fps to 24fps. Regardless of the specific settings, Twixtor takes a lot of time to render. In our first tests on 8-core Xeon systems, processing one minute of source footage required an hour of render time. Now with Intel’s new Nehalem-based CPUs, and more recently their even newer Gulftown 6-core chips, we have seen that reduced by about fifty percent, to a half hour per minute of source footage, which is still a long time, but feels great compared to where we were a year ago. Since our footage re-link process is based on frame counts, we have to process our entire source clips in order to take advantage of that level of workflow automation, even if we are only using the last ten seconds of a 14-minute take. Obviously there are ways around this, but we currently have more render time available to us than man-hours, and it gives us more flexibility later on anyway, so we just let it go. We took advantage of every night and weekend during creative editorial to Twixtor every clip that made it into the rough cut, and now we just have to link to that bank of processed footage to conform our cuts in CS4. The fact that all of Canon’s DSLRs now support 24p should alleviate most of the frame rate and Twixtor issues in future projects.

Besides frame rate issues, Canon DSLRs present another unique challenge in regards to color space and bit depth. Many professional video codecs store color values in the range between 16 and 235, out of the 256 possible 8-bit options. (The reasoning for this is fairly complicated, and relates primarily to legacy analog video signal issues.) This limits pixels to 220 levels for each color in most 8-bit codecs, but the MOV files from the Canon DSLRs use all 256 possible options (0-255) for each color. This increases the number of possible values for each three-color pixel by over 50% (220^3 vs. 256^3), but it also means that converting your DSLR footage into most other 8-bit formats will result in one of two issues: either the extreme values will be clipped, losing detail in the highlights and the shadows, or all of the dynamic range will be squeezed into the reduced sample space, meaning certain intermediate values are going to be merged together if you edit in an 8-bit codec.
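To make the squeeze concrete, here is a minimal sketch (my own illustration, using a simple linear mapping rather than any particular codec’s exact math) of what happens when full-range values are scaled into the video range:

```python
# Sketch: squeezing full-range 0-255 values into the 16-235 video range
# preserves the dynamic range but merges some adjacent levels.
def full_to_video_range(v):
    """Map a full-range 8-bit value (0-255) into the 220-level video range (16-235)."""
    return 16 + round(v * 219 / 255)

mapped = [full_to_video_range(v) for v in range(256)]

print(min(mapped), max(mapped))  # 16 235: the extremes survive, nothing clips...
print(len(set(mapped)))          # 220: ...but 256 input levels collapse to 220
```

Thirty-six of the original levels end up sharing a value with a neighbor, which is the precision loss that shows up later as banding under heavy color correction.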

Clipping was the most likely outcome in most existing applications prior to the release of Quicktime 7.6.2 in mid-2009. Before that point, Quicktime displayed Canon clips incorrectly (clipping the values beyond 16-235), but after that update was released, most applications that used Quicktime to decode DSLR footage were able to access the entire dynamic range of the source clips. This support is not a foregone conclusion though, since DSLR files could be imported with a more generic MPEG4 decoder without Quicktime, and still be displayed incorrectly. Even with properly calibrated import processes, compressing the 256 possible values for each color channel into the limited 220 values that most 8-bit video formats offer will lead to a loss of precision, and a potential increase in color banding, especially if you plan to color correct the footage later. A 10-bit video format offers roughly four times as many possible legal color values, and can store all of the original image data with precision to spare. Once you have color corrected your footage, and any visual effects are complete, an 8-bit distribution format may be sufficient for most uses, but any image processing that takes place on the original files before you apply the “look” that you want should definitely be processed in at least a 10-bit color space to preserve as much of your original image information as possible.
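As a rough illustration (my own, assuming the Rec. 709 10-bit legal range of 64-940 and the same simple linear scaling as above), a 10-bit target has enough room that no two 8-bit full-range levels merge:

```python
# Sketch: scaling full-range 8-bit values (0-255) into the 10-bit legal
# video range (64-940, 877 levels) keeps every input level distinct.
def full8_to_video10(v):
    """Map a full-range 8-bit value into the assumed 10-bit legal range 64-940."""
    return 64 + round(v * 876 / 255)

mapped10 = [full8_to_video10(v) for v in range(256)]

print(min(mapped10), max(mapped10))  # 64 940: full range preserved
print(len(set(mapped10)))            # 256: no two input levels merge
```

With more than three 10-bit steps per 8-bit step, the mapping is injective, so nothing from the original file is thrown away before grading.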

When editing DSLR footage in Avid, DNxHD is the recommended intermediate format. DNxHD files can be encoded in either HD 709 (16-235) or RGB (0-255) color space, but any DNxHD files encoded in RGB are converted to HD 709 upon import into Media Composer, regardless of the original output setting. Therefore any DNxHD MOV files generated elsewhere for ingest into Avid should be exported at 16-235 to match Avid’s target color space, for a lossless “Fast Import.” On the other hand, when importing DSLR footage into Avid, you should select “Computer RGB (0-255)” as the SOURCE color space in the “File Pixel to Video Mapping” options. (Rec. 709 is always the TARGET color space for DNxHD MXFs in Avid.) While importing with the 0-255 setting retains the full dynamic range, it still squeezes the entire range into the 16-235 gamut. That loss of precision matters less than viewable dynamic range for an offline edit, but if you are planning to export your Avid sequence as your master without a separate conform, you should consider using a 10-bit codec in Avid, like DNxHD 175x. That will allow you to maintain both the original dynamic range and the bit depth, at the expense of higher storage space requirements.

Once you have a re-linked timeline of high-quality 24p footage, there are still a few more steps that can be taken to clean up the footage. Dead pixels should be the first thing on the list. Dead pixels can be caused by physical debris on the sensor or lens of the camera, or by an electronic malfunction in one of the photo-receptors on the CMOS sensor. The result is the same regardless of the cause: one or more pixels locked at a static value throughout the shot. The simplest way to fix this is to cover the affected pixels with information from the surrounding area. One procedural way to do that is to duplicate the layer of footage in an AE comp, mask out a similar section nearby, and use it to cover the spot. (If you have a horizontal row of three dead pixels, mask the three pixels above them on a second layer, and then drop the top layer down one pixel to cover the spot.) In most cases the duplicated data will be totally invisible, but be sure to QC the result. If you are Twixtoring your footage to a different frame rate, fix the dead pixels before applying the rate change; otherwise the motion compensation process will cause the dead pixels to move around, making them much more difficult to remove in a procedural fashion. The next step is to look for any rolling shutter artifacts, caused by the slight difference in time between when the top and the bottom of the frame are sampled. This difference can manifest itself in a number of interesting ways, including distortion, with the top of the frame seeming to “lead” the bottom. It can also cause horizontal bands of brightness, with quick flashes of light only being recognized by part of the sensor. The Foundry has a plug-in called Rolling Shutter that can help reduce the image distortion caused by camera motion on smoother shots. The horizontal bands have to be removed manually in a VFX process if you want to get rid of them, borrowing data from the preceding or following frames if needed.
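The duplicate-and-offset trick can be sketched procedurally; this is a hypothetical code illustration of the idea, not the author’s actual AE setup:

```python
# Sketch (hypothetical): cover each dead pixel with the value of the pixel
# directly above it, mimicking the duplicated-and-offset-layer trick.
# Assumes no dead pixel sits in the top row of the frame.
def patch_dead_pixels(frame, dead_coords):
    """frame: 2D list of pixel values; dead_coords: list of (row, col) tuples."""
    patched = [row[:] for row in frame]  # copy so the source frame is untouched
    for r, c in dead_coords:
        patched[r][c] = frame[r - 1][c]  # borrow the pixel one row up
    return patched

# A tiny 3x4 grayscale frame with one stuck pixel at (2, 1):
frame = [
    [10, 11, 12, 13],
    [20, 21, 22, 23],
    [30, 255, 32, 33],  # 255 is the stuck pixel
]
fixed = patch_dead_pixels(frame, [(2, 1)])
print(fixed[2][1])  # 21: the value borrowed from the row above
```

Because the patch is purely positional, it only works before any rate conversion moves the stuck pixel around, which is exactly why the dead-pixel fix has to happen before Twixtor.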
The Canon DSLRs also exhibit some moiré and other aliasing issues due to the way they sample the low-resolution video from the high-resolution sensor. The only way to really get rid of those artifacts is to selectively mask and blur the affected sections of the frame. Lastly, if you are using Twixtor, QC the output for corrupt frames caused by the interpolation engine being unable to guess the proper motion of the moment in the shot. If re-rendering with different settings doesn’t help, covering the bad frame with an original frame of footage from that moment usually solves these single-frame issues. Luckily, the most difficult sections of footage to motion compensate are usually segments where using a frame-dropping conversion instead is undetectable, since the extreme motion should hide any stutter caused by the missing frames. (This is coming from a guy who is processing a lot of handheld combat footage.) Once these steps, as well as the rest of your visual effects work, are finished, you are ready to export and color, which should be similar to most other workflows at this point.

4 thoughts on “Editing DSLR Footage in Avid”

  1. Pingback: HurlBlog Technology Guru: Mike McCarthy Part II | Hurlbut Visuals

  2. JR

    Hey Mike,

    First of all I’m sorry we didn’t get to meet at the Cineform booth this year, both David and Salah indicate you’re a great guy to meet and talk to.

    Secondly, my question. I’ve been using this Avid/Pr/AE offline/online process for the last couple of docs I’ve worked on. I do everything in Cineform, which is also why I use this process. But I’m curious as to why you export an EDL vs. the AAF Link to Audio and Video option? (Then re-link to my Cineform masters; process I outlined on my blog: http://blog.jayfriesen.com/2010/01/avid-to-premiere/)

    It appears we get the same results, but I’m curious why you do the EDL vs. the AAF…

    Incidentally, you’ll like the new MC5 😉 No more offline/online. You’ll be able to use your CF files all day long- including any First Light color data.

  3. McCarthyTech Post author

    The only reason I currently use EDLs instead of AAFs is that I have had trouble getting AAFs to import into Premiere. (Incidentally, EDLs with trailing spaces don’t work too well either.) I am working on improving that aspect of the workflow with AAFs, but I haven’t quite nailed all of the details yet. It’s good to know that it is working for someone. I still haven’t figured out why my complex sequences fail to import. And yes, I am looking forward to MC5 with AMA support for both DSLR clips and Cineform MOVs.

  4. JR

    For what it’s worth, my sequences are pretty basic. Because I know I can tweak the edit if I need to in Pr- but all my transitions and FX are done in AE. I’m also on Mac.

    I’m playing around right now with a few different MC5/Adobe workflows. Haven’t had a chance to test in full yet but hopefully in the next few weeks.

    Thanks for your posts on the DSLR workflow. Good to know someone’s going into depth on it.

