NVidia GTC Fall 2020

I usually attend GTC in San Jose during the spring, but that was interrupted this year by Covid-19.  Instead, I watched a couple of sessions online, which was most decidedly not a replacement for the real thing.  This fall, NVidia would normally have been putting on GTC Europe, but since the event was online anyway, it was made a global one.  As a global event, the sessions are scheduled at all hours, depending on where in the world the presenters or target audience are.  (Tagline: “Innovation never sleeps.”)  Fortunately, the sessions that were scheduled at 2 or 3 AM were recorded, so I could watch them at more convenient times, albeit without being able to ask questions.

While the “G” in GTC stands for graphics, much of the conference is focused on AI and supercomputing, with a strong emphasis on healthcare applications, which I suppose makes sense when you consider why the event is being held online.  Ray tracing, the signature feature of NVidia’s recent RTX hardware, is graphics focused, but it is limited to true 3D rendering applications and can’t easily be applied to other forms of image processing.  Since I don’t do much 3D animation work, those topics are less relevant to me.  The emerging application of graphics technology that I am most interested in at the moment, and the focus of most of the sessions I attended, is virtual production, or more precisely, in-camera VFX.  This first caught my attention at a previous GTC, in sessions about the workflow used on the show The Mandalorian.  I was intrigued by it at the time, and my interest in those workflow possibilities only increased when one of my primary employers expressed an interest in moving toward that type of production.

Cine Tracer

There were a number of sessions at this GTC that touched on virtual production and related topics.  I learned about developments in Epic’s Unreal Engine, which seems to be the most popular starting point due to its image quality and performance.  There were sessions on applications that build on that foundation to add the functionality that various productions need, and on the software-composable infrastructure those applications can run on.  I heard from a director who is just getting started on a feature film set in a game-engine world built in Unreal.  My favorite session was a well-designed presentation by Matt Workman, a DP demonstrating his previz application Cine Tracer, which runs on Unreal Engine; in the process, he walked through the main steps of an entire virtual production workflow.

There are a number of complex components that have to come together seamlessly for in-camera VFX, each presenting its own challenges.  First, you have to have a 3D world to operate in, possibly with pre-animated actions occurring in the background.  Then you need a camera tracking system to sync your view with the 3D world, which is the basis for simpler virtual production workflows.  To incorporate real-world elements, your virtual camera has to be synced with a physical camera to record real objects or people, and you either comp in the virtual background in post or, for true in-camera VFX, display it on an LED wall behind the subjects.  That requires powerful graphics systems to drive imagery on those displays, compensating for their locations and angles, so that the 3D world is rendered onto them from the tracked camera’s perspective.  Lastly, you have to be able to view and record the camera output, presumably along with a clean background plate, to further refine the result in post.  Each of these steps has a learning curve, making for a very complex operation before all is said and done.  My big takeaway from my sessions at the conference is that I need to start familiarizing myself with Unreal Engine.  And Matt Workman’s Cine Tracer application on Steam might be a good way to start learning the fundamentals of those first few steps, if you aren’t familiar with working in 3D.
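To make that rendering step a little more concrete: what distinguishes an LED-wall render from an ordinary game camera is that the projection has to be computed off-axis, relative to a fixed physical screen, from wherever the tracked camera happens to be at that moment.  Unreal handles this for you in its nDisplay system, but the underlying math is the standard generalized perspective projection.  Below is a minimal numpy sketch of that math, not anything from Unreal itself; the wall dimensions and camera position are hypothetical, and the matching view matrix (rotating into the screen’s basis and translating by the eye position) is omitted for brevity.

```python
# A minimal sketch (not production code) of the off-axis projection math that
# lets a renderer draw a 3D scene onto a fixed LED panel correctly from the
# tracked camera's point of view.  Follows the standard generalized
# perspective projection, producing a glFrustum-style matrix.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Projection matrix for a screen with corners pa (lower-left),
    pb (lower-right), pc (upper-left), viewed from eye position pe."""
    vr = normalize(pb - pa)                  # screen right axis
    vu = normalize(pc - pa)                  # screen up axis
    vn = normalize(np.cross(vr, vu))         # screen normal, toward the eye
    va, vb, vc = pa - pe, pb - pe, pc - pe   # eye -> corner vectors
    d = -np.dot(va, vn)                      # eye-to-screen distance
    # Frustum extents, scaled from the screen plane back to the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # Standard asymmetric-frustum (glFrustum-style) projection matrix.
    return np.array([
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,           -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,           -1,                      0],
    ])

# Hypothetical example: a 4 m x 2 m wall, camera tracked 3 m back
# and 0.5 m to the right of center.
pa = np.array([-2.0, 0.0, 0.0])   # lower-left corner (metres)
pb = np.array([ 2.0, 0.0, 0.0])   # lower-right corner
pc = np.array([-2.0, 2.0, 0.0])   # upper-left corner
pe = np.array([ 0.5, 1.0, 3.0])   # tracked camera position
print(off_axis_projection(pa, pb, pc, pe, near=0.1, far=100.0))
```

As the tracked camera moves, the frustum extents shift with it, which is why the wall’s content shows correct parallax only from the camera’s point of view, and why tracking latency is so noticeable on set.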

Separately, a number of sessions touched on Lenovo’s upcoming P620 workstation, based on AMD’s Threadripper Pro architecture.  This makes sense, as that will be the only workstation platform in the immediate future that can take advantage of the PCIe 4.0 bus speeds of NVidia’s new Ampere GPUs, for higher-bandwidth communication with the host system.  I am hoping to do a full review of one of those systems in the near future.
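The bandwidth difference is easy to quantify, at least on paper: PCIe 4.0 doubles the per-lane transfer rate while keeping the same 128b/130b encoding as PCIe 3.0, so an x16 slot goes from roughly 15.8 GB/s to roughly 31.5 GB/s in each direction.  A quick back-of-the-envelope calculation (theoretical maximums only; real-world throughput will be lower):

```python
# Back-of-the-envelope check (not a benchmark) of why PCIe 4.0 matters for
# host-to-GPU transfers.  Both generations use 128b/130b encoding; only the
# raw transfer rate per lane doubles.
GT_PER_S = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}   # giga-transfers/s per lane
ENCODING = 128 / 130                              # usable payload fraction
LANES = 16                                        # typical GPU slot width

for gen, gt in GT_PER_S.items():
    gb_per_s = gt * ENCODING * LANES / 8          # bits -> bytes
    print(f"{gen} x{LANES}: ~{gb_per_s:.1f} GB/s theoretical, each direction")
# PCIe 3.0 x16: ~15.8 GB/s; PCIe 4.0 x16: ~31.5 GB/s
```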

I also attended a session on using GPUs to accelerate various graphics tasks for sports broadcasting, including stitching 360 video at 8K and creating live volumetric renders of athletes from arrays of cameras.  As someone who rarely watches broadcast television, all the more so now that I am rarely in restaurants that show sports on TV, I have been surprised to see how far live graphics have come, with various XR effects and AI-assisted on-screen annotations and labels that cue viewers to certain details on the fly.  The GPU power is certainly available; it has just taken a while for the software to be streamlined to utilize it effectively.
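The 360 stitching work in particular rests on a simple geometric mapping: every pixel of the equirectangular output corresponds to a direction on a sphere, which is projected back into whichever source camera sees that direction, and the heavy GPU lifting is doing that sampling and blending at 8K in real time.  The sketch below shows only that geometric core, under deliberately simplified assumptions of my own (an ideal pinhole camera, identity orientation, no lens distortion or blending), none of which reflect the specific pipeline shown in the session.

```python
# A simplified sketch of the core mapping behind stitching camera views into
# an 8K 360 panorama: each output pixel maps to a direction on the sphere,
# which is then projected into a source camera to find the pixel to sample.
# Ideal pinhole model only; real stitchers handle fisheye lenses, calibration,
# and seam blending on top of this.
import numpy as np

W, H = 7680, 3840          # 8K equirectangular output

def pixel_to_direction(x, y):
    """Map an output pixel to a unit direction vector on the sphere."""
    lon = (x / W) * 2 * np.pi - np.pi      # -pi (left) .. +pi (right)
    lat = np.pi / 2 - (y / H) * np.pi      # +pi/2 (top) .. -pi/2 (bottom)
    return np.array([
        np.cos(lat) * np.sin(lon),         # x: right
        np.sin(lat),                       # y: up
        np.cos(lat) * np.cos(lon),         # z: forward
    ])

def project_into_camera(d, R, fx, fy, cx, cy):
    """Project a world direction into one source camera (pinhole model).
    R is the camera's world-to-camera rotation; returns pixel coordinates,
    or None if the direction is behind the camera."""
    dc = R @ d
    if dc[2] <= 0:
        return None
    # Image y grows downward while world y grows upward, hence the minus.
    return (fx * dc[0] / dc[2] + cx, cy - fy * dc[1] / dc[2])

# Hypothetical example: forward-facing camera, 1920x1080 sensor.
R = np.eye(3)
print(project_into_camera(pixel_to_direction(W // 2, H // 2),
                          R, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0))
```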
