GTC 2020

Mike McCarthy   March 31, 2020

NVidia puts on a series of conferences every year focused on new developments in GPU-based computing.  Originally these were about graphics and visualization, which were the most advanced things being done with GPUs, but now they showcase everything from supercomputing and AI to self-driving cars and VR.  The first GTC I attended was in 2016, when NVidia announced its Pascal architecture.  Even then, the announcement was targeted at supercomputing users, but there was still lots of graphics-based content to explore, especially around VR.  Over time, the focus has shifted from visual applications toward AI applications that aren't necessarily graphics-based at all; they just have parallel computing requirements similar to graphics processing, which makes them ideal tasks to accelerate on GPU hardware.  This has made GTC less relevant to visual artists than it used to be, and more relevant to programmers and similar users.  But the hardware developments that enable those capabilities also accelerate the more traditional graphics workflows, and new ways of utilizing that power are constantly being developed.  So I was looking forward to going to GTC to hear the details of what was expected to be an announcement of NVidia's next generation of hardware architecture, and to see all of the other presentations about how people have been using current GPU technology.

And then came the coronavirus.  I had been following its progression for the previous few months, and hadn't been too concerned during a cross-country trip in late February.  But as the days went on, the idea of attending an event with tens of thousands of people from all over the world grew less appealing, though I didn't want to miss out on things either.  So in a certain way, I was relieved when I heard that NVidia was turning the event into a digital one.  I would be as involved in the online keynote as anyone else, and would get to participate in some of the other online events that normally would have followed it.  I was disappointed that I wouldn't be able to explore and talk with vendors in the exhibit hall, since that will never be the same online, but that is the sacrifice we make in this format.

But then NVidia canceled the online keynote and announced that it would release the information as a press release instead.  Eventually it deferred most of its big product announcements until they would no longer be competing with the coronavirus for media attention, which is understandable.  That left GTC as a selection of talks and seminars that were remotely recorded and hosted as videos to watch.  On the plus side, these are available to anyone who registers for the free online version of GTC, instead of paying the hundreds of dollars it would cost to attend in person.  But very few of them are Media & Entertainment related, and of those, most focus on 3D animation, which is not my area of expertise.  I watched a few, as I would have done in person, but only one really stood out to me: "Creating In-Camera VFX with Real-Time Workflows" (S22160).

One could say it is basically a commercial for Unreal Engine, but what they were doing, specifically for The Mandalorian, was amazing.  The basic premise is to replace green-screen composites with VFX imagery projected behind the elements being photographed.  This was done years ago for the exteriors of in-car scenes, using flat pre-recorded footage, but the technology has progressed dramatically since then.  The main advances are in motion capture, 3D rendering, and LED walls.  From the physical standpoint, LED video walls have far greater brightness, so they not only match the lit foreground subjects, they can actually light those subjects, producing accurate shadows and reflections without post compositing.  And if that background imagery can be generated in real time, instead of played back from recordings or renders, it can respond to the movement of the camera as well.  That is where Unreal comes in: a 3D game rendering engine re-purposed to generate images corrected for the camera's perspective, projected on the background.  This allows live-action actors to be recorded in complex CGI environments as if they were real locations.  Actors can see the CGI elements they are interacting with, and the crew can see it all working together in real time, without having to imagine "how it is going to look after VFX."  We looked at using this technology for the last film I worked on, and it wasn't quite there yet at the scale we needed, so we used green screens, but it looks like it has arrived.  And NVidia should be happy, because it takes a lot more GPU power to render the whole environment in real time than it takes to render just what the camera sees after filming.  But the power is clearly available, and even more is coming.
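
For the technically curious, the core trick behind that perspective correction is an off-axis (asymmetric-frustum) projection: given the tracked camera position and the corners of the wall, you build a view frustum whose near plane lines up with the LED surface, so the rendered background looks correct from the camera's point of view.  Here is a minimal sketch of that math, assuming a flat rectangular wall.  It follows Robert Kooima's well-known "Generalized Perspective Projection" formulation rather than Unreal's actual implementation (in production this is handled by tooling like Unreal's nDisplay, and real walls are often curved), and the function name and the numbers in the example are just illustrative.

```python
# Sketch: off-axis projection for a flat LED wall and a tracked camera.
# Follows Kooima's "Generalized Perspective Projection"; illustrative only.
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near, far):
    """Projection matrix for a screen with corners pa (lower-left),
    pb (lower-right), pc (upper-left), seen from tracked position eye."""
    # Orthonormal basis of the screen plane.
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # normal, toward the eye

    # Vectors from the eye to the screen corners.
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)  # distance from the eye to the screen plane

    # Frustum extents on the near plane (asymmetric when off-center).
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard OpenGL-style asymmetric frustum matrix.
    P = np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,         -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,         -1.0,                    0.0],
    ])
    # Rotate world into screen-aligned space, then move the eye to the origin.
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -eye
    return P @ M @ T

# Example: a 4 m x 2.25 m wall, camera 3 m back and 0.5 m left of center.
proj = off_axis_projection(np.array([-2.0, 0.0,  0.0]),
                           np.array([ 2.0, 0.0,  0.0]),
                           np.array([-2.0, 2.25, 0.0]),
                           eye=np.array([-0.5, 1.5, 3.0]),
                           near=0.1, far=100.0)
```

Recomputing this matrix every frame from the camera tracking data is what lets the wall's imagery shift in parallax as the camera moves, which is exactly what flat pre-recorded plates could never do.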

While nothing has officially been announced, something is always coming, and past patterns would indicate sooner rather than later.  The current Turing generation of GPUs, which has been available for over 18 months, brought dedicated RT cores for real-time ray tracing.  The coming generation is expected to scale up the number of CUDA cores and the amount of memory by using smaller transistors than Turing's 12nm process.  That should offer more processing power for less money, which is always a welcome development.
