AMD is a leader in innovation, enabling content creators to achieve their goals with high-performance computing and visualization technologies. But did you know that Fox Feature Entertainment’s Fox VFX Lab is currently using AMD Ryzen Threadripper – the most powerful consumer processor available today – to support its virtual production?
Thanks to AMD’s partnership and technologies, Fox VFX Lab is achieving real-time virtual production capabilities, fundamentally changing the way movies are made and turning production and post-production into a single iterative process for films such as The Call of the Wild, starring Harrison Ford. The approach is revolutionizing decades of filmmaking by eliminating much of the director’s guesswork, reducing risk for studios, and translating directly into greater box-office success.
We spoke with AMD Virtual Production Director James Knight in Los Angeles to explore how this advanced processing power is changing the way Fox VFX Lab operates.
Renderosity Magazine: Boxx is an awesome hardware company. How did they help with getting the team set up?
James Knight: Fox VFX Lab used customized BOXX APEXX T4 workstations equipped with AMD Ryzen Threadripper processors for its artist workstations, as the lighting calculations in particular run much faster on a machine with that many cores. After initially using the EPYC processor-based servers for offline lighting calculations, Fox VFX Lab began employing the servers for extensive light art, photogrammetry, and construction setup.
BOXX has been an excellent partner, service provider, and technical resource, providing white-glove treatment. Whenever we request support, the BOXX team responds immediately, and it’s those seemingly little things that make all the difference. BOXX also offers a wide variety of custom, overclocked workstations and dedicated rendering solutions for maximum efficiency and ROI with 3D Studio Max, Cinema 4D, and a plethora of other options.
AMD Threadripper CPUs have gotten a lot of good press for their performance. Did you or your team tweak them at all during production (overclock, etc.)?
James Knight: Fox VFX Lab used both stock and overclocked processors for production. The studio outfitted 80 workstations with Threadripper processors in a combination of 16- and 32-core configurations. These processors handled all production tasks and decreased the time required for rendering.
Today, digital content is being created and packaged in new ways that require heavier computing power, and the Interoperable Master Format (IMF) demands just that. Ryzen Threadripper has been able to shorten rendering times by delivering high FPS, allowing content creators to put time back into their projects.
What was the graphics card configuration for the Boxx computers?
James Knight: The team used a single-box, rack-mounted configuration to power the customized BOXX APEXX workstations. While almost none of these machines serves a single purpose, we selected parts we knew would work well with the AMD Ryzen Threadripper.
What was the order of work (briefly) of a typical day of production?
James Knight: Virtual production is a growing practice within feature filmmaking and television. It’s incredibly cost-effective – given that on many sets employees are paid hourly – and helps creators plan and streamline the production process before they even show up for work on any given day.
On a typical day of production, virtual production helps eliminate guesswork by allowing creators to visualize in real time and see near-final representations of how characters will appear in the final frame, or how a final CG set extension may look in frame. In many cases now, using game engines, you can have final-quality rendering, almost cutting out post-production completely and leaving just color correction and editing.
And think about the implications for audiences. Real-time virtual production leads to much more compelling movie and television content creation if content creators embrace it. You can follow around a CG character, as if it really existed, drawing the audience in – as the camera is the audience’s only point of view.
If a content creator can see a final character or set extension composition, or even a representation of final, in the view finder on any given day of filming, the viewer gets the real benefit. If directors get to see CG in real time, then they can shoot scenes as if these representations are right in front of them, leading to better camera work and a much more immersive experience.
What do you envision in the future for previz given the great performance of the Boxx/AMD computers?
James Knight: Motion capture is pivotal to making scenes more believable, automatically capturing the proper weight and the way an actor moves. In a similar way, the virtual camera allows the director to shoot a piece of CG content as if it really existed. AMD’s technology is increasing the speed of production as we make the use of digital assets much easier across all three pillars of content creation, making pre-production, production, and post-production a single iterative process.
In fact, with AMD’s technology, Fox VFX Lab cut processing and rendering time by upwards of 30 percent compared to its previous workflow. Overall, the technology is eliminating much of directors’ guesswork, removing huge chunks of risk for studios, and translating directly into greater box-office success. In the future, we envision efficiency continuing to increase, with content creators getting closer and closer to final resolution.
This technology is not only increasing efficiency but also allowing creatives to make more mistakes, in a shorter amount of time. Because we’re able to render so quickly, we’re now more empowered than ever to be creative, make mistakes, and be human.
Last question: I was intrigued by the mention of using the EPYC servers for “neural network training” – can you elaborate on that?
James Knight: Neural network training uses a pre-set collection of algorithms to detect patterns in input data. Those patterns are then translated into usable data, be that sound, imagery, or text. Doing so is not an easy feat, however – it requires powerful systems and servers to handle the input load. Our EPYC servers are used to train a neural network to predict an outcome based on input data. By stacking our EPYC servers, we’re able to create a deep learning environment in which each server provides a node for the network, allowing us to run intensive training processes.
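For readers curious what “training to predict an outcome based on input data” means in practice, here is a deliberately tiny sketch – a single learnable weight fit to example data by gradient descent. This is purely illustrative and is not Fox VFX Lab’s actual pipeline; real workloads of this kind run on frameworks such as PyTorch or TensorFlow across many-core servers.

```python
# Toy illustration of neural network training: a single "neuron"
# (one weight, no bias) learns the pattern y = 2x from examples.
# Each pass: predict, measure the error, nudge the weight downhill.

# Training data: (input, expected output) pairs following y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the single learnable weight, starting from zero
lr = 0.05  # learning rate: how big each corrective step is

for epoch in range(200):
    for x, y in data:
        pred = w * x               # forward pass: the prediction
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # gradient descent update

print(round(w, 3))  # w converges toward 2.0, recovering the pattern
```

Scaling this idea up – millions of weights, layered into a network, fed far more data – is what makes the process computationally heavy enough to need dedicated server hardware.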