Evolving USD Workflows: Powering The Last of Us Season 2 | RISE | FMX HIVE 2025

Houdini · 7,832 words

Full Transcript

[Music] Hey everyone. First of all, thanks to Side Effects and FMX for having us, and welcome to our presentation, Evolving USD Workflows: Powering The Last of Us Season 2. My name is Andreas Gizen, I'm working as a VFX supervisor for Rise Visual Effects Studios, and I was formerly head of FX at Rise.

Today we would like to give you an overview of the latest developments in our USD pipeline at Rise and how it became the foundation for delivering a show like The Last of Us season 2. At Rise, we've been using a start-to-end USD-based workflow for about four years now. The first full project built on the USD pipeline was The Last Voyage of the Demeter, back in 2021. Since then, the pipeline has grown significantly, something I already touched on last year here at FMX during my Fallout talk. By the time we worked on Fallout in 2023, the system had become much more stable, modular, and also production proven. This solid foundation gave us the freedom in 2024 to really build on top of it, and for The Last of Us we focused not only on stability, but also on new features, automation, and better tools for the artists. Some of those improvements are exactly what we will be showing you today.

But first, let's get started with a few quick facts about The Last of Us at Rise. While we only did about 40 shots for the first season of The Last of Us, we delivered almost 250 shots for the second season. Those shots were spread across episodes 1, 2, 3, 5, 6, and 7, the last one. Luckily, more than half of our shots are actually in episodes one and two; otherwise there wouldn't be much to talk about today, because the other episodes haven't been released yet. So, unfortunately, I cannot show you the work we did for the last episode, but we have these other 136 shots to look at. To give you an overview, let's check out our Last of Us episode 1 and 2 showreel.

[Music] [Applause] Thank you.

So, we started working on The Last of Us season 2 at Rise in August last year and delivered the last shot just about three weeks ago. So the show was in production with us for about nine months, which is quite a stretch for a TV series. Over the course of the project, around 100 artists contributed to the work, with about 40 people active at a time. As on the last season, we worked with Alex Wang, VFX supervisor, and Fiona Campbell Westgate, VFX producer, on the production side, which was again a very creative and trustful partnership.

To get into the mood, let's join Ellie for some shooting training and blow away some infected. While the shot here on the left we could tackle completely in compositing by adding blood and gore elements, the leg shot on the right was a bit more challenging. To make sure that we see how bigger parts of the leg are blown away, we did an effects simulation in Houdini. The production provided us with a LiDAR scan of Kelsey, the infected, and our modeling department created a multi-layered leg including skin, fat, tissue, and bones. By creating all these layers, we could make sure there was enough to play with for our FX TDs. The next step was the fracturing of those layers and setting up different material properties and constraints. The actual simulation was done with the Vellum solver.
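To illustrate the idea of giving each anatomical layer its own simulation properties, here is a minimal Python SOP sketch. This is not the production setup: the group names, attribute names, and values are assumptions, chosen only to show how per-layer attributes could be tagged for a downstream solver to pick up.

```python
# A generic sketch (assumptions only): tag the fractured leg layers with
# per-material simulation attributes inside a Python SOP, so a solver
# downstream can read different properties for skin, fat, tissue and bone.
import hou

node = hou.pwd()
geo = node.geometry()

# Hypothetical per-layer settings; the real values would come from lookdev/FX.
LAYER_SETTINGS = {
    "skin":   {"stiffness": 0.9, "density": 1000.0},
    "fat":    {"stiffness": 0.3, "density": 920.0},
    "tissue": {"stiffness": 0.5, "density": 1050.0},
    "bone":   {"stiffness": 1.0, "density": 1900.0},
}

stiffness = geo.addAttrib(hou.attribType.Point, "stiffness", 0.0)
density = geo.addAttrib(hou.attribType.Point, "density", 0.0)

for layer, settings in LAYER_SETTINGS.items():
    group = geo.findPointGroup(layer)  # groups assumed to come from the fracturing setup
    if group is None:
        continue
    for point in group.points():
        point.setAttribValue(stiffness, settings["stiffness"])
        point.setAttribValue(density, settings["density"])
```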
The Vellum solver is quite fast and efficient, especially when combining different materials. Once the base sim was in place, two blood layers were added: a dense one, which interacts nicely with the ground, and another fine spray and mist layer. For the dense blood layer we used the FLIP solver, and for the mist a particle simulation.

The first environment we're going to take a look at is Hoback. In the first episode, Ellie and Dina arrive in this small village while out on patrol with a few others from Jackson. The sequence was shot in Canada near Kamloops, in an abandoned area called Pedova City. I really like this approach of using real-world locations as a foundation rather than jumping straight to shooting on a blue screen. From my experience, you can usually tell whether something is not grounded in reality, even if it's just small details giving it away. That said, we still had to do quite a bit of enhancement work to make Hoback feel right; otherwise, it would have been a bit boring for us, too.

So, let's start with the mountain environment we created. Since we knew we would need snowy mountains across multiple sequences under different lighting and camera conditions, we decided to build a procedural mountain setup in Houdini. The client provided us with extensive reference material from the shoot in Canada, which was incredibly helpful for locations like Hoback and Jackson. We used real-world mountain geometry derived from satellite data as our starting point. Using the height field tools in Houdini, we added more and more detail with every layer. One of the key components was the erosion simulation, which mimics the natural effects of water and temperature over time, like snowmelt carving paths or creating sharp peaks. These hydro and thermal erosion models bring a level of realism that would have been extremely difficult to achieve manually, especially at this scale. In addition to the erosion, we had full control over the distribution of snow, ice, and rock, which allowed us to fine-tune the terrain based on shot-specific reference. Additional masks, all based on the height field data, were used for the scattering of the trees, to make sure that the tree line corresponds with the snow features and that the lower trees have less snow on top. Thanks to the efficiency of Houdini's volume-based height fields, we could generate entire mountain ranges quickly once the core setup was built. Using our custom scattering toolset in Houdini, we had easy access to different variants of a single tree, to activate wind and add color variation, while the memory footprint stays small. To ensure no performance impact for our lighters, we switch all heavy instances by default to a bounding-box mode: the lighters will see the element in the USD hierarchy, but it won't slow down their scene and only becomes visible when rendering.

But snow wasn't only needed on the mountains, but also in Hoback itself. The art department dressed the central area with snow and icicles, but many of the background buildings and the surrounding mountains didn't get that treatment. To handle this relatively straightforward task, we built procedural setups in Houdini that allowed us to distribute the snow naturally across the rooftops and also to place icicles along roof edges and ledges. This gave us a lot of flexibility in the end and helped ensure consistency across the environment, even in the wide shots.
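For the procedural mountain setup described above, here is a minimal sketch of how such a height-field chain can be wired together from Python. The SOP type names are the stock Houdini ones; everything else (node selection, layering, parameter values) is an assumption rather than RISE's actual setup, so all parameters are left at their defaults.

```python
# A minimal sketch of chaining Houdini height field SOPs from Python,
# roughly mirroring the layered approach described above:
# base terrain -> noise layers -> erosion -> feature mask -> tree scatter.
import hou

geo = hou.node("/obj").createNode("geo", "mountains")

hf = geo.createNode("heightfield", "base_terrain")            # base height field grid
noise = geo.createNode("heightfield_noise", "broad_shapes")   # large-scale shaping
noise2 = geo.createNode("heightfield_noise", "mid_detail")    # medium-frequency detail
erode = geo.createNode("heightfield_erode", "erosion")        # hydro/thermal erosion
mask = geo.createNode("heightfield_maskbyfeature", "tree_mask")  # slope/height mask
scatter = geo.createNode("heightfield_scatter", "trees")      # instance points for trees

# Wire the layers in order; every node adds detail on top of the previous one.
noise.setFirstInput(hf)
noise2.setFirstInput(noise)
erode.setFirstInput(noise2)
mask.setFirstInput(erode)
scatter.setFirstInput(mask)

scatter.setDisplayFlag(True)
geo.layoutChildren()
```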
To place our snow and icicles accurately, we started with a LiDAR scan of the main buildings, which was provided by the client. For additional background structures, we created proxy geometry to fill in the gaps. For the icicles, we scattered points along the roof edges and copied simple line primitives onto them. These were then shaped using the guide process to create more natural-looking icicle forms. To get the clustered structure we saw in the reference material, we merged multiple instances using VDB nodes, which gave us that organic, frozen feeling. For the snow, we built a wedge-based system so that all roofs could be calculated in parallel on the render farm.

One of the most exciting tasks for us in this sequence was a clicker head replacement. In the very first shot, we see a dead clicker lying in the snow, ripped apart by a bear. On set they used a practical prop for both the clicker and also the bear, here on the right, but Craig Mazin, the director and showrunner, wasn't quite happy with how it looked on camera. So the plan became to enhance the bear and fully replace the clicker in CG. Naturally, we were also pretty excited to get our hands on a Last of Us clicker, even if it was just a dead one to start with. As a starting point, we received a base clicker asset from another vendor, which was already quite detailed. However, to make it work for this specific shot, we had to rework and customize it significantly, keeping the iconic clicker design but adding realistic wound detail, blood, and ice layers on top. We started by modifying the base mesh of the clicker and adding anatomical details in ZBrush that were then exported as displacement maps. For texturing, we used Mari, building the asset with a mask-based, layered workflow instead of baking everything into one single texture. This allowed us to keep things modular and flexible. This kind of layered mask setup is perfect for building a complex shader, giving us a lot of control in lookdev and the ability to tweak things quickly without going back into texturing. On the shading side, we used MaterialX in Houdini, rendered with Karma.

At Rise, our pipeline is heavily based around USD and Houdini, so most of the layout, shading, lighting, and rendering happens directly in Solaris. To stay flexible during lookdev, we made sure that all passes and masks were tweakable in compositing. This helped us avoid unnecessary rerendering and made client feedback much easier to address. As a final touch, we added a grooming layer with a subtle fine fluff mixed with some icy snowflakes to bring out an extra level of surface detail.

For this asset and the effects I showed earlier, we used our unified template system in Houdini, which helped a lot to iterate fast, even with a small team. Since our very first USD project, The Last Voyage of the Demeter, we started to create templates that define a base for our setups. This became necessary especially as the default boilerplate Solaris node graph can get quite complex. Our templates are database-registered, versioned hip files; they are part of the backbone of our Houdini pipeline and serve several different purposes. Starting-point templates were the first use case for them: the artist gets the boilerplate setup for their required task. Each department has its own default template, plus possibly more specific ones too.
The goal of the template workflow has always been to make sure the artists can fully concentrate on their actual setup while maintaining a common workflow structure, and to reduce the time wasted on importing and exporting data, as well as on repetitively setting up GL previews, shot-cam object-level imports, and so on. Templates can, for example, include turntable or shot preview render setups for the lookdev department, rendering and submission setups for the lighting department, or a default import and USD export setup for assets, animations, and cameras for the FX department.

Once we started building more specific templates, like water simulations or fire setups, which include ROP-based dependency networks, we figured out that we can have them triggered automatically for different contexts and execute the dependency network within. For example, on The Last of Us, for the fire sequence we used a geo-light-from-volume template to create mesh representations from volumes to be used as render-efficient proxy geo lights. We can trigger them directly from the template scene inside of Houdini, or from outside with our action system Rise Flow, which we will talk about later in this presentation.

Quickly after realizing the power of templates, it was only a matter of time until we started using them to create configurable, automatic quality checks based on our USD pipeline, where the QCs include all rendered layers that build up the scene composition in its current version. With the QCs, we are able to visually review the export, approve the connected USD layers, and then push them into the shot or asset so that the other departments can work with them. In recent months, we created automatic template-based QCs for modeling, texturing (even based on texture layers from Mari), surfacing, animation, and layout, and we are working on more to come. Specifically for The Last of Us, we created a making-of template that can trigger a clay-shaded asset rendering with a couple of clicks, directly from our database interface, Rise Base.

Especially in tight project time frames, we often have to deal with small mistakes that arise in a complex pipeline with multiple people working on the complete scene composition, for example naming issues or hierarchy mismatches. To address these issues, we use our template workflow to define validation templates. These validation templates can be used to identify downstream issues, by using a SOP Error node in the validation template. For example, this quick OpenGL QC shows a watertight validation check that can be used for simulation-ready assets. The left two categories are also used in our onboarding and training assignment, which automatically creates production-like multi-shot and asset layer setups for new starters at Rise to test and train with. These four categories are currently our main use cases for templates, but we are sure that we will find more workflows over time that will include the template approach too.

Our templates are based on four main components: the USD import, the control node, the dependency network, and the USD export. The most important thing for any artist is to get the correct incoming shot or asset data into the work file. The USD pipeline simplifies this a lot.
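As a rough illustration of the kind of composition such a shot import builds on, here is a small pxr.Usd sketch: department layers are sublayered into a shot layer in strength order, and a department's own previous layer can be muted, similar to the layer muting described in the following sections. All file paths and the exact layer order shown are hypothetical.

```python
# A small pxr.Usd sketch (not RISE's actual Load Shot node): compose a shot
# stage from department sublayers, strongest first, then mute the FX layer so
# a fresh FX export does not fight the existing one. Paths are hypothetical.
from pxr import Usd, Sdf

shot_layer = Sdf.Layer.CreateAnonymous("shot_lou_101_0420")
shot_layer.subLayerPaths = [
    "/shots/lou_101_0420/lighting/lighting_v012.usda",
    "/shots/lou_101_0420/fx/fx_v007.usda",
    "/shots/lou_101_0420/anim/anim_v021.usda",
    "/shots/lou_101_0420/layout/layout_v004.usda",
]

stage = Usd.Stage.Open(shot_layer)

# An FX artist preparing a new export would mute their existing FX layer, so
# only layout/anim remain underneath and the new opinions sit cleanly on top.
stage.MuteLayer("/shots/lou_101_0420/fx/fx_v007.usda")
print(stage.GetMutedLayers())
```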
The Rise Load Shot node sublayers all shot layers into the stage, so it's easy for the artist to work with that data, import parts into SOPs for processing and simulation, or directly render the scene. For versioning, we use our custom implementation of a USD asset resolver, which enables us to introduce concepts like model packages that connect the model with the rig and animation, as well as letting the user pin and load specific versions of our layers. This can be used to preload work-in-progress layers or go back to an older version for comparison. Especially this pinning feature proved to be essential for our USD push pipeline.

To gain easy control over templates and work files, we created a Rise control node, which behaves as a single node that collects all the important parameters of the template. Its interface can be compared to the interface of an HDA, but it contains all the control parameters of the complete hip file. So we basically treat our hip file as a complete multi-context HDA.

Dependency networks are great for mainly two reasons. First, they can be used to create an executable work file with dependency caching or rendering evaluation: by submitting the last node of a dependency network, all required caches or renderings of that work file will be submitted, waiting for input caches to finish before running. Secondly, the structure of a dependency network helps to understand the structure of a work or template file, especially if the setup is fairly complex. This is important for executable templates.

Lastly, the export of newly created USD caches, like procedural assets, asset modifications, or effects simulations, is an important part of the template that brings the created caches into the USD stage. The Rise Export Shot Layer node creates a new USD layer, including all prim creations and modifications as well as layer references or other USD compositions, between the Load Shot (or Load Layer) node and itself, based on the connected Export Shot Layer node. The layer muting is applied on the shot loader so as not to mess up the layer ordering. The part below the Export Shot Layer node can then be used to preview the changes including all upstream layers, for example lighting, to preview the FX caches with.

At Rise we created a flexible structure for all templates. All department-based templates are defined in our library and are distinguished between shot and asset templates. Depending on the department, they are maintained by the corresponding head of department and pipeline. Every department can have multiple types of templates that are more generic or more specialized. On project level, the list of templates can be extended by more specific project-based templates; per project, library templates can also be overwritten, for example if the default model or surface QC does not meet the specific requirements of the project. On the project level, the templates are maintained by supervisors, leads, and TDs. Templates can be published, so only finished templates are visible to artists. This creates a very flexible and open system that users have great control over.

Now, the bear. In several shots the bear puppet was either fully or partially replaced, especially around the wound area, where the practical version looked too dry and also lacked detail. This became a fun Houdini grooming challenge, where we had to find the right balance between the clumped, curly, and natural hair flow.
So the groom could blend back seamlessly into the practical bear. Using Houdini's interactive grooming toolset, we were able to iterate quickly and explore various looks efficiently. And thanks to our USD pipeline, the distribution of the groom to the individual shots is quite straightforward, using principal shots and the multi-shot workflow, as Yonas will explain now.

Principal shots are master scenes that can control all shots connected to them from a single work file. Usually the principal shots represent sequences, but they can also contain shots from multiple sequences that are semantically connected. We can have different principal shots for different departments, like layout, lighting, and effects. Switching to a specific shot will update the shot context option and trigger a refresh of our Load Shot node, which will load the shot-relevant layers. Our workflow includes upstream layer muting: a lighting artist will receive the FX, layout, and animation layers; for an FX artist, the Load Shot will only bring in the layout and animation, so the export of the FX layer will not interfere with the layer order. The artist will still be able to see the latest shot lighting behind the export node that sublayers the upstream layers for rendering. This USD-based push pipeline ensures faster cross-department collaboration, fewer errors, more time spent on creative and lookdev tasks, and less on technical back-and-forth or repetitive manual tasks. Using a shot switch, the artist can apply different shot-specific overrides, like loading different caches, transforms, or offset prims.

Shot filter networks are our way to provide an artist-friendly connection to database entities like shots, assets, tags, or to-dos. Inside the principal shot, the shot filter loader node creates a Rise shot-info prim for every shot connected to the principal. Every prim inherits the database info, like start and end frame, tags, and to-dos, as primitive attributes. Using USD collections, our artists can create groups of shots or filter some shots out. These collections can be used on export nodes, shot switches, or render submissions. This way we can work very procedurally; as an example here, we group the shots that have a specific tag.

Transferring the idea of principal shots from Solaris to SOPs, we enhanced our caching workflow by adding a multi-shot-based caching mechanism to our caching nodes. Our shot filter network mechanism works for that workflow too. Our Rise geometry ROPs have the option to write either into a specific shot or into the currently selected shot of the principal shot, using the Houdini context options. In our ROP dependency network, we can create dependency networks and activate multi-shot submission on the last node. We can exclude parts of the dependency tree via shot switches based on shot names, or dynamically using the shot filter network's collections, on submission to our render farm. On our render farm, jobs are created per selected shot, including the dependencies. Using our frame-dependent flipbook creation while caching, we can easily inspect whether all time-dependent simulations are going as planned.

So, at the beginning of the series, Joel and Ellie are living in a larger community, the fortified town of Jackson, Wyoming.
For most of the scenes inside the town, the production team built a very detailed and impressive set. But especially in episodes 1 and 2, there are several wide establishing shots that reveal the full scale of Jackson and how it sits within the valley. And that's where we came in, to extend the physical set and build the entire town of Jackson in CG, including the perimeter wall and the surrounding mountain landscape. Because these shots feature Jackson from various distances, angles, and lighting conditions, and even later with fire and smoke effects, we decided early on to create a full 3D asset build. This approach gave us maximum flexibility and allowed us to cover everything from wide aerial shots to closer, more atmospheric views.

To get started on this environment, we received a set of town buildings from another vendor. These were based on the Jackson version built for the first season. Alongside this, the client provided a rough layout concept, but the most valuable reference was the real town of Jackson in Wyoming. We gathered a large collection of images to study its layout, architecture, and how it sits within the surrounding valley. Another key element was the large perimeter wall to protect against the infected. Its exact shape and layout changed several times throughout the project, requiring us to remain flexible. And that was a core challenge here: creating a detailed, believable town layout that could still be iterated on quickly. To solve this, we developed a procedural city setup in Houdini, allowing us to make quick layout changes while maintaining full control over the look.

In order to build the city procedurally, we had two artists working in parallel. One layout artist was responsible for blocking out the initial street grid, defining the general structure of the town, and setting up the perimeter wall. At the same time, a technical director focused on developing a rule-based scattering system to populate the different districts with buildings and props according to their type and location. The layout changed quite a few times during the development of those shots, and we also improved the house scattering logic to better support the town's suburban structure. The client wanted the neighborhood areas to read more clearly, even from a distance, so we introduced small backyards, added fences, and emphasized more consistent patterns in the layout. These changes made a big difference in helping the town feel believable and organized, even at this scale.

While the layout and tools were being developed, our asset team started modeling a growing library of buildings and key landmarks. These included residential houses, commercial blocks, industrial structures, and signature buildings like the church and the water tower. To quickly create first texture passes, we made use of our procedural texturing tool in Copernicus that our asset department developed during this show. The big workflow enhancement on The Last of Us is the integration of a Copernicus workflow into our surfacing pipeline. The surfacing team built a template-based automatic texture and surface generation pipeline based on Copernicus that can be triggered by model artists directly on model publish. In Maya, the artist can set material tags for meshes in the model, which creates USD primvar attributes that can be used in Houdini to split the meshes based on material types and create different materials for them.
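A hedged pxr sketch of what such material-tag primvars enable downstream: reading a per-mesh tag and creating and binding one material per tag. The primvar name, file paths, and the bare materials here are assumptions for illustration only; the actual pipeline generates full Copernicus-baked textures and Karma materials instead of empty material prims.

```python
# Sketch: split meshes by a material-tag primvar and bind one material per tag.
# "rise:materialTag" and the paths are made up; only the USD API calls are real.
from pxr import Usd, UsdGeom, UsdShade

stage = Usd.Stage.Open("/assets/jackson_house_a/model.usda")  # hypothetical model layer
materials = {}

def material_for_tag(tag):
    # Create one (empty, illustrative) material prim per material tag.
    if tag not in materials:
        materials[tag] = UsdShade.Material.Define(stage, f"/materials/{tag}")
    return materials[tag]

for prim in stage.Traverse():
    if not prim.IsA(UsdGeom.Mesh):
        continue
    tag_var = UsdGeom.PrimvarsAPI(prim).GetPrimvar("rise:materialTag")
    if not tag_var.IsDefined() or not tag_var.HasValue():
        continue
    UsdShade.MaterialBindingAPI.Apply(prim).Bind(material_for_tag(tag_var.Get()))

stage.GetRootLayer().Export("/assets/jackson_house_a/model_with_materials.usda")
```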
In the Maya publish UI, which is based on Pyblish, we can configure the material generation as a post-publish action. This Pyblish publish workflow and UI is consistent across all our 3D pipeline software. Additionally, a surface QC can be appended and configured to preview the model layer with the procedurally generated materials. The Copernicus material template hip file includes various definitions of material configurations, with Copernicus node trees that are contained in subnets based on a generic base template HDA. The final texture creation process layers the different material types, based on the mesh material tags configured in Maya, and so creates the final textures for the asset. This workflow takes complex assets with various numbers of UDIMs into account. The baked textures are exported, including masks, and a generic Karma material loads them automatically to generate the material in the USD stage. Using the masks, it's very flexible to tweak the materials afterwards in the shader, to add color shifts and small adjustments without the need to recreate the materials as textures. Using this workflow, we can easily create detailed procedural texture and surface layers and QCs with a couple of clicks that can be triggered by the modeling department itself. It's worth mentioning that this whole procedural workflow of automatic surface creation was designed by artists using the enhancements of our new Rise Flow system, without pipeline being heavily involved in it, except for a couple of file and database caching mechanisms. The system can create the final lookdev for mid- and background assets, as well as serve as a first starting point for hero assets. Our whole COPs workflow was explained in detail by Yunas Ganza and Stefan Closer Cutter at the Houdini User Group meetup last Monday in Berlin, and the recordings will soon be available online. So make sure to check them out to get a deep dive into how our automated surface pipeline for assets works.

For the surrounding environment of Jackson, we were able to reuse the same snowy mountain setup originally developed for Hoback. However, we customized it further to closely match the reference mountains visible in drone footage provided by the client. In earlier versions, the snow featured a lot of breakup and detail, which ended up drawing too much attention away from the town of Jackson itself. To resolve this, we simplified the snow coverage by adding larger tree patches and concentrating the snow more heavily at higher elevations. A few tweaks to the height field control ramps in Houdini were enough to find the sweet spot.

Once the layout was locked in, the next challenge was bringing the city to life. That turned out to be trickier than expected, because in most of the wide shots people would only show up as a few pixels. So we approached it differently: instead of tiny characters, we focused on bigger, readable motion, adding oversized smoke stacks with heavy plumes, moving trucks, and other elements that created a sense of activity, even if the fine details couldn't be clearly seen. In the second shot, we get a little bit closer, which finally gives us a chance to show off all the details that went into the setup. Compositing did a great job of balancing all those elements and using some comp tricks to add even more detail and life to it, like pushing certain reflections.

Next, let's take a look at the sequence that sets off the chaos in episode 2.
When Abby tries to climb down a snowy ledge, she slips, starts sliding uncontrollably, and lands in an area packed with infected buried under the snow. While the view over the ledge required another set extension, the more interesting work happened during the sliding shots. Even though they shot the actress practically sliding down the slope with a safety wire, which is quite impressive, there was still the need for some enhancements. First up was a cleanup pass, removing the wires and cleaning up the ledge itself. After that, we built a basic rotomation pass for Abby's movement, which served as a base for our snow simulation. The goal was to crank up the danger, adding more snow spraying up towards the camera to make the descent feel faster and more violent, and to obscure the original sledge. To keep the setup flexible and controllable, our FX TD Valentino used Houdini's Vellum grains, a particle-based solver that gave us a lot of control. By layering different groups of grains, here visualized in different colors, we could separate out snow that sticks to Abby's body from snow that blasts towards the camera. The simulation itself was surprisingly fast, and for rendering we used Karma XPU, Houdini's GPU renderer. This was actually the first project at Rise where we leaned into GPU rendering more seriously. It's probably not quite ready yet for massive set extensions, due to limited GPU memory, but for effects and volume work like fire or smoke, it's a real game changer. The speed boost compared to CPU rendering was somewhere around a factor of 10, which is huge, especially for effects shots that often go through several iterations before getting final approval. Once the setup was working, it was straightforward to apply it to the other sliding shots using the rotomation. To tie everything together and make the shots blend seamlessly, we added some additional trees and obscured the background slightly. And with that, Abby slides into the abyss.

During episode 2, we worked on several shots that we internally call burning Jackson. The idea was that during the infected attack, fire breaks out around the town, some caused by exploding barrels, others set as defensive measures. As Joel and the others make their way towards the fateful lodge, we see Jackson from a distance, burning and engulfed in smoke. We were able to reuse the same layout and CG environment for Jackson that I showed you earlier, just with a few more mountains added so that Jackson appeared further away. But for these shots we added a new layer of effects work, including piles of burning corpses, rooftop fires, and large plumes of smoke rising over the valley. Because the visibility of the fire and smoke was an important story point, we had to push the scale and intensity a bit more than we might have otherwise; it had to read clearly even in the middle of a snowstorm and from a great distance.

To handle the fire simulations efficiently in Houdini, we used the wedge-based system, not to test different parameters but to split the scene into separate simulations for each major fire source. Simulating all fires as a single unified effect would have been too heavy and slow to iterate on. Instead, each fire was simulated individually using a dedicated batch. This allowed us to process them in parallel, drastically reducing the sim time while keeping the setup manageable.
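A tiny hython sketch of that splitting idea, with hypothetical node and parameter names (only `sopoutput` is the stock Geometry ROP parameter): the same cache ROP is executed once per fire source, each run writing to its own output path, so in production every source can become its own independent farm job.

```python
# Sketch, not the production setup: loop over fire sources and cache each one
# as a separate simulation so they can run in parallel on the farm.
import hou

FIRE_SOURCES = ["barrel_east", "barrel_west", "rooftop_01", "corpse_pile_03"]

control = hou.node("/obj/fx_fire/CONTROL")   # hypothetical controller null with a spare parm
cache_rop = hou.node("/out/cache_fire_sim")  # hypothetical Geometry ROP caching the sim

for source in FIRE_SOURCES:
    # Point the simulation at one fire source at a time (hypothetical parm).
    control.parm("fire_source").set(source)
    cache_rop.parm("sopoutput").set(
        f"$HIP/geo/fire/{source}/fire_{source}.$F4.bgeo.sc")
    # Locally this cooks one source after the other; on the farm each loop
    # iteration would instead be submitted as an independent job.
    cache_rop.render()
```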
Once the simulations were done, we generated a special fire light template to have full control over the amount of light scattered into the smoke, and to speed up the rendering. To handle the dependencies between complex effects, we developed a system called Rise Flow.

Rise Flow is our in-house system that gives artists the power to work on multiple contexts, shots and/or assets, non-destructively, and to define pipeline automation. The idea of the system is to give anyone, mainly artists and supervisors but potentially production too, the power to control pipeline steps like executing templates. As an example, take a shot containing a character with hair or fur, like the bear that we saw in the previous examples. We used to have a groom artist update the groom simulation on every animation iteration, which wasn't needed in this case, as the bear wasn't really moving much. Using the presented template approach, the groom artist can define the setup for the groom and the groom simulation once, and the setup gets triggered as a post-publish action on an animation export from Maya, including a groom QC. When the animation artist exports a new version, the grooming artist only has to step in for final tweaks or if anything goes wrong, and can concentrate on more challenging or creative tasks during the animation iterations.

While the idea of the system came from eliminating boring, repetitive tasks, using Rise Flow we can submit templates or template dependencies over multiple shots or assets, potentially automatically, and so save artist resources. Another example is a shot lighting submission that can be triggered automatically on an FX layer publish, saving the artists in the next department from waiting for simulations to finish. Those actions can be triggered manually or automatically, for example as a mandatory post-publish action such as a QC.

The core of the system is flowpipe, which is developed by our head of pipeline Paul Schweizer and is available as an open-source package via GitHub. It's a framework for flow-based programming in Python, basically node-based dependencies. As a visualization for it, we implemented a connection to the NodeGraphQt package, which is also an open-source package on GitHub. We connected flowpipe to our render farm, so nodes are translated into farm jobs. Additionally, we connected Houdini, Maya, Mari, and other applications to flowpipe, so nodes or steps can be executed using those applications. All of this is connected with our USD pipeline and the Rise Base database. Since the system is an API construct only and fairly complex, we needed an artist-friendly abstraction of it, and that's how Rise Flow was born.

This is what the Rise Flow user interface looks like. Here the user can select from various actions, like a Houdini template execution, and run them in one or multiple contexts. As an example, we are breaking this action down into its components, which are flowpipe nodes. First, we query the latest template from the database with this helper node. Then we install the work file: this creates a new work file (version one, basically, or, if a version already exists, it just versions it up) in the context, so the shot or asset, based on the template. It can also set parameters on the control node based on the user interface of the Rise Flow action.
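Below is a small flowpipe sketch in the spirit of this action breakdown: query the latest template, install a work file, then hand it to the submission step described next. flowpipe is the open-source package mentioned above, but the node bodies here are placeholders rather than RISE's implementation, and the farm submission is faked with a print.

```python
# A minimal flowpipe graph: query template -> install work file -> submit.
# Node bodies are placeholders; in production the last node would create a
# render farm job for the ROP dependency network inside the work file.
from flowpipe import Graph, Node


@Node(outputs=["template_path"])
def QueryLatestTemplate(department, template_name):
    # Placeholder for the database lookup of the latest published template.
    return {"template_path": f"/templates/{department}/{template_name}_latest.hip"}


@Node(outputs=["workfile"])
def InstallWorkfile(template_path, context):
    # Placeholder: copy/version-up the template as a new work file in the context.
    return {"workfile": f"/shots/{context}/houdini/{context}_fx_v001.hip"}


@Node(outputs=["job_id"])
def SubmitDependencyNetwork(workfile):
    # Placeholder: submit the work file's dependency network to the farm.
    print(f"submitting {workfile}")
    return {"job_id": 1234}


graph = Graph(name="execute_houdini_template")
query = QueryLatestTemplate(graph=graph, department="fx", template_name="fire_sim")
install = InstallWorkfile(graph=graph, context="lou_102_0100")
submit = SubmitDependencyNetwork(graph=graph)

query.outputs["template_path"] >> install.inputs["template_path"]
install.outputs["workfile"] >> submit.inputs["workfile"]

graph.evaluate()
print(graph)  # flowpipe can print the graph as ASCII for quick inspection
```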
After installing the work file, the next node will submit the dependency network in the file that is connected to the main submitter node. The network can include simulations, caches, renderings, or USD layer exports. Afterwards, the graph will wait until the dependency network has successfully been evaluated on the render farm. In case the execution exports any USD layers, we post-query the database entities and merge them with the incoming layer database entities. When concatenating Houdini template or work file executions, we use that layer database information to pin these layers with the USD asset resolver in the following actions. This is an integral part of the system: we don't do any actual scene data transport in our flowpipe graph, but rely on USD layer exports that can be accessed via the pinning implementation, and so load the contained layer data.

In our Pyblish publish workflow, we integrated Rise Flow as post-publish actions. These actions are executed after a successful publish, i.e. a USD layer export, and directly use the exported layers via pinning to load that version in. This is an example of how the execution actions are concatenated into a more complex graph.

Using the new recipe API introduced with Houdini 20.5, we export the parameter interface of the control node as a serialized dictionary on work file save into its corresponding database object, which exposes the parameters outside of the hip file. Based on the extracted work file parameters, we can create parameter conversions to a Qt interface, which allows us to control the hip file parameters from outside of Houdini. This is definitely limited compared to the possibilities of the original Houdini parameters and dialogs, but it does a pretty good job for most of the parameters and use cases so far. For the Copernicus workflow created on The Last of Us, we implemented, for example, color parameters with a color selector like the one inside of Houdini.

This example shows how a layout layer published from The Last of Us Jackson build can trigger a QC Rise Flow action. The Export Shot Layer node is set to work-in-progress in this example, so a layer export will still be triggered, but that layer will not directly be pushed into the shot structure. In this way we split the export and the layer approval before handing over the layers. First, the artist opens the publish UI, where it's possible to select the shots that should be published. In the post-publish action section of each publish item, the artist can add one or multiple actions that are run after the publish and that pin the new work-in-progress layer for the execution, which means the actions load their data from that layer. Multiple actions are executed one after the other and appended to the Rise Flow graph. On publish, the layers are exported first, and then the post-publish actions are submitted for execution on our render farm. It's also possible to trigger actions via Rise Base, either on the context, so the shot or asset object, or on any layer, which will then be pinned in that execution. So it's possible to preview a work-in-progress layer or asset in the final shot without actually pushing that layer. Here the layout QC was submitted with a pending preview approval, so it only rendered a couple of preview frames. The rendering can be inspected via RV.
From within RV, we can approve the preview so the rendering can continue, and also publish the upstream layers of the rendering, in this case our work-in-progress publish, directly from RV using our USD layer dependency tracking. So every following department will now get the changes directly. Every Rise Flow action always creates a hip file in the target context, like in this example for the Jackson city shot. Artists can open Houdini via the shot launcher to inspect the created hip file, so it's not a black box; in case anything goes wrong, like caching or rendering errors on the farm, the artist or shot TD can check for the reason, and it does not always need pipeline involvement to solve these issues. The artists can then version up the hip file, do manual changes, and resubmit the dependency tree or parts of it. If a general issue is found in the hip file, the main template can easily be adjusted. The Rise Flow system is integrated into multiple parts of our pipeline: we can, for example, easily work in a template file and submit the template execution into multiple contexts, like shots, from within Houdini. When opening the editor, the work file is saved and the parameter extraction from the control node is triggered. As a different example, we implemented post-publish actions for Mari: a texture artist can export their textures and automatically create a PBR surface USD layer, or material creations and assignments in Houdini, for the assets based on the exported Mari textures, without any Houdini knowledge. Having the system integrated into multiple parts of our pipeline, we can have non-Houdini artists trigger Houdini executions based on these executable templates, which creates an artist-driven, complex pipeline flow.

So, that was a look at the latest and greatest additions to our USD pipeline and some of the work we did for The Last of Us season 2. This project has been a really exciting journey, and if you want to see some more of the challenges we faced, like Joel's last moments, I will be giving another talk tomorrow at 11:15. Thanks a lot to everyone on the client side, to our Last of Us team at Rise and our pipeline team, and thank you for listening. [Music] [Applause]

>> So cool. Any questions for the guys?

>> Thanks for the presentation. I have three questions, actually. Let's start with this: now that you have XPU integrated into your rendering, how do you handle the render farm? Did you add GPUs to the existing machines, or how do you handle that?

>> So we have a couple of machines. With our system FNessie, which is handling our render farm, we have a so-called ticket system. Machines with a GPU have this kind of ticket, and depending on how much GPU power we need, we can have multiple tickets. That's how we can configure it, and once we see that it's an XPU rendering, this kind of tag is added to the job automatically. We started with a very CPU-heavy farm, but we are currently continuing to add more GPUs to the farm.

>> So you're integrating into existing hardware, not necessarily adding new machines purely for this; you don't have purely GPU-based machines?

>> No.

>> Not yet, at least. And also, all the workstations render during the night as well, and they mostly have powerful GPUs.

>> Okay. Have you checked the balancing, in terms of how much less CPU resource you now need since offloading to the GPU? Any statistics?

>> Not yet.
I think for future projects we will probably do something like this, but for this project the amount of GPU rendering was not big enough yet.

>> Okay. So you didn't notice more free resources on the CPU side all of a sudden.

>> No, you could say that.

>> Okay. What about the denoiser?

>> All our renders have a denoise pass afterwards. I think we are now mainly using the OptiX one, or the Intel one.

>> Yeah. I mean, there is an OptiX option; we have both integrated, and also the Intel denoiser. And the artist can also trigger the denoise in Rise Base, our database system. If they see that a render is too noisy, they can still trigger it afterwards.

>> As a post-process?

>> Correct, yeah, also when the rendering has already been done for days, for example.

>> Then the last question: when you allow someone upstream to go through the whole pipeline and render a test version, so that problems don't show up too late, doesn't that create a lot of overhead in terms of more renders? In the past, sure, you would potentially have had to render more often because you made more errors. But if there are now more renders on the farm because of these checks, doesn't that create more overhead?

>> It definitely creates more overhead, but in the end, machine power is always cheaper than people power. We still try to make sure that all the validation checks and QCs are very fast. For example, for our layout QCs it's fine that we just do a pending preview approval, so waiting for approval and rendering only every 10th frame, for example, or every 100th frame, to see if the camera framing is right and if the layout is working. And it's often the case that, with so much stuff being presented in the reviews, the supervisors might not see something from the other side that is maybe blocking something at a later point. So it's always important, or at least we figured out, that a really good QC workflow is elemental for running those big shows, especially with sometimes smaller and sometimes really big teams.

>> I can imagine that in the beginning, when you set these things up, or when a user misuses it, you get renders that are too long, but then you just jump in and optimize that. Is that the way I can imagine it?

>> Yeah, we had those problems as well.

>> Yeah, and plus, doing those QCs we already know about memory consumption. So if a layout is super heavy, we already know it before it goes to lighting; it's really bad if it has already gone through FX and all the other stages. So we need to find all the issues as early in the pipeline as possible, before it escalates.

>> So you see problems earlier. Yeah, definitely, that makes sense. Thanks very much. I love that stat that you showed, the CPU versus the XPU. I sent that to Mark Elendt, our head of rendering.

>> That's awesome.

>> Any other questions? Going once. Going twice. Thanks very much, guys. Great presentation.

>> Thank you. Thank you.
