Hey everyone, thank you all for coming out. My name is Sebastian Canol and I'm a lead procedural assets and effects technical director at Netflix Animation Studios in Vancouver, Canada. My team takes care of the domains that use Houdini at our studio, that's assets and simulation. Today I'd like to focus on one portion of that responsibility and talk about the assets pipeline in our studio.

A quick rundown of the contents. It's deceivingly small, but I'll go into quite some detail. First, we'll start with some general information. I'll give an overview of what assets really means in our pipeline and go into a feature that we worked on quite heavily, called asset containers. Lastly, we'll wrap up with a quick-fire round of some of our fragment workflows for some nice imagery.

Let's start with some general information. We're no strangers to Houdini, and it has been used in our company for a long, long time: over 15 years as far as I can see. Here's a quick timeline of the Houdini adoption, particularly for our assets department. Peter Rabbit 2 was our first full USD feature film, a hybrid animation and live-action movie. At that point, character effects, effects, and crowds were fully in Houdini already, but we did not yet do assets. Our next film, DC League of Super-Pets, was the first fully animated feature film on this new USD pipeline. We started using Houdini and Solaris for our surfacing team for look development purposes. Migrating some of our existing workflows from other DCCs brought with it some user experience challenges with the new paradigm. The Magician's Elephant came with a revamp of our groom system Alfro and reintroduced our vegetation and scattering systems, Spruce and Spawn. Our most recent release, Leo, there on the right, made more use of our render-time procedurals than ever before; all of them are also set up in Houdini. It also made a big push for and focus on the user experience side of things, now that most of our legacy systems were migrated. We're currently working on multiple really exciting projects that are sadly yet to be announced, but one of the developments they're benefiting from is our revamped asset container system, which I'll be shedding some light on.

Movies aren't the only thing we have released in recent years, by the way. We also released our very own USD production scene to the public. It's called ALab, and it's fully open source. I'll be using some of these assets to help illustrate my points today. You can access it via the Digital Production Example Library, or DPEL for short, or by following the QR code. Here are just some quick examples of what we released in ALab. It has over 300 assets complete with high-quality textures, and even two characters with looping animations in a shot context. You can just go download them and have a play. I think they're a great resource, especially if you're interested in USD. Have a look.

Now, if you watched the Houdini HIVE in recent years, you might be having a moment of deja vu, and you would be partially right. Some of my colleagues presented a glimpse into USD, ALab, and asset building back when we were known as Animal Logic. Today's presentation gives an update on this and goes into more detail on the pipeline side of things. Enough with the general information. Let's look at an overview of what assets really means at Netflix Animation Studios.
At our studio, the asset departments create things like characters, props, and environments, and we split that into the modeling, surfacing, and rigging departments. Today, we'll focus on the modeling and surfacing departments in particular; those are the two that use Houdini. Modeling takes care of your traditional poly-modeling and UVing tasks, but also things like creating environments, either by hand or with procedural approaches like cobblestone generators or building generators. Additionally, they create our trees and bushes with our vegetation authoring system. Surfacing is responsible for anything that has to do with the surface appearance of our meshes. This can be materials and look development, or grooming fur and feathers. They also take care of grass and scattering surface details like moss.

Okay, let's start things off with a brief overview of the kind of asset structure we use in our pipeline. It's technical, but it will help in understanding some of the upcoming topics. We use a term called entity to describe high-level concepts such as characters, environments, and shots, as well as all the parts that make up those elements. As an example, let's look at the Stowed character from ALab. The character itself is made up of multiple parts. Here is one of those entities: the body of the character. Then you might have the outfit; there's a sweater in there and some bandages wrapped around the tail. This would be another entity. And then lastly, you might have some accessories, like a backpack, for example. This is yet another entity.

These entities consist of data bundles of the various contributions that make up the asset. We call these data bundles fragments. The body, for example, will consist of a mesh, some fur and fuzz groomed in our grooming system Alfro, and a look contribution. You can see the pink on the hands and the ears and the red tongue; that's material assignments. The outfit will similarly be made out of geometry and material assignments, but rather than having an Alfro contribution like the body, it is made out of fabric curves from our cloth generation system, Weave. The backpack is an example of something a bit more basic that doesn't need anything fancy; it just has some geometry and a look. All of these are constituent parts of the Stowed character itself. We can connect entities to entities via a special fragment we call assembly. And finally, we get the end result: the ALab Stowed character in all its glory.
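To make the entity/fragment idea concrete, here's a minimal sketch using the USD Python API. The layer names, prim paths, and fragment set are illustrative stand-ins, not our actual pipeline layout; the point is only that each fragment is its own layer and entities compose them.

```python
from pxr import Usd, Sdf

# Each fragment contribution (geo, look, groom, ...) lives in its own layer.
fragments = {name: Sdf.Layer.CreateAnonymous(name)
             for name in ("geo", "look", "groom")}

# An entity composes its fragment layers, so every department contribution
# arrives as a separate, independently versionable layer.
entity_layer = Sdf.Layer.CreateAnonymous("stowed_body")
entity_layer.subLayerPaths = [layer.identifier for layer in fragments.values()]

entity = Usd.Stage.Open(entity_layer)
entity.DefinePrim("/stowed_body", "Xform")

# An "assembly" fragment then connects entities into a parent entity,
# e.g. body + outfit + backpack assembling into the full character.
character = Usd.Stage.CreateInMemory()
body = character.DefinePrim("/stowed/body")
body.GetReferences().AddReference(entity_layer.identifier, "/stowed_body")
```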
Let's look at how we build and load our assets. I personally like to explain this and think about this as answering two questions: the where and the what. Let's start with where we want to make changes. Here we have the baseline, an entity of an oscilloscope in isolation. This is perfectly adequate for a lot of use cases and how a lot of work gets done in practice. However, we try our best to offer ways to review, and also do the actual work, in whichever context makes the most sense. We think there's little point in putting a lot of effort into something that might be seen very far away or not in high detail. So we can work on the set piece itself. Or do we want to work on it in an environment, like this workbench? After all, it contains the oscilloscope right there. Or maybe we want to work on it directly in a shot; after all, it's in there, too. All of these occurrences of the oscilloscope ultimately are the same one. They're just visualized in different contexts. This goes both ways, by the way. Any changes to the contributions of the oscilloscope that we make in those other contexts will ultimately be made to the underlying fragment of the oscilloscope entity. We call this concept working in situ, for working on something in place.

To build something in a context, we use a UI we call Production Explorer. It offers a list of entities; those define a context. Here we're seeing a list of set pieces that contain the word electronics in their name. That's what I filtered by. But environments or shots can be loaded in exactly the same manner. All right. So we build the entities we want to use as our context via Production Explorer. And here it is. In this case, I brought in the oscilloscope entity itself with all the contributions that make it up, like the mesh and the material, or what we would call the geo and the look fragments. This is a render right as it's built.

Now, how would I go about changing one aspect of one of the fragments? Let's say I want to change one of the materials for the case to be orange. We would build the fragment with another one of our custom UIs, Version Explorer. You can actually see all the domain or department contributions right there. There's modeling and surfacing, but there's also rigging as well as lighting; for example, the LCD screen actually has a light. All right. So, we have now determined where we want to make our changes: in the context of the entity itself. And we've also figured out what we want to change: the shading, or the look fragment. Look isn't the only fragment that users interact with in Houdini in this way. Here's a list of the other ones. It covers all manner of things, from fabric generation like I mentioned, to hair and feather grooming, scattering, vegetation creation, and more. Most of the time, loading an asset will be done via these two applications, Production and Version Explorer. You load a context via Production Explorer and you specify the fragment you want to work on with Version Explorer. Good. These custom applications are built in our Nucleus design system and they're used all over our pipeline. In this case, you see them just added as additional panels in the Houdini UI. There was a great presentation at DigiPro 2024 by our pipeline team that goes into detail on the design system itself. It's quite interesting.

All right, here's the build process as a little graph. We've covered that we select our context through something like Production Explorer. This creates a load asset node. This node loads the relevant USD files from disk and configures them accordingly. We've also covered that we select what we want to work on, a fragment, via a UI like Version Explorer. But we've also extended Houdini's right-click context options, in the scene graph tree for example, to do common operations more interactively or directly from the viewport. You can select the prim you're interested in, right-click, and choose which fragment to build. No matter which route is chosen, the same node is ultimately built: an asset container. In this case, we're building the look fragment and the asset container represents this. This container node will have the node graph that was used for the last check-in. Alternatively, when you build the container, you can think of it as unpacking a fragment into the nodes that created it. Lastly, once the asset container is ready to go, this is where artists take over and where the magic ultimately happens.
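As a rough sketch of what that build action boils down to in `/stage`: a loader node for the context, and a container node configured for the chosen fragment. The node type names below are hypothetical stand-ins (plain subnets), since the real load-asset and asset container nodes are proprietary; only the wiring pattern is the point.

```python
import hou

def build_fragment(entity_name, fragment_name):
    """Hypothetical sketch of a 'build fragment' action in Solaris."""
    stage = hou.node("/stage")

    # Load the context (set piece, environment, or shot) from disk.
    # Stand-in for the real load-asset node type.
    loader = stage.createNode("subnet", "load_" + entity_name)

    # The asset container is one node type for every fragment; only its
    # configuration says which fragment it unpacks and checks in.
    container = stage.createNode("subnet", fragment_name + "_container")
    container.setInput(0, loader)
    container.moveToGoodPosition()
    return container

container = build_fragment("oscilloscope", "look")
```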
Just to put our money where our mouth is, here are three videos of me doing the exact same operation of building the look fragment for the oscilloscope in the three different contexts we looked at earlier: on the set piece, in an environment, and at the shot level. I'm using the right-click option in the viewport here.

Now, okay, building an asset hopefully makes sense, but what are those asset container nodes? They're where we actually modify those fragments. Let's say we have an asset container. When we double-click it, it drops us into a Solaris user work area. In this case, you see the usual default contents of the look container: the Material Library and the Assign Material node. The look isn't the only fragment that we work on directly in LOPs, for example Weave, our fabric system, but most of our other fragments are largely SOP-based by this point. You can see that these SOP-based ones look very similar at this level, and that's by design. Again, these areas are where we want our artists to spend most of their time. When you double-click your container, that's where you get to be an artist.

Let's fast forward and look at our example from earlier. We've built our look fragment on the oscilloscope, made some changes to make it orange, and job's done. We're good to go. I'll jump back up to our asset container to get to the parameters. We add a check-in comment to note what has changed in this version, for ourselves in the future and for production to keep track of things. And we can hit the check-in button at the top. When I do so, I'll be met by Cerberus, aptly named after the multi-headed dog that guards the gates of the underworld. In our case, Cerberus validates our nodes and the USD stage, and keeps hell from breaking loose when unexpected data is checked in. In this case, it doesn't look like there's anything to worry about, and I can hit check-in at the bottom.

Occasionally, however, our validation system does find some things to complain about. I've sabotaged our little example to illustrate this. The yellow entries are warnings that we want our users to be aware of. They're suggestions, but they won't stop someone from checking in. And the red entries are failures. They mark issues in the node graph or the USD output that we want addressed. Selecting these errors provides a description of what the system thinks is wrong. In the case of this warning, I've removed an output LOP from the asset container, and we usually encourage using one. We try to be as clear as possible with these validators. For example, on this failure, it's letting me know that I had unlocked a specific node in my graph, and I get the node path as well. This can get quite in-depth. In this case here, we're analyzing the USD stage for some problematic prims. It tells me which USD layer causes the issue and which prim is the offending one. Still, sometimes things just need to be pushed through the pipeline. We have that option available via a little dropdown that allows us to check in anyway. We highly discourage this during daily use, but in some cases it definitely can help. Maybe there's a director review looming and I really can't wait for a TD to help me fix my scene right now, so I just need to push it out.
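To illustrate the shape of such a validator, here's a minimal sketch in the spirit of Cerberus: rule functions inspect the stage and yield warnings (advisory) or failures (blocking), and an explicit flag models the "check in anyway" dropdown. The two rules shown are made up for illustration, not our actual validator set.

```python
from pxr import Usd, UsdGeom

def missing_default_prim(stage):
    # Failure: a fragment layer without a default prim is hard to reference.
    if not stage.GetDefaultPrim():
        yield ("failure", "Stage has no default prim",
               stage.GetRootLayer().identifier)

def empty_meshes(stage):
    # Warning: meshes with no points are suspicious but not fatal.
    for prim in stage.Traverse():
        if prim.IsA(UsdGeom.Mesh):
            points = UsdGeom.Mesh(prim).GetPointsAttr().Get()
            if not points:
                yield ("warning", "Mesh has no points", str(prim.GetPath()))

RULES = [missing_default_prim, empty_meshes]

def validate(stage, check_in_anyway=False):
    report = [issue for rule in RULES for issue in rule(stage)]
    for severity, message, location in report:
        print(f"[{severity}] {message} ({location})")
    failures = [r for r in report if r[0] == "failure"]
    return check_in_anyway or not failures  # True: check-in may proceed
```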
All right, to summarize, here's an overall usage example in motion, just so you see all these steps combined. I open Production Explorer in a new scene, search for the oscilloscope, and double-click it. This loads the entity. Next, I switch to Version Explorer and select the fragment I'd like to work on, in this case the look fragment. The system builds the asset container node for me and lets me know when I'm good to go. I'll expand more on the nuances of the fragments later, but for the purpose of this, I'll pretend that I did an amazing change by pasting this temporary node here. Provided I'm happy, I can jump back out of my work area and select my asset container. I now add my comment of what I changed, and I can hit check-in. Cerberus does its thing and validates my scene, and a window pops up with a report. It looks like a lot of green; I'm good to proceed and click check-in at the bottom. If something were wrong, that check-in button would be grayed out, and I could use that little dropdown I mentioned to check in anyway. Lovely.

So, what I just showed off is very similar to how things have worked for quite a while, ever since we adopted Houdini in surfacing actually. However, earlier this year, we switched the underlying fragment container system after substantial refactoring efforts, for reasons I'd like to get into, and show off some of the benefits. If you're keen on investigating a system like this, they might help you fast-track it. Let's get going.

For now, we've been looking at this node called an asset container, or rather, in our examples, it would have been an asset container set up to work on a look fragment. Prior to asset containers, the system we used was called container templates. It's just a different name for the same idea of containerizing our fragment work. These are essentially equivalent as far as an artist is concerned. In the old system, building a look fragment would create that container template on the left, while in the new system it creates the asset container on the right. This similarity helped a lot during the adoption of the new system. This is again not exclusive to the look fragment in any way. Every one of our fragments that we work on in Houdini can be worked on following the same idea. The main difference is that all of the fragments in the asset container system use the same node type. Instead of having to maintain multiple different node types in the old system, each with their slight micro-differences, we only have to maintain a single one with the new system.

Let's look at some internals of the asset container. If we break it open, we can jump inside, and it's a really simple premise. We want to set up the fragment to be worked on, allow artists to do their work, and then deal with the export and check-in of the files. We'll dig deeper. When we look at what happens inside this fragment setup step, we see the incoming stage being split into two streams. The left one goes to the fragment export and the right one goes to the user work area. I've highlighted nodes of particular interest. The blue node here mutes our incoming contribution to avoid clutter and confusion. Imagine you're working on a groom that already exists and you want to make some changes. If the incoming groom weren't removed in some manner, you'd be seeing the incoming groom and the new groom you're working on at the same time. So, we mute the incoming one. The red Layer Break node down there does some heavy lifting, too. It ensures that any modifications in the user work area are the only things we need to concern ourselves with. It makes it really clear what data should be exported.
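A sketch of that layer-break idea in plain USD, under the assumption that it boils down to routing artist edits into a dedicated layer: the incoming stage stands in for everything upstream of the container, and the exported fragment is just the artist's layer. Prim and attribute names are illustrative.

```python
from pxr import Usd, Sdf

# The incoming stage: everything upstream of the container.
incoming = Sdf.Layer.CreateAnonymous("incoming")
stage = Usd.Stage.Open(incoming)
stage.DefinePrim("/stowed_body/geo", "Mesh")

# Route all artist edits into a dedicated work layer "above the break".
work = Sdf.Layer.CreateAnonymous("artist_work")
stage.GetSessionLayer().subLayerPaths.append(work.identifier)
stage.SetEditTarget(Usd.EditTarget(work))

# This over lands in the work layer only; the incoming layer is untouched.
over = stage.OverridePrim("/stowed_body/geo")
over.CreateAttribute("userProperties:tint", Sdf.ValueTypeNames.Color3f)

# Export: the fragment on disk is just the artist's edits, nothing upstream.
print(work.ExportToString())
```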
We'll skip the artist work area for a moment and talk about what happens in the fragment export step. If you ignore those nodes in the middle for a moment, you'll see how simple this is. Those nodes are some pipeline-specific things we're ensuring, and I'll gray them out a bit. This portion here is what we export and check in to disk as our fragment data. Thanks to the Layer Break node in the fragment setup I pointed out, this will only contain what has been created by the artist in the work area. Super clean. Because it's just the content from the work area, we can also just layer it back onto the incoming stage, as we see here. This will be the output of the asset container and what an artist would interact with afterwards. It serves as a preview of what the entity will look like once the container is checked in. By the way, when I say the output is clean, I really mean it. This has been a really great way to debug. Lastly, we have that ROP network that lays out the export and check-in processes that run, and their order. I'm not going to go into too much detail, but essentially it covers some pre-processes that run, exporting the files to disk, and checking them in.

The last part of the internals is the work area, which as mentioned earlier is the dive target of the asset container. It's where you end up when you double-click it. I mentioned we have some LOP-based toolsets like look or Weave. These are pretty self-explanatory because they're managed by development teams and they're highly curated systems that output the correct data. We have some other toolsets that are much more open to users, however. You can see them here; they're the SOP-based ones. What you see highlighted are toolset-specific LOP nodes that pull in data from a SOP network exactly how the pipeline expects. Ideally, we want our users to work in the context that makes sense for the toolset, so in this case in the SOP networks, without needing to jump around too much. The question now becomes: how can we ensure that artists adhere to an expected output? SOPs is very freeing and much more accessible than Solaris, so manipulating attributes, custom work from artists like HDAs, or other experiments might end up deviating from what we want and what we expect.

Let's look at this with an example, investigating this measuring-tape-looking friend right here. It's one of the other characters from ALab, Remy. We'll look at those little fuzzy pieces on either side of the head. When we create hair and fur, we work in our proprietary grooming toolset, Alfro. You can see the two highlighted nodes there, too: we have an Alfro LOP in Solaris and an Alfro Finalize SOP in the geometry context. We'll be looking at the head entity for this character for simplicity's sake. In Solaris, the Alfro LOP tells me that no grooms are being imported. Indeed, there are none in the viewport, in this rendered image, or in the scene graph tree. When we jump to the SOP work area, the data is certainly there. It's just not hooked up to the Alfro Finalize. Connecting it immediately makes it appear, and it shows up in the render. Let's connect the other side as well. When we connect it to its own Finalize SOP, you can see it works the exact same way. It shows up as soon as it's connected and is available to render. Let's have some fun and give him a kooky hairdo as well. I made a quick groom, and when I connect it, you'll notice it didn't immediately show up. The reason for that is that the node is bypassed.
A valid Finalize node needs to be connected and not bypassed. Remy looks a bit more wacky now. Let's dig a bit deeper. What's going on here? We'll look at one of the Finalize SOPs. The Alfro Finalize SOP goes right at the end of our node graph; you can see it right there. It contains a lot of parameters, and the internals of the node also reflect that. It deals with a lot of attributes being set up, like bindings, and sets up our render-time procedurals. Luckily, we don't have to care about that network at the top and can just zoom in on the end. There we have an endpoint where all the SOP operations leave us with cleaned-up curves carrying the exact data that we want. The attributes are good. There's also a LOP network. This is where we set up the USD data that we want to export. We grab the incoming stage and add our curves to it. There are some more things happening on the right side there; that's for clump curves. This is a simple groom, so we don't have to worry about it. Thanks to the Layer Break LOP, the data we have here is exactly what we expect. You can see overs all the way down until we actually define the new work. We are only defining the things that we care about. Nice.

So, we define USD inside each Finalize node. Now, let's take a look at how these things are pulled back together into Solaris up top. They're hoisted by the Alfro LOP here, and if we break it open, we can see it's a very simple network. Yet again, we have an object context; that's the SOP network we're modifying things in as a user. Then there's a fetch that simply points to each of the Finalize LOP networks and finds the USD data we authored, to pull it up. And you can see we find what we loop over with this prim hierarchy. It just mimics the node path of the Finalize node: you can see that's the Alfro Finalize L and that's the Alfro Finalize R. You can see, if I disconnect and reconnect it, that this updates really nicely and interactively, triggering this for-each loop that's just set to iterate over those prims. The loop recooks as the iteration changes.

Quick side note: in the Alfro LOP, you'll notice that the for-each loop looks a bit odd. It doesn't have a Begin Context Options block. Usually we would want this, but because we're fetching from outside the loop, it would leak the context options with it. For this reason, we clear the context options with a block inside the Alfro Finalize, precisely before we fetch. If I indicate what the block wrapping would actually look like, it would look kind of like this. That's not unique to our system; that's just how loops with fetches work, and I thought I'd bring it up.

Let's look at one last benefit of the Finalize nodes, just to drive the point home. We're looking at a template bird that we use for testing our feather system, Quill. It tells me there are three primaries, and they're in our scene graph tree, too. It's a big graph from our artist, but we only need to focus on the last nodes. I want to show the node info panel for this node, because these Finalize nodes give a very nice opportunity to validate our scene. You can see, if I remove the input, I'm letting the user know exactly what's wrong and giving a hint of what they can do about it. If I instead provide an unexpected input, like a simple box for example, we again get a clear message of what's wrong and more info for the user.
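Here's a hedged sketch of what that kind of cook-time input validation can look like, written as if it ran inside a Python SOP or an HDA's cook code; the real Finalize internals are proprietary, and the specific checks are made up. Raising `hou.NodeError` blocks the cook and surfaces the message in the node info panel.

```python
import hou

node = hou.pwd()
geo = node.geometry()

# No input at all: tell the user exactly what's wrong and how to fix it.
if node.inputs() and node.inputs()[0] is not None:
    incoming = node.inputs()[0].geometry()
else:
    raise hou.NodeError(
        "No input connected. Wire the groom you want to finalize "
        "into the first input.")

# Unexpected input (a box instead of curves): an explicit message, too.
if not any(prim.type() == hou.primType.Polygon and not prim.isClosed()
           for prim in incoming.prims()):
    raise hou.NodeError(
        "Input contains no open polygon curves. This node expects "
        "groom curves, not solid geometry.")

geo.merge(incoming)
```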
Okay, so connect it back up, and maybe I want to make a duplicate of this. Maybe I just want to export another thing. If I'm not careful, it will complain again. I'm being told that there are multiple Finalize nodes that would end up with the same prim path. You can see we highlight it here: these two nodes would have the same prim path, and because we're just moving it into USD, we would just override one prim with the other. So we allow for this identifier to be added, and everything's fine. Solaris is happy. We looked at examples for Alfro as well as for Quill, but again, this applies to all of our SOP-based fragment tooling. They follow the same approach: each toolset LOP has a corresponding Finalize SOP, and they communicate with each other.

To summarize our Finalize nodes: we were able to create a system that behaves similarly to output nodes and gives artists a clear way to mark their outputs. We're able to use them to clean attributes and set up important parameters for our render-time procedurals. And they're a fantastic place to run early validation of the final output, directly where a user works. We also managed to get live updates working just by working in SOPs, which is great. And finally, for the asset container ecosystem as a whole, what do we think of that? Going from multiple container templates to a single asset container node type made it much more robust and much easier to maintain; we have far fewer tickets coming in. The separation of concerns between what the container is responsible for and what is left to the toolset nodes brings a lot of clarity, both for developers working on these systems and introducing new ones, and for artists: they have the exact same interfaces in all our toolsets and can jump between them quite easily. Lastly, we can now build generic processes that work the same across all of our fragments. For example, the way we collect external references is now consistent; previously, it was a bit bespoke. We also have a new system that we're super excited about, where we can add nodes inside our user work areas that themselves define processes we can dynamically append to our check-in process. It's really quite powerful, and we're exploring a lot of ways to use it.

Okay, that was a lot of tech and a lot of talk about asset containers, but I never actually showcased what the fragment workflows look like. Let's have a quick-fire round of a few of them. I'll go over them step by step, starting with the look fragment. It's on basically everything, as it defines the material properties. As mentioned earlier, we've presented this one before, so I'll just give a brief overview of it. If we drop down a look asset container, it contains the default LOP nodes you'd expect for material assignments, with a Material Library and an Assign Material LOP. Note, however, that this is a custom Assign Material LOP. It's prefixed with ash, as ash is our material definition and rendering technology for our proprietary path tracer, Glimpse. The node has a pretty simple Houdini UI, but it gets unwieldy very fast, as big multiparm lists tend to do. That's why we made a custom Python panel that can be used instead. It has a tree view that plays much nicer within our system. You can see we add a material to the root geo prim right there. This material can be made right in the scene, or, more often than not, it comes from a global library, like in this case. In order to modify a subset of those parameters, we can add a material override.
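The plain-USD part of that assignment, binding a material to the root geo prim so child prims inherit it, looks roughly like the sketch below. The layered overrides and their ordering are part of our proprietary ash/Glimpse system, so they're only hinted at here with a hypothetical attribute name.

```python
from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.CreateInMemory()
geo = stage.DefinePrim("/oscilloscope/geo", "Xform")

# Bind a base material on the root geo prim; descendants inherit it.
metal = UsdShade.Material.Define(stage, "/oscilloscope/mtl/stainless_metal")
UsdShade.MaterialBindingAPI.Apply(geo).Bind(metal)

# Hypothetical: an ordering index so a dirt layer can sit on top of the
# base materials (the real layering lives in our ash system).
dirt = UsdShade.Material.Define(stage, "/oscilloscope/mtl/dirt")
dirt.GetPrim().CreateAttribute(
    "ash:layerOrder", Sdf.ValueTypeNames.Int).Set(2)
```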
Those grayed-out rows on the child prims indicate that they're inheriting a material and an override from the parent. Lastly, those numbers specify the order in which these materials should be layered; a higher number means a higher layer. You can see that there will be a dirt material applied on top of all the other materials, like the metallic paint or the stainless metal. This way, we can end up with a complex look that is made up of different layered materials. Here's an example of the layering being built up. We work in layers as it allows us to create highly specialized materials with arbitrarily ordered light responses, instead of relying on predetermined sets of layers like in uber-shaders and some other systems. If you want to learn more about this technology in particular, our rendering team has written a very detailed paper on the topic, and it uses that same UI to illustrate it. Overall, for the look fragment, the groundwork from our rendering team is incredible; the shaders and the translations are very robust. The layering system allows us to create highly specialized materials rather than being limited by something like uber-shaders. And we feel the custom UI fills a gap when it comes to complex material assignments like this and showing inheritance, but it definitely comes at a maintenance cost.

For our next fragment, let's look at Weave. You can see Ms. Malkin here from our show Leo, wearing her oh-so-comfy-looking sweater. That sweater, like all our clothing, is made using Weave. And yes, these are rendered as curves. In practice, we split our fabric generation into more or less three big steps. In the weave node, we define the base fabric properties like the pattern, thread count, and width. Then we can add stitches by defining them as simple curves that get converted to more complex patterns at render time; this is also where we would create embroidery, for example. Finally, we add a fuzz layer to tie everything together, really push the realism, and make it look so comfy. Weave is a full-on render-time procedural and fully in-house, so no geometry exists at all until we hit render. The nodes in Houdini are really just used as a common interface to set USD attributes. Here's an example of me changing the pattern type, with an expanded preview in the viewport and a very simplified render on the left, just to illustrate. Now, as you can imagine, because we're setting parameters for an external system, there are a lot of parameters that can be tweaked to really dial in every last detail of our final fabric. These parameters have been accumulated since far before we adopted Houdini, through multiple years of production.

As mentioned, Weave is one of our render-time procedurals, and as such no actual curves exist. Our R&D team provides us with ways to expand our procedurals directly in Houdini without the need to render; you can see that node right there. Artists use this to preview their work without needing to render. You can compare the resulting curves with the render to the right. The colors are used simply to visualize the various aspects of the fabric. As far as the stitches go, they are driven by normal curves that can be created in SOPs. The curves are used to create the stitch patterns and indent the fabric around them to integrate them properly. Our artists often like to rely on drawing textures rather than authoring curves directly, and we have a node that helps them convert those textures into curves. Especially for more complex patterns, like lace, it's much easier to draw them in a texturing package and convert them to curves than to tediously paint each curve by hand.
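To make "the nodes are just an interface to set USD attributes" concrete, here's a minimal sketch: no curves are authored, only parameters for a render-time procedural to expand later. The attribute names and values are made up for illustration, not the real Weave schema.

```python
from pxr import Usd, Sdf

stage = Usd.Stage.CreateInMemory()
fabric = stage.DefinePrim("/sweater/fabric")

# Author only the fabric description; the renderer grows the curves.
for name, type_name, value in [
    ("weave:pattern",     Sdf.ValueTypeNames.Token, "knit"),
    ("weave:threadCount", Sdf.ValueTypeNames.Int,   220),
    ("weave:threadWidth", Sdf.ValueTypeNames.Float, 0.04),
    ("weave:fuzzDensity", Sdf.ValueTypeNames.Float, 0.6),
]:
    fabric.CreateAttribute(name, type_name).Set(value)

print(stage.GetRootLayer().ExportToString())
```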
So, we've been looking at examples of Weave throughout this presentation already, by the way: there's Remy's little flag and the Stowed sweater. When we zoom in, we can see a great example of a very simple weave setup on the flag and a much more complex one on the Stowed sweater. That one is knitted; it's another technique besides the usual weaving that the system supports. And yet again, there are two papers on these topics if you'd like to learn more. Weave is used in a lot of our work, from furniture and carpets to hero characters, even crowds. It contributes a significant portion to the final feel of our frames. I'd like to shout out that bottom-left example in particular. It's not from any movie we've released yet and is a relatively recent addition to the toolset. Those fabrics you see are tests made by our surfacing department with a system we call Jacquard. It's based on the real-life weaving technique of the same name. Thanks to how grounded Weave is in reality and how it works, our R&D team managed to seamlessly integrate something like this into our system. I'll just briefly zoom in to show off those gorgeous details. We're super excited for these to show up in our upcoming movies. And I'll reiterate: all of the things you see are curves being rendered. Once again, we have a paper for this, and my colleague Mike Davidson actually presented it at SIGGRAPH just a few weeks ago, so this is hot off the press.

Okay. For Weave, we think it's a fantastically successful tool. We're able to create our fabrics faster than by painting textures; instead, we bake to textures for use in viewports and background elements. It is hyperrealistic by nature, and this allows us to implement reality-grounded tools like Jacquard without much hassle. But it takes some additional consideration when working on highly stylized projects, with stretched limbs and things like that. It's highly curated and more on rails than some of our other toolsets. That requires expertise with the toolset during development, adds maintenance, and takes a bit of time to learn, though I've been told onboarding is pretty quick. One thing we really suffer from is managing those massive UIs. Collaboration on HDAs is famously difficult when multiple developers need to touch the same HDA at once and resolve merge conflicts.

Okay, enough of Weave. Let's carry on with grooming in our system, Alfro. Alfro's biggest standout difference from what Houdini provides is that it has a lot of abstractions. You can see two basically equivalent grooms, one with the Houdini system and the other with Alfro. During the migration to Houdini, our artists had trouble with all of those node connections. Back then, Houdini's grooming was also less established, with a lot of jumping around in that object-level workflow, if anyone remembers that. Additionally, Alfro's cooking logic is pretty unique. Let's say we want to go from the clumping step right there to the guides that drive it. In Houdini, we simply follow the graph, and it explicitly tells us where it's getting the data from. In Alfro, we don't necessarily follow the graph, and don't step from node to node if not required. Based on configuration inside our nodes, Alfro knows where it should get the data from. Usually, that's the last node that modified that data, and in this case, that's the Alfro guides node at the top there. Another quick example: let's also grab the skin, marked in red here. In Houdini, we would traverse the graph and find it. But in Alfro, we don't. Alfro jumps straight to the node that last modified the skin, which is the one up top.
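A toy sketch of that lookup idea, purely illustrative of the concept rather than Alfro's actual implementation: each node declares which named data streams it writes, and a request for "guides" or "skin" resolves to the last node that wrote that stream, with no wire-walking.

```python
class GroomNode:
    def __init__(self, name, writes):
        self.name = name
        self.writes = set(writes)  # data streams this node modifies

graph = [  # in cook order, top to bottom
    GroomNode("skin_import",  writes={"skin"}),
    GroomNode("guides",       writes={"guides"}),
    GroomNode("guide_smooth", writes={"guides"}),
    GroomNode("clumping",     writes={"hair"}),
]

def resolve(stream, upto):
    """Find the last node before `upto` that wrote `stream`."""
    producers = [n for n in graph[:graph.index(upto)] if stream in n.writes]
    return producers[-1] if producers else None

clump = graph[-1]
print(resolve("guides", clump).name)  # guide_smooth, last writer wins
print(resolve("skin", clump).name)    # skin_import, straight to the source
```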
Here are some examples of Alfro. It covers all manner of things, from straight or curly hair and furry creatures all the way to highly stylized hairstyles, or even something as mundane as a tennis ball. We have a concept of groom rigs that are maintained by technical groom artists, and they aim to simplify common needs of a show. Here are some examples, and we'll take a look at this eyelash groom rig. It provides a single-node solution for how eyelashes should look on a given show. It also provides a way to update this one node and get the same change in all characters that use it. Pretty easy to adjust.

Now, grooming is super interactive, every bit of performance counts, and we've tried multiple things to enhance that. One of them is the solo view feature that is used in some of our groom rigs. Here's an example of what it allows. Usually, if you change a parameter at the very top of a graph, the entire network recooks and you see the output. Now, if I enable solo mode, it behaves quite differently. If I change the skin subdivisions, I see the skin. If I change the guide smooth, I see the output of the guide node inside. The scattering density shows me the point cloud. So this is a way of displaying just the node you're changing, and you can see how quick the changes are. We only pay for the full cook at the end, when we need the output of the node; while a user interacts with it, we can show them exactly what they're changing. If you want to learn more about the ways we try to improve groom interactivity: you guessed right, we also have a paper on this.

All right. For Alfro, we believe that the level of abstraction we have is useful and helped in particular at the start of the adoption process, but that abstraction definitely isn't free. It comes with a lot of development to maintain the system and improve things further. We think groom rigs are a great concept to have even outside of our system, and technical artists can easily define defaults and change them down the line. But that's the catch for us: they're maintained by production and by those technical artists, so you do need them available. By the way, we've experimented with using interactive Houdini clones, and that showed some real promise. The idea is to groom at a lower resolution locally for interactivity and get a preview of the final groom from an external machine running the actual heavy calculations.

Next up is Quill. I don't have many fancy images; it's one of our more recent fragments that we migrated to Houdini, but it's our feather system. Quill's premise is that it relies heavily on the concept of a feather primitive that defines a set of parameters. Those parameters can then be blended from one feather to another. Here you can see that we can take either groomed curves or modeled meshes as input data, convert them to a common rectangular lattice mesh representation, and finally assemble our render feathers onto those lattice meshes and blend their parameters. Here's an example of Quill's feather blending, with much better-looking feathers from an artist. You can see that all the feathers are unique in both examples. In actuality, we are just defining three feathers and blending their parameter spaces in between. What we end up with are completely unique feathers all over our birds, but they're based on the same characteristics that an artist defined. You can see the number of splits changes, the noise changes; every single feather is unique.
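A toy sketch of that blending idea: a handful of authored feather primitives each carry a parameter set, and every scattered feather gets a unique set by weighting between them. The parameter names and values are illustrative only, not Quill's actual schema.

```python
def blend_feathers(prototypes, weights):
    """Weighted blend of per-feather parameter dicts (weights sum to 1)."""
    keys = prototypes[0].keys()
    return {k: sum(w * p[k] for p, w in zip(prototypes, weights))
            for k in keys}

# Three authored feather primitives with different characteristics.
down    = {"length": 2.0, "barbDensity": 40.0, "splits": 0.0, "noise": 0.8}
contour = {"length": 5.0, "barbDensity": 25.0, "splits": 2.0, "noise": 0.3}
flight  = {"length": 9.0, "barbDensity": 15.0, "splits": 5.0, "noise": 0.1}

# A feather sitting between the contour and flight regions of the body
# gets its own in-between parameter set.
print(blend_feathers([down, contour, flight], [0.0, 0.35, 0.65]))
```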
Here's another example of blending over an area: just three simple feathers with different characteristics. As I step through these, you can really see how the blending behaves. I'll go back one sec and do that again. Quill also has a way to de-intersect those lattice meshes, and it's quite good at it. Here we see the incoming lattice meshes converted straight from a curve groom that an artist made. There are quite a few intersections that can cause trouble with the final look of the bird, as well as in downstream departments like character effects. After just a single iteration of de-intersection, it's already looking much better. But after two or even three, we converge on a really good result. This is far better than the state it was in at the beginning. Here's a quick before and after: on the left you can see the intersected state, and on the right the resolved state after our de-intersection. While you can hopefully see the improvement, I want to point out that the directionality of the groom has been largely left alone. We want to preserve the intention of the artist as much as we can. To summarize, Quill has a solid set of features, like the blending and the de-intersection, among others that I didn't show. Currently, we're using rectangular lattice meshes, but there are plans to use a tighter-fitting solution for better use in downstream departments like character effects. We've also recently extended it to be able to deform arbitrary geometry, so we can use it for things like scales. And in future, there are plans to evaluate Houdini's feather solution a bit more and pick certain elements to extend Quill. We think Houdini is doing some things right; we think Quill is as well.

Next up is Spawn, our scattering and point instancing toolset. Spawn is a special case: it's used both by modeling and by surfacing, at various scales. Spawn is really just a toolset that results in a USD point instancer, and as such is super efficient to display and render. It's probably our simplest toolset in terms of custom behavior, and there's no need to reinvent the wheel: Houdini has a really solid baseline for point cloud manipulation. We simply wrapped up common concepts like clustering or proximity-based manipulation and extended it with anything we felt was lacking. Here are some examples that our modeling department would create. You can see large-scale environments like cities or landscapes, but also structural elements like cobblestones or roof tiles are created with Spawn. Here are some examples of the sort of things our surfacing department would work on. Unsurprisingly, these are things that cover the surface of assets, like moss or pebbles, or even grass if it really needs high fidelity. Here's a quick time-lapse of placing some cobblestones by filling patches based on an object UV. This is a tool one of our technical artists made. Nothing groundbreaking, it's essentially Houdini, but you can see the sort of tooling that we can make for users to actually work with, applying it to a few different meshes.
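For reference, the kind of output Spawn boils down to is a `UsdGeomPointInstancer`, which USD displays and renders very efficiently; here's a minimal sketch of authoring one with the USD Python API. Paths, prototype shapes, and values are illustrative.

```python
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateInMemory()
inst = UsdGeom.PointInstancer.Define(stage, "/cobbles")

# A few unique prototypes, conventionally parented under the instancer.
protos = []
for i in range(3):
    proto = UsdGeom.Cube.Define(stage, f"/cobbles/Prototypes/stone{i}")
    protos.append(proto.GetPath())
inst.CreatePrototypesRel().SetTargets(protos)

# One entry per scattered point: which prototype, where, how oriented.
inst.CreateProtoIndicesAttr([0, 1, 2, 1])
inst.CreatePositionsAttr([Gf.Vec3f(x, 0.0, 0.0) for x in range(4)])
inst.CreateOrientationsAttr([Gf.Quath(1.0)] * 4)  # identity rotations
```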
To sum up, Spawn being an interface to set up a USD point instancer is inherently very efficient. It's a sandbox environment for all manner of use cases, and we simply provide simple wrappers around Houdini's solid solutions. This makes it an easy system and a great gateway toolset for new users. Spawn also has a render-time procedural to make sure things stick to surfaces or curves, and this is useful for deforming elements or animated characters.

Lastly, and I promise it's the last one, Spruce, our component-based vegetation system. Spruce is built on the premise that we can represent a lot of branching structures, like trees or bushes, by reusing a small number of unique prototype elements. You can see the resulting tree on the right and the prototypes it uses on the left. I've colored them uniquely here to help show where each prototype is used. The trunk is usually just used once, but the branches tend to repeat a fair bit. The leaves are obviously all over the place, but in practice, we're just dealing with three unique meshes. Here are some examples of Spruce in our movies. You can see it used for all manner of things, like palms, deciduous or coniferous trees, as well as various types of bushes.

The last time we presented Spruce, in that presentation a few years ago, it was fully in Solaris. Since then, we have completely refactored it to be SOP-based, with great success. The output of the system is no longer multiple point instancers, one per generation. In practice, we found that splitting the instancers by generation caused confusion and was limiting in what we could create. We now output a single point instancer that contains all of our prototypes, and it includes the trunk as well for consistency. Previously, the nodes were only able to work one certain way and didn't give much flexibility for customization. With the move to SOPs, we have opened the floodgates for all manner of geometry manipulation and allow our artists to experiment and come up with novel systems.

Nothing yells "enable your artists" as much as this workflow does for me. This is Tree Doodle, and as you can see, we're drawing curves interactively. Considering Spruce is designed to work on rigid component pieces, this workaround allows us to draw them interactively and immediately use them as components with the rest of the tree. This allows for art-directing your important one-off branches while reusing the bulk of them and still gaining the performance. A huge shout-out goes to our technical modeling lead, Animal Lumpy, for this work. It's just super rewarding seeing a toolset adopted to this extent. We can't wait to integrate this further into Spruce properly to really make it shine. Here's just an example of ivy working the same way. Shaping the canopy is another need productions have. We work in feature animation, so sometimes building photorealistic systems isn't what we need; sometimes we just need to roughly scope out a shape and fill it with leaves. Here you can see an example of what's possible, again with tooling from our technical artists. It's really cool.

So, Spruce, and then we're done, I promise. The SOP rewrite brought with it substantial speed increases. Previously, many of the LOPs utilized SOP networks internally, and those scene translation times compounded quite a lot. It enabled our technical artists to take the toolset and run with it. Spruce is still component-based by design and was originally designed for mid- to background trees.
However, we use it at the hero level more and more in our recent movies. Tree Doodle is an amazing step toward making Spruce even more useful for those productions. We now expose the tree skeleton as a variant on export for downstream usage in other departments, effects especially. And we have an improved keep-alive render procedural. This emulates the swaying of leaves in the wind, and we don't have to cache it to disk; we can just adjust the wind strength, with directionality, even at the shot level, and we're good to go.

Finally, thank you all so much for listening, and if you would like to learn more about our studio, feel free to check us out. You can scan the QR code or visit netflixanimation.com. Thank you. I went a bit over; I don't know if there's time for questions.

>> Yeah, we have time for some questions.

>> Hello. Comparing the container templates and the asset containers: in your presentation, you showed that you can work on multiple fragments at the same time. Is it possible to chain them?

>> Yes. We allow working on one fragment and then chaining it with others to preview what it would look like. Say I'm working on Weave: I want to change the thread count of my shirt, and then I want to change it from black to blue. We don't need to do that in two distinct steps. We can simply have a Weave container, make our changes, and then have a look container after, and they just chain that way. The fragment setup and export steps, where we merge everything back into the stage, ensure that it's basically previewing what it would do as if you hit check-in. So in practice, our users have two or three container nodes, they make their changes, and when they're happy, they check them all in together. Usually it's the same artist working on the same things, but the containerized system allows them to also work separately.

>> Thank you. Next question.

>> Thank you so much for the detailed node breakdowns. My question is regarding the Cerberus validation and the check-in process. Would that be slower for larger scenes, because you're doing a stage traversal?

>> It would, but we think that's a feature. If you have a large scene and you actually make your changes in it, you're bound by the complexity you're working on either way. But what we actually validate is just the stuff that happens inside the container. So if you do chain a lot of nodes and you have turntables and all manner of things, it doesn't care about any of that. It knows the input, it knows the output, and it will see: oh, those are the changes you made. Do we agree with that? No? Please change it.

>> Thank you.

>> Yeah, no worries. Do we have any more questions? All right. Thank you for the great presentation. Thank you very much.