Cool. [Music] Hey everybody, my name is Andreas Leen and I'm working as a visual effects supervisor for Rise Visual Effects Studios. Welcome to another Fallout presentation. For the last 16 months we worked on the Fallout TV series, and today we're going to take a deep dive into some of the challenges we were facing. Yesterday there was already another presentation — anybody who attended it? Okay, no problem: today we will go much deeper into technical and Houdini-related details. And what could be a better starting point than our USD pipeline?

The project Fallout — or Hondao, as its working title was — started in fall 2022 at Rise. By then we had already been working for about two years with our start-to-end USD pipeline. But our Houdini pipeline team, led by Zeon Ola (who is also here today, by the way), never stops improving our workflows. So let's take a look at the latest updates and how they were fundamental to tackling a project the size of Fallout.

The first project we used our USD pipeline for was The Last Voyage of the Demeter, back in 2021. You may have seen Sim and me talking about it at the HIVE two years ago. This and a few other projects used the first iteration of the pipeline. While we already saw huge benefits compared to our legacy one, as expected with such a huge change not everything went completely smoothly. Based on this experience, and on the feedback from the other projects working with the new pipeline, we worked on several updates — and documentation and artist training were also a really crucial part of this. So by the time Fallout started in late 2022, the pipeline was already much more refined.

The first major and very helpful improvement was the update log. This handy tool informs artists when new data is pushed while they are working in a scene.
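The idea behind such an update log can be sketched in a few lines of Python. This is a minimal, hypothetical sketch (not Rise's actual tool): a log records each push with a timestamp, and an open session polls for everything published since it last checked, instead of reloading the stage on every background push.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Push:
    layer: str        # e.g. "layout", "fx", "lighting"
    version: int
    message: str
    timestamp: float

@dataclass
class UpdateLog:
    """Collects pushes so an open session can poll for news instead of auto-reloading."""
    pushes: list = field(default_factory=list)

    def record(self, layer, version, message, timestamp=None):
        self.pushes.append(Push(layer, version, message,
                                time.time() if timestamp is None else timestamp))

    def since(self, last_seen):
        """Everything pushed after `last_seen` -- shown in the artist's UI on demand."""
        return [p for p in self.pushes if p.timestamp > last_seen]

log = UpdateLog()
log.record("layout", 12, "new scatter layer", timestamp=100.0)
log.record("fx", 3, "shockwave cache updated", timestamp=200.0)

scene_opened_at = 150.0
news = log.since(scene_opened_at)   # only the FX push is new for this session
```

Because the session pulls the list on demand, the artist can review the changes and decide to update or pin, which is exactly the workflow described above.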
But you may ask: why is this important if it's a push pipeline anyway? Well, it's not uncommon that, for example, a lighting artist has the principal lighting scene open all day long. He's waiting for updates from layout, another scatter layer here, an FX update there. The moment those updates are pushed, he gets informed through a really clear UI, which you can see here, with more details about the changes. He can update to the latest version in his current session, or even decide to pin it to a previous version. It would also slow him down a lot if the stage were constantly reloading with every single push that others are doing in the background. Another advantage is that you will also see any update messages from since the last time the scene was open. So this tool helps a lot for all departments to really stay on the same page.

Next, we now have a completely USD-based asset library that grows with each asset created during the projects. Here we have a simple UI as well, which works seamlessly with the stage manager and the layout nodes in Solaris. Using this library greatly speeds up the whole assembly workflow. Speaking of libraries, the same goes for our effects library, which is now also USD-ready. Even though most effects setups need a bit more adjustment, it still provides the perfect starting point for a big variety of effects and saves valuable R&D time.

Another handy tool that saves a lot of time and avoids unnecessary back and forth between departments is a shot preview node in our surfacing scenes. Using this useful HDA, it's possible to load the complete setup of any shot of the project. This includes the lights, the camera, and also the position of the asset if it's already animated in the shot or moved around by layout. This allows surfacing and lookdev artists to review their assets under the real shot conditions.
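What such a shot preview node resolves can be sketched as follows. This is a hypothetical sketch, not the actual HDA, and all paths and layer names are invented: given a shot code, gather the published shot layers (lights, camera, layout with the asset's animated position) into an ordered sublayer stack that sits above the asset being surfaced, so the shot opinions win.

```python
# Hypothetical published-layer registry; real pipelines would query a database.
PUBLISHED = {
    "sq010_sh0040": {
        "lights": "/publish/sq010_sh0040/lighting/v007.usd",
        "camera": "/publish/sq010_sh0040/camera/v003.usd",
        "layout": "/publish/sq010_sh0040/layout/v012.usd",
    },
}

def shot_preview_stack(shot, asset_layer):
    """Order shot layers before the asset layer so shot opinions
    (camera, lights, moved asset position) override the asset defaults,
    mirroring USD sublayer strength (earlier sublayers are stronger)."""
    layers = PUBLISHED.get(shot, {})
    order = ["lights", "camera", "layout"]
    return [layers[k] for k in order if k in layers] + [asset_layer]

stack = shot_preview_stack("sq010_sh0040", "/assets/vertibird/vertibird.usd")
```

With a stack like this, the lookdev artist renders against real shot lighting and camera without ever opening the lighting scene.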
So they can do all the necessary adjustments even before pushing once. There's more — enhanced template scenes, a site-wide material library which is even configurable per project, and geometry LODs including lidar and proxy — aside from many other optimizations under the hood. Here's one more I want to show you. With huge environments and sets, we noticed a slowdown while rendering because of too much texture data that needs to be streamed. Karma does use the embedded mipmap levels, but apparently it sometimes uses a higher level than necessary. Back in the Last Voyage of the Demeter days, we solved this by creating different texture LOD files for each and every texture, which was quite an overhead. Also, switching the texture map was not optimal performance-wise in the stage. We talked to SideFX about this problem, and as usual it didn't take them long to implement a solution. Now it's possible to use primvars that influence the texture blur, and with it the mipmap level that gets loaded. Using this approach we avoid writing those texture LOD files, and it's also way faster in the node graph as well.

Here on the right side is a scene where we are using this texture LOD. It speeds up the startup time tremendously and saves memory, ensuring efficient workflows for layout and lighting. If you need fast feedback, you can use a high blur value in the local session, and in combination with a preview switch, the farm render will still use the full texture quality. We also automatically assign a higher texture blur value to matte and phantom objects, resulting in even more performance gains in larger scenes. All of these great features were already in place while we were working on Fallout, which was a Houdini 19.5 project.
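The relationship between a blur value and the mipmap level that gets streamed can be illustrated with a toy function. To be clear, this is invented math for illustration only — Karma's internal mip selection is not public here — but it captures the idea: the blur primvar shrinks the effective resolution you need, and the loader picks the matching mip level instead of level 0.

```python
import math

def mip_level(base_res, blur):
    """Toy mip-level picker (illustrative, not Karma's actual logic).

    `blur` is treated as a fraction of the texture width that is blurred
    away; we load the level whose resolution roughly matches what is left.
    blur <= 0 keeps the full-resolution level 0.
    """
    if blur <= 0:
        return 0
    target_res = max(1.0, base_res * (1.0 - min(blur, 0.99)))
    level = int(math.log2(base_res / target_res))
    max_level = int(math.log2(base_res))
    return min(level, max_level)

# A 4K texture with a high blur primvar streams a much smaller mip level,
# which is where the startup-time and memory savings come from:
local_preview = mip_level(4096, 0.9)   # coarse level for fast local feedback
farm_render = mip_level(4096, 0.0)     # full quality on the farm
```

This also shows why the preview/farm split works: the same asset, with only a primvar difference, streams very different amounts of texture data.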
All upcoming projects, though, will be using Houdini 20, or soon 20.5, at Rise, because with H20 there are even more optimizations available from our pipeline team, aside from all the cool new features SideFX added. We refactored the render selection workflow, which speeds up the evaluation in complex lighting scenes a lot. Another big update performance-wise is that we now store all materials with an instanceable material prim. While this sounds like a minor thing, it has a really huge impact on scene performance: in our biggest scenes it reduces the number of primitives that the scene graph has to manage, which were slowing down the whole network with every merge node. The performance boost is about 5x in this case. I'm really looking forward to the next project using all those new features, and also to diving even further into Karma XPU rendering, which looks super promising. If you have any questions about USD workflows, feel free to ask at the end. But today's presentation is also about a past project called Fallout. Before we go into more details, let's have a look at the showreel.

[Showreel plays] [Applause]

Thank you. Cool. So overall, we worked on about 400 shots, starting as mentioned in fall 2022 and delivering the last shot in March this year. So Fallout was in production at Rise for about 18 months, which is pretty long. During this period, about 150 artists worked on the project at Rise, with roughly 60 at a time. Houdini was used for scene assembly, layout, asset lookdev and shading, FX, shot lighting in Solaris, and rendering with Karma. Our shots were spread across all the episodes, with a focus on the first one, which has the LA nuke sequence, and the last one, episode 8, the final battle.
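Why one instanceable material prim makes such a difference can be shown with a back-of-the-envelope prim count. The numbers below are made up for illustration; the point is the asymptotic shape: without instancing, every asset carries its own copy of the material subtree, so the scene graph grows with assets × material prims, while with instancing it grows with assets + material prims.

```python
# Toy illustration (numbers invented) of the instanceable-material-prim win:
# the scene graph manages one shared prototype instead of a copy per asset.

def prim_count(num_assets, prims_per_material, instanced):
    geo_prims = num_assets            # one geo prim per asset, for simplicity
    if instanced:
        # one shared prototype subtree, plus one lightweight instance per asset
        return geo_prims + prims_per_material + num_assets
    return geo_prims + prims_per_material * num_assets

naive = prim_count(num_assets=10_000, prims_per_material=8, instanced=False)
shared = prim_count(num_assets=10_000, prims_per_material=8, instanced=True)
```

With these invented numbers the reduction is already around 4–5x, in the same ballpark as the speedup quoted in the talk, and every merge node downstream touches proportionally fewer prims.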
We touched more than 30 different sequences, so there was a huge variety of tasks, which made it very interesting but also challenging, as we had to tackle many of them in parallel. I could probably talk for a few hours, but let's focus on a few highlights now.

First one: the atomic blast in the retrofuturistic city of LA. This was also the sequence we started the project with in late 2022. As one of the most complex sequences, it took us a few months from start to delivering the really final shots. I would say it was the most challenging sequence overall because of the combination of a complex environment build — we had to recreate the retrofuturistic version of LA — with very long, large-scale simulations. To get started, production provided us with a great concept to use as a base for the look and feel of the city in the establishing shot. In general, the topography is close to the real LA, so we used several procedural approaches based on actual map data to generate a low-res city first, because the FX sims needed to interact with all the individual buildings, including destruction. So we could not just rely on traditional matte painting approaches; in creating the city environment, Houdini played a major role. While the base city was quite low resolution, our team created a procedural scattering setup to populate the buildings with hundreds of thousands of small details such as rooftop props, balconies, billboards, and so on. We did a test on a couple of blocks first, and once we were happy with all the details, it was easy to roll it out to the whole city. Similar workflows were used to create the city vegetation: a large number of animated SpeedTree trees and bushes were scattered throughout the whole city. As the scattering system is instancing the asset references, performance was no issue at all, while we could still add color variations to each instance.
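Per-instance color variation on instanced references usually comes down to a stable hash of the instance id, so that a tint can vary without breaking instancing and without reshuffling between cooks. A minimal sketch of that idea (not Rise's setup — in Solaris this would typically land in a primvar the shader reads):

```python
import hashlib

def instance_tint(instance_id, variation=0.15):
    """Deterministic per-instance color multiplier around neutral (1, 1, 1).

    A stable hash of the instance id drives a small offset per channel, so
    the same instance always gets the same tint across re-cooks and renders.
    """
    h = int(hashlib.md5(str(instance_id).encode()).hexdigest(), 16)
    # three pseudo-random floats in [0, 1] carved out of the hash bits
    r, g, b = [((h >> s) & 0xFFFF) / 0xFFFF for s in (0, 16, 32)]
    return tuple(1.0 + (c - 0.5) * 2 * variation for c in (r, g, b))

tint = instance_tint(42)   # same id -> same tint, every cook
```

Because only a primvar differs per instance, the renderer still shares all the heavy geometry, which is why the performance "was no issue at all" even city-wide.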
Now that the city had been built up, our FX team put a lot of effort into razing it to the ground. This was the second challenge in the sequence, besides the environment, as the whole process — from seeing the first indication of the mushroom until the atomic cloud is rolling over the city — builds up over more than 10 shots. This required quite large-scale simulations, and we put a lot of work into getting really the perfect timing and placement for each shot. For example, the shot in which the shock wave hits the mansion consists of more than 20 different FX layers, all simulated in Houdini. All of these are great, but one layer is particularly interesting: the simulated vegetation layer. All these plants were created in SpeedTree, and we had to push the detail and the number of leaves really high to get that photorealistic look. But to get them moving, we tried using the SpeedTree wind, which works fine for simple wind motion but is absolutely not controllable and art-directable. So our lead FX TD, Yona Zongfi, came up with a very cool solution. He built a setup which takes the SpeedTree skeleton, creates a KineFX rig from it with a base animation, and then runs a Vellum hair sim on top to add all those nice dynamics.

But let's break it down in a bit more detail. The first step was to extract and clean the skeleton exported from SpeedTree. Unfortunately, when exporting cinema-quality trees from SpeedTree, the cleaning process is not trivial, as the skeleton does not work out of the box in many cases. To create a proper workflow, we adjusted our SpeedTree ingestor tool so that the cleaned skeleton gets written automatically into the guide purpose of the USD asset. The next step was to do proper weighting using procedural capture approaches such as the biharmonic capture nodes. Depending on the tree complexity, this unfortunately needs some manual tweaking to make sure all weights are set correctly.
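One typical cleanup problem with exported tree skeletons is degenerate joints. Here is a minimal sketch of that kind of cleanup, under the assumption of a flat parent-index skeleton ordered parents-first (this is an illustration of the idea, not the actual ingestor): zero-length joints are collapsed onto their parent and their children are reparented to the survivor.

```python
def clean_skeleton(positions, parents, eps=1e-6):
    """positions: list of (x, y, z) per joint; parents: parent index per joint
    (-1 = root). Assumes parents appear before their children in the arrays."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    keep = []
    remap = {}   # old joint index -> surviving joint index
    for i, p in enumerate(parents):
        if p != -1 and dist2(positions[i], positions[p]) < eps * eps:
            # zero-length joint: collapse onto its (already remapped) parent
            remap[i] = remap.get(p, p)
        else:
            remap[i] = i
            keep.append(i)

    new_index = {old: n for n, old in enumerate(keep)}
    new_pos = [positions[i] for i in keep]
    new_par = [new_index[remap[parents[i]]] if parents[i] != -1 else -1
               for i in keep]
    return new_pos, new_par
```

Real SpeedTree exports need more than this (merging near-duplicate chains, fixing orientations), which is why, as noted above, manual tweaking is still sometimes required.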
Also, the calculations can take a while, but using a dependency network, at least this step can just be sent to the farm. Now it was ready to be used with KineFX. This awesome set of nodes is the basis of this whole approach, as you will see in a moment. To match the exact timing of the explosion, the rig pose node was also used to set a cool final state for the tree. This basic animation was then applied to all the trees and used as a driver for the Vellum simulation. For a more natural motion of all branches, it helps a lot to use Vellum, but it's still directable, as the base animation is defined before the actual simulation. It is also possible to set individual stiffness settings for all the different branch levels, giving us really full control here. Since we are taking full advantage of the skeleton blend node in KineFX, we can mix in as much of the simulated motion as we want. In the end, the overall Vellum sim is so fast and lightweight — because it's basically just a curve or hair sim — that all trees can be simulated at once with this approach. Also, layering a secondary simulation layer is easily possible. If you used a normal point deform approach here with these high-res meshes, it would be super slow. But as it's using the joint deform node and all weights are already captured, it's possible to process multiple high-res trees at once. For the leaves, we use the centroids. Those are basically just points, and the weights get transferred from the branches to those points. As you may have seen, individual leaves here are also torn off. To achieve this, a basic particle sim is used, modifying the points of the leaves before the actual leaves are copied back onto the points. A simplified version of the whole setup was also used for all the trees in the LA city, just without the Vellum sim, as you wouldn't see those tiny details from that distance.
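The skeleton-blend idea — mixing the art-directed base animation with the simulated motion, by a different amount per branch level — reduces to a per-joint interpolation. A sketch with positions only (the real KineFX skeleton blend node blends full transforms):

```python
def blend_skeletons(base, simmed, levels, blend_per_level):
    """base, simmed: list of (x, y, z) per joint; levels: branch level per
    joint; blend_per_level: 0 = pure base animation, 1 = pure simulation."""
    out = []
    for pb, ps, lvl in zip(base, simmed, levels):
        t = blend_per_level[lvl]
        out.append(tuple(b + (s - b) * t for b, s in zip(pb, ps)))
    return out

# Trunk (level 0) stays on the base anim, twigs (level 2) go fully simulated:
mixed = blend_skeletons(
    base=[(0, 0, 0), (0, 1, 0), (0, 2, 0)],
    simmed=[(1, 0, 0), (1, 1, 0), (1, 2, 0)],
    levels=[0, 1, 2],
    blend_per_level={0: 0.0, 1: 0.5, 2: 1.0},
)
```

This is what keeps the result directable: the timing lives in the base animation, and the sim only contributes as much as each branch level is allowed to.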
Anyway, another interesting topic was the concept for the mushroom cloud. You will find several reference videos out there of atomic explosions and mushroom clouds, but Jonathan's idea for the sequence was a little different. In the beginning, we should see the effect just in the reflections of the window, and it was important to the story that it was not clear at first that it is a nuclear explosion — it could also just be a big fire cloud. In the references, we see the typical mushroom shape pretty early on, but that didn't work for us. We first had to create a normal version of just a smoke cloud and then come up with a cool way to reveal the mushroom shape. The tricky part was that it shouldn't look like just two clouds blended together. Key here was to balance the different velocities in the FX simulation to get a realistic advection connecting the two clouds, as we can see here on the right.

The final shot of the sequence is a wide aerial shot based on a stock-footage drone plate, but we ended up replacing a lot of it. Like in the shot I showed earlier, we created the same FX layer package consisting of the mushroom and all the destruction layers — just times three, all interacting with the proxy city and hero buildings. So it was not possible to simply reuse the same simulation again and again. On the screen-right side, we still see the mushroom cloud from the beginning of the sequence. This was probably the longest and biggest FX sim of the whole show. One iteration of this explosion with all its layers already consumed a couple of terabytes on the server — so believe me, we were definitely not the server's favorite project. The smaller tasks here were the hill replacement and Cooper riding away with Janey, not to forget the hair simulation for the horse tail. If you haven't noticed it, that's alright.

Okay, next let's have a look at some of our hero assets. The most important one is certainly the Vertibird, or as we just call it, the Verti.
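The advection that the reveal hinges on can be illustrated in one dimension. This is a generic textbook sketch, not the production sim: in semi-Lagrangian advection (the standard unconditionally stable scheme in smoke solvers), each density sample traces backwards along the velocity field and pulls its new value from where it "came from" — so carefully balanced velocities really do carry one cloud's material into the other.

```python
def advect_1d(density, velocity, dt, dx=1.0):
    """One semi-Lagrangian advection step on a 1D grid."""
    n = len(density)
    out = []
    for i in range(n):
        # trace back to the source position of this sample
        src = i - velocity[i] * dt / dx
        src = max(0.0, min(n - 1.0, src))   # clamp to the grid
        i0 = int(src)
        i1 = min(i0 + 1, n - 1)
        t = src - i0
        out.append(density[i0] * (1 - t) + density[i1] * t)  # linear interp
    return out

# A density blob in a uniform rightward flow shifts right by one cell per step:
blob = [0.0, 1.0, 0.0, 0.0]
moved = advect_1d(blob, velocity=[1.0] * 4, dt=1.0)
```

In 3D, the same backward trace runs per voxel; shaping the velocity field between the smoke cloud and the emerging mushroom is what makes the transition read as one continuous phenomenon rather than a blend.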
It gets featured in several shots and is one of the — if not the — Fallout signature vehicles. For modeling we used Maya, for the detailing of the metal parts ZBrush, and then texturing happened in Mari. In addition, we used a scatter setup to add thousands of bolts attached to the panels in an efficient way, so that no modeling artist had to deal with them. The layering concept within the USD pipeline helped a lot to keep everything lightweight. Those bolts and other details also never showed up in rigging, as the scatter layer is added afterwards and just moves with the parent transform when the Vertibird is animated. Last but not least, the shading was done in Houdini using MaterialX, with Karma as the render engine, just like all our show assets. And as everything is powered by atomic energy and not fuel, we added some — but unfortunately very subtle — thruster effects.

While we knew we would see the Vertibird in many close-up shots, the challenge with the dirigible, the zeppelin-like airship, was a different one: to sell its huge scale. This dirigible can even carry six Vertis and is used as a carrier for the Brotherhood of Steel. Because of its size and detail, the dirigible — a difficult word, sorry — is not only one single asset in our pipeline. Instead, it's an assembly of several individual assets. Basically, it's like a set, just one that can be rigged and animated as well. The advantage is way better performance, and several artists can work at once on all those individual sub-assets. Our USD pipeline offers big flexibility, supporting assembly rigs as well. Unfortunately, the dirigible is not featured a lot in the first season, but let's see — maybe its big moment is coming in the second one, right?

For me, one of the most interesting aspects of the show were all the post-apocalyptic set extensions and wastelands: destroyed cities, vast deserts, seas of dead trees, and huge craters.
Speaking of craters: in episode 5, Lucy and Max come across the remains of the town Shady Sands, which is actually a large bomb crater with buildings around it. This crater was not caused by the atomic bombs we see in the beginning of the series; instead, another bombing happened here. To create this huge set, we could take full advantage of our start-to-end USD workflow at Rise. While the layout for the buildings around the crater was still in progress, we relied on the height field tools in Houdini to create the crater itself with all its detail. At the same time, several procedural approaches were used to add scatter layers with bigger elements like trees around the edge and tons of debris inside the crater. While the tall buildings were modeled, textured, and shaded with traditional approaches, we created a building generator for all the small ones. This was a digital asset in Houdini and gave the layout artists the option to quickly add a big variety of buildings to a large area.

In the last episode, we even have a flashback scene with the same crater just a few moments after this bombing. So a bit of rework was needed: a more burnt crater texture, collapsing buildings, and tons of other layers. To handle the amount of FX needed for this shot, we created a library with different fire and smoke caches. All of these were written out as individual USD assets, some with up to five variants, so we could use our normal layout and scatter tools in Houdini to distribute them. Even in this huge scene, the layout was still interactive and fun to work with, thanks to working with proxies for every FX cache and using optimizations like the texture blur I mentioned in the beginning. As the shot has pretty dramatic lighting, it was important to be able to quickly check how it looks rendered with proper materials and lights during the layout process. Our load shot node manages the USD layer stack within the scene.
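Distributing a cache library with variants boils down to: scatter points, then pick a variant per point from a seed, so the layout stays stable across re-cooks. A toy sketch of that pattern (the variant names are invented, and in USD the pick would set a variant selection on each reference):

```python
import random

# Invented variant set standing in for the fire/smoke USD assets:
FIRE_VARIANTS = ["fire_small", "fire_medium", "fire_large",
                 "smoke_column", "embers"]

def assign_variants(num_points, seed=0):
    """Seeded per-point variant picks: same seed, same layout every cook."""
    rng = random.Random(seed)
    return [rng.choice(FIRE_VARIANTS) for _ in range(num_points)]

layout_a = assign_variants(1000, seed=7)
layout_b = assign_variants(1000, seed=7)   # identical: stable across re-cooks
```

Because each scattered point only carries a reference plus a variant selection, the viewport shows cheap proxies while the full caches are resolved only at render time.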
While the current layer we are working on is muted, it keeps the layout, lighting, and camera layers available. So it's always possible, just by switching from GL to Karma, to get an accurate preview of how it will look when it gets pushed and later rendered by the lighting artist.

Rendering-wise, the shot was a bit of a challenge, especially with these volumes lit by all the fires. Karma render optimization is one of my favorite topics, but as our time is a bit limited today, we will just look at some important settings. Especially if you have volumes in your scene, the volume step rate is probably the most important parameter. You should lower it until you really see a loss of quality in your volume at the final render resolution — and I'm not talking about noise in this case, I'm really talking about not seeing detail anymore. Especially with high-res volumes and the default setting of 0.25, far too many voxels are getting sampled. This slows down the render a lot without adding any detail. Second, I would make sure that all emissive light sources are properly tagged as geometry lights using the render geometry settings node; then Karma will sample those much more efficiently. In this scene, we did not use the original fire volumes as the light source. Instead, we combined them all and resampled them to a lower resolution. So there's no difference in the visible light contribution later on, but much faster rendering. And one last thing: if you have a lot of intersecting volumes in your scene, you should think about combining them into one big VDB file. This will speed up the rendering process quite a bit as well. Using all of those techniques, the render times were still reasonable at 4K resolution. For an upcoming project, we will probably use Karma XPU, as it's very fast with volumes. The only thing to consider here is, for sure, the available memory. But enough about those large-scale simulations for now. Let's take a look at the other extreme.
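Resampling emissive volumes to a lower resolution before using them as light sources can be sketched as a simple average-pool. A toy dense-grid version (production would operate on VDBs, not nested Python lists):

```python
def downsample_half(grid):
    """grid: nested [z][y][x] floats with even dimensions.
    Averages each 2x2x2 block into one voxel at half resolution."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    out = [[[0.0] * (nx // 2) for _ in range(ny // 2)] for _ in range(nz // 2)]
    for z in range(0, nz, 2):
        for y in range(0, ny, 2):
            for x in range(0, nx, 2):
                s = sum(grid[z + dz][y + dy][x + dx]
                        for dz in (0, 1) for dy in (0, 1) for dx in (0, 1))
                out[z // 2][y // 2][x // 2] = s / 8.0
    return out
```

Averaging preserves the overall emission per region (each coarse voxel covers 8x the volume), which is why the visible light contribution stays essentially the same while light sampling gets much cheaper.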
In episode 3, a group of criminals has a rather unfortunate encounter with Max and his recently acquired power armor. This "head smash" scene, as we call it, consists of about four or five shots, with the one here on the right being absolutely the hero shot. In the beginning I said that the LA nuke sequence was probably the most challenging one, but I'm not 100% sure that's completely true, because this head smash shot kept about four FX artists quite busy for a while. Let's take a look at why that was the case. We started with practical elements that were shot, and in the beginning the idea was to just enhance them with some blood, some remains in the hand, and the CG arm, for sure. That would have been quite straightforward, but we soon noticed this wouldn't have enough impact to sell the shot. So we added some skull crushing and fracturing, but still with the idea of using a lot of blood to cover up the critical areas, so we wouldn't need any complex skin simulation or anything like that. Next, the client wanted to bring back more of the plate element they shot with the puppet head exploding — especially the ears — and they wanted to reduce the blood a lot. So we did some tests in compositing, and keeping more of the plate actually worked better than expected. But with less blood, it became clear that we would have to replace a way bigger part of the head than we originally planned. Also, they wanted a less straight cut: more ripping skin and visible fat tissue and goo. After some discussions with our VFX sup, we decided: all in. From then on, four FX artists worked on this effect in parallel, as the deadline was also approaching. The USD workflow helped us a lot here with its flexible layering, so all artists could collaborate efficiently.

Let's have a look at the layers. The base was, for sure, the animation. It was important to match the plate as closely as possible, but also to provide a good starting point for the simulations.
The next layer was the skull fracturing. At first we tried it with an RBD sim, but that wasn't really working out — it was just not controllable enough, especially since the client wanted to see more of the original plate. We could not just switch from one moment to the next to a completely crushed head. So a cool fracturing with some procedural animation on top, creating areas for the skin to tear later on, was the way to go. With the skull in place, the next layer is the simulated skin, tissue, and fat, which adds a lot of the disgusting feeling we were looking for. The skin and tearing is a Vellum simulation, which was actually very challenging to control, because in some areas we needed this dynamic but not too extreme tearing, while in others it should just move a bit — but without looking like rubber. This took us quite a few iterations. Next are several blood layers. In the beginning, Vellum fluids were used for this layer, but later the TD switched to FLIP. Then we created a few wet maps — or in this case, blood maps. Some are just static, where the blood splash hits the arm for example, to better connect the parts we were using from the plate. We also needed a dynamic pass we called the gushing blood. This was also a FLIP sim colliding with the skin to get a cool interaction. The key here was to balance the stickiness and the viscosity so that it did not look like a waterfall. Finally, there was a bit of dripping blood where the fingers of the power armor began to press on the head. Last but not least, we added a groom or hair layer, but this one did not require any simulation — we just attached it to the individual skin chunks. With all those layers rendered, compositing did a really fantastic job of combining and integrating them with the actual plate, especially since the lower part of the CG head did not really match the actor in the plate at all.
It was necessary to reproject the plate onto the CG face again, while making sure that all the blood and wounds would still be added on top, to give compositing as much flexibility as possible. The render setup was quite complex as well, with everything being rendered in about 20 layers, I think. So this shot was a really tough one, but in the end I'm quite happy with how it turned out. Ah, and then there were a few more shots showing the same thing from a different angle, and how the remains drop down. But with everything we learned from the first one, these were more straightforward — at least a little bit.

Next, I want to jump into the final episode and the epic battle. This takes place at the Griffith Observatory, which is located high above the destroyed city of LA. Since we had several shots from all different angles and lighting situations, we created a detailed asset of the exterior and interior of the observatory. The plaza extension around the observatory is partly based on a set build, of which we received the lidar scan. But as there were also several wide establishing shots on our list, we had to create a complete CG version of it, including all the surrounding hills — as the real ones were not really post-apocalyptic enough. Before we jump right into the battle, let's take a look at this establishing shot here, which is again based on a drone plate. We kept a little bit of it, but again, it was super valuable to have a proper reference that could be matched. For sure, we needed to get rid of the modern city of LA, which we replaced with a wasteland. Then we added several ruins, destroyed remains of downtown in the far distance, and of course the Griffith Observatory — another use case for our height field toolset and building generators, even though some of the base topography was extracted from online map services.
As it's such a large area, it was even necessary to split the height fields into different chunks which could be processed separately, to have enough resolution for adding all the small features. Key here was also to add more and more scatter layers with burnt trees, destroyed cars, and more details, until they combined into a realistic landscape. Last but not least, in this shot it was important to make the plaza feel alive. So we needed a crowd, also to match later scenes that were shot with a real crowd: working people, running children, soldiers patrolling. The asset team modeled, textured, and shaded several digi-doubles with different costumes. We did a mocap session with our Xsens suits, and then FX added a cloth simulation to every agent and clip. As we have about 100 characters and not 6,000, it's a simplified crowd system that we use here — or, as we call it, the cache placer crowd: pre-cached animation variations which get scattered, and some placed by hand. Again, USD offers great flexibility here to quickly switch the animation, costume, and color variation for every agent while maintaining the simulated cloth. What I like about crowds is that something unexpected always happens. Back when we did the crowd system for the Babylon Berlin TV series, there was always one crowd character — an older guy losing his pants, for example — and finding him in every shot became a running gag. Somehow it's always related to FX and the cloth simulation. Same on Fallout. But it was definitely a highlight in dailies.

Now let's jump right into the battle. The Brotherhood starts to attack the observatory with Vertibirds in this VFX-heavy sequence. Even without any VFX, the plate already looked super cool and moody, especially the drone footage here with the low sun. For sure, this kind of backlit situation mostly helps in terms of integration, as we get all those interesting silhouettes.
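The "cache placer" idea can be sketched as: a seeded scatter of agents, each assigned a pre-cached animation clip and costume variant, with hand-placed agents layered on top. All clip and costume names here are invented for illustration; the real system swaps USD variants while keeping the simulated cloth per clip.

```python
import random

# Invented variant pools standing in for the pre-cached agent library:
CLIPS = ["walk", "run_child", "patrol", "work_idle"]
COSTUMES = ["settler_a", "settler_b", "soldier"]

def place_crowd(num_agents, seed, hand_placed=()):
    """Deterministic scatter plus art-directed overrides appended last."""
    rng = random.Random(seed)
    agents = [
        {"pos": (rng.uniform(-50, 50), 0.0, rng.uniform(-50, 50)),
         "clip": rng.choice(CLIPS),
         "costume": rng.choice(COSTUMES)}
        for _ in range(num_agents)
    ]
    agents.extend(hand_placed)   # hand-placed agents are simply added on top
    return agents

crowd = place_crowd(
    100, seed=4,
    hand_placed=[{"pos": (0.0, 0.0, 0.0), "clip": "patrol",
                  "costume": "soldier"}],
)
```

With roughly 100 agents, this kind of flat list is entirely manageable without a full crowd solver, which matches the "simplified crowd system" described above.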
That's why we also adjusted the city layout on a per-shot basis, to recreate the original look and realism of the plate as closely as possible. For this project, we did most of the layout in Houdini, and using our USD multi-shot workflow it was quite straightforward to do those adjustments fast and from within one scene. A shot switch here and there, a quick push, and the changes are immediately available in the scene of the lighting artist.

Next is one of my favorite shots — actually two shots — showing the explosion and crash of the Vertibird. First of all, the way the plate was shot feels super intense, especially the fact that it does not focus entirely on the explosion. Even hiding it for a while makes it feel even more real, in my opinion. Since I have an FX background, it was super exciting to finally blow up this Vertibird and have it crash. Our FX supervisor, Christian Cook, and his team did a really great job here; checking tons of reference for aerial explosions and the smoke trail was key. There's a rigid body simulation at the base, with Vellum on top for the deforming metal parts. So we had to make sure to build the whole Vertibird model FX-ready. But since most of it is covered by the actual explosion anyway, we spent most of our time refining the look of that. Aside from having enough detail in the sim, the most crucial part was tweaking the shader to achieve this realistic result. The few frames when the burning Vertibird is hidden we could actually use to combine two simulations: the big explosion and a separate one for the trail. As the crashing Vertibird was moving quite fast, we used a slowed-down version of the animation, so that it was easier to maintain the detail in the sim. Also, trail, fire, and ground explosion were split into separate simulations to keep the iteration times as low as possible. I mean, in comparison to the FX caches for the LA nuke sequence, all of those were still fast.
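Retiming an animation to a slower speed for the sim can be done by resampling: evaluate the original per-frame samples at fractional frames via linear interpolation, simulate at the stretched rate, and speed the result back up afterwards. A minimal sketch of the resampling step:

```python
def retime(samples, speed):
    """samples: one value per frame; speed < 1 slows the motion down.
    Returns a longer sequence sampled at `speed`-sized frame steps."""
    out = []
    f = 0.0
    last = len(samples) - 1
    while f <= last:
        i = int(f)
        t = f - i
        j = min(i + 1, last)
        out.append(samples[i] * (1 - t) + samples[j] * t)  # linear interp
        f += speed
    return out

slow = retime([0.0, 1.0, 2.0], speed=0.5)   # twice as many in-betweens
```

Slowing the collision geometry down this way means the fast-moving Vertibird covers fewer voxels per substep, which is why the sim can hold on to fine detail in the smoke trail.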
It's just great to see how all those elements transformed this peaceful and quiet plate into such an action-packed shot. The rest of the sequence was mostly about adding more explosions, head smashes here and there, another burning Vertibird, set extensions, and releasing a lot of energy. I could keep going, but it's cool that I could give you some more technical insights into some of the VFX our awesome team at Rise created for the Fallout TV series. Rise also has a recruiting booth here at FMX, and we are always looking for talented artists, so make sure to stop by. If you're still looking for opportunities to become a Houdini artist, or a 3D artist in general, I can recommend taking a close look at the FH Salzburg in Austria. Yunasfry, who did a lot of the FX work here as well, and I are teaching the master students there. So check it out. If you have any questions now, feel free to ask. And as mentioned, the masterminds behind our USD pipeline are also here today, so don't be shy with in-depth tech questions. I will hand over to them. Thank you very much. [Applause]

>> Any questions? Don't be shy. There you go.
>> This asset library — is this only a Rise-specific thing, or is there any way we can get this going?
>> No, I think that's just Rise-specific.
>> Okay. Other questions?
>> First up, I love the presentation. Very well done. With all the shots you showed, what was the shot that took the longest to render, with all the USD and everything?
>> Yeah, I think the longest shot to render was this big explosion at the end, because we are going with the camera inside the volume, and that's always difficult. So there as well we had to use resampled versions of the actual explosion, or even go for 3K rendering instead of 4K — never said that — to make it work in the end. But yeah, there's always denoising, right?
>> Which shot are you most proud of? Like, was there a challenge where you broke through and it looked amazing? >> Yeah, I think actually also the final battle, and especially, this isn't featured much in the presentation, the scenes where this big blast happens in the container with the device. There's an energy-releasing effect that came to us pretty late, and if you have seen the series, I think you know what I'm talking about. So that was definitely quite interesting, and a big challenge to do. >> Can you tell us a little bit more about converting the SpeedTree rig to the KineFX rig? Did you also do the same for the leaves? >> Yeah, that's what I meant about layering different Vellum simulations. For the normal tree leaves it wasn't necessary, but we also had those plants with the longer leaves, and there it was necessary to add another Vellum sim just for those leaves to get a nice movement. >> Did you make the conversion manually on a bunch of your library trees and stock them in your library, or did you manage to automate the process? >> Yeah, I mean, we have the SpeedTree ingester tool, which in theory takes care of this, but it's difficult to create a really good skeleton out of those SpeedTrees. So I think we have a basic version of it in there, and you can just run it and see if it works. But as I said, especially for the more complex trees, manual work was necessary. I guess I showed the network, right? It was quite big. >> Other questions? >> Hello, thanks for the talk, it was really good. I just wanted to ask about the texture LODs. Were you saying that basically you only make the high-res textures, and then you're using something in Houdini to reduce them, so you're not having to pump out a whole load of LODs?
>> Yeah, correct. We're using the texture blur that's basically available on the node which loads the texture. So the texture gets blurred, and because of this it loads a lower mipmap level. The mipmaps are already embedded in the texture, and by blurring the texture we're basically forcing Karma to load a lower mipmap level. >> Cool, that's super useful. >> Yeah, I can just recommend it. It's built in, right? It's nothing that we implemented ourselves; you can just access it. >> I would have a question. You mentioned that you guys were using ZBrush. What was the workflow for getting files from ZBrush to Houdini in your case? Also USD, or did you use other file formats? >> No, I think for ZBrush it's not USD. He's shaking his head, so no. I think there it's probably Alembic or even OBJ. Unfortunately, ZBrush is also running on Windows, right? So that tool is a little bit separate from the others. I mean, we do have pipeline tooling for it as well, but it's not as flawless as the exchange with Maya, for example. >> Thank you very much. >> If I'm not mistaken, if you mark a primitive instanceable on the asset, as you mentioned you do for materials, you can't un-instance it later when you load it. Were you maybe using variants to set the prim instanceable or not, if you wanted to make one particular material different from the others? >> Good question. Maybe Sim can answer; can you pass on the mic? That's so in-depth, tech-wise, that I'd rather hand over the mic than say something wrong here. >> Yeah, editing instanceable materials or assets was something that we needed to solve initially. That was part of the long R&D phase that we did.
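To give a rough sense of why the blur trick works: a texture lookup pre-filtered over a radius of r texels carries no detail finer than about 2r texels, so a renderer that honors the filter width can drop to roughly mip level log2(2r) instead of level 0. A minimal back-of-the-envelope sketch (the function and the formula are illustrative, not part of Rise's tooling or the Karma API):

```python
import math

def mip_level_for_blur(blur_radius_px: float) -> int:
    """Approximate mip level a renderer can drop to when a texture
    lookup is pre-blurred by `blur_radius_px` texels.

    Level 0 is the full-resolution image; each level halves resolution.
    """
    if blur_radius_px <= 0.5:
        return 0  # blur smaller than a texel: full resolution still needed
    return int(math.floor(math.log2(2.0 * blur_radius_px)))

# An 8-texel blur lets the renderer fetch mip level 4, i.e. the
# 1/16-resolution image, without any extra LOD textures on disk.
print(mip_level_for_blur(8.0))
```

The point of the trick is exactly what the answer describes: the low-resolution levels already exist inside the mipmapped texture file, so no separate LOD textures need to be published.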
>> And what we do, and what is an option with USD, is that you can apply an inherits composition arc to an asset. This is the only way to actually override settings within an instanceable prim. So it is possible. If you just apply the inherits arc to, I don't know, all of your 100 instanceable assets, you still get just one implicit prototype. But if you apply different settings, then behind the scenes it also creates a new prototype. So it is very efficient if you work with it efficiently, and you don't have to de-instance. >> Couldn't have said it better myself. Any other questions? >> Hi. For projects of this size, how do you distribute work to junior or mid-level FX artists? What kind of tasks would they get assigned? Or even an intern, for example, if that was the case on the project. Thank you. >> Actually, we had interns working on the show, effects interns as well, who did a really great job. And at Rise, I mean, we are not hundreds of people working on a project, especially not in the effects department. How many were there, like eight or so, max? So even if you're joining as an intern and you do cool stuff, we definitely try to assign you a real task. For example, you see a lot of rotor wash stuff, where the Vertibird kicks up the dust from the ground; that was a task for a junior or an intern. Then we had these Vellum simulations where this guy is flying out of the helicopter and getting ripped apart; I think that was also something a junior did. So yeah, we try to find interesting tasks for every level. >> Thank you. >> If you're a young person interested in FX, you should work at Rise. They're the masters. Work at Rise. Any other questions? Thanks for the talk.
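The pattern described in that answer can be sketched as a small USD layer. The prim and asset names below are made up for illustration; the relevant pieces are `instanceable = true` on each reference and the inherits arc pointing at a class prim, which is the supported site for overriding opinions inside instances:

```usda
#usda 1.0

# Shared override site: opinions authored here compose into every
# instance that inherits it, without breaking instancing. Pointing
# some instances at their own per-prim classes instead would split
# them into separate implicit prototypes, as described above.
class "_class_Barrel"
{
}

def Xform "Barrel_01" (
    instanceable = true
    prepend references = @./barrel_asset.usda@
    prepend inherits = </_class_Barrel>
)
{
}

def Xform "Barrel_02" (
    instanceable = true
    prepend references = @./barrel_asset.usda@
    prepend inherits = </_class_Barrel>
)
{
}
```

Because prims inside an instance are read-only on the composed stage, the inherits arc is the escape hatch: instances whose inherited opinions match keep sharing one prototype, and only the ones that differ get their own.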
>> And for the log that you get in the scene when somebody updates an asset: is it like a notifier in the USD Python code, or is it sent from your tracking software somehow? >> Maybe you can take this one. >> All our USD files are also represented in our database. So when we push something, the database sends out that information, and the update log is always listening for it; this is how we inform the scene about the change. >> Any other questions? Maybe one more, not mandatory, but if there's one lingering in your mind. All right. Well, amazing, Andreas. Thank you. >> Thanks a lot. Thank you, everybody. [Applause]
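A footnote on that last answer: the listening pattern it describes can be sketched with an in-memory stand-in for the tracking database. All names here are hypothetical, not Rise's actual pipeline code; the shape is simply that the publish tooling writes a version row, and each open scene's update log asks for rows newer than the last one it has seen.

```python
import itertools

class VersionDB:
    """Stand-in for the production tracking database."""

    def __init__(self):
        self._rows = []
        self._ids = itertools.count(1)

    def push(self, asset, version, author):
        # Called by the publish tooling after a USD file is written.
        self._rows.append({"id": next(self._ids), "asset": asset,
                           "version": version, "author": author})

    def updates_since(self, last_id):
        # Called by the update log in each open scene, e.g. on a timer
        # or via a notification channel the database pushes out.
        return [r for r in self._rows if r["id"] > last_id]

db = VersionDB()
db.push("env/vault33", "v012", "layout")
db.push("fx/dust_layer", "v003", "fx")

seen = 0
for row in db.updates_since(seen):
    print(f"{row['asset']} {row['version']} pushed by {row['author']}")
    seen = row["id"]
# At this point the artist can update to the latest version in the
# current session, or pin the stage to the version they were using.
```

Because the log only compares against the last id it has seen, it also surfaces every push that happened while the scene was closed, which matches the behavior described earlier in the talk.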