The Role of Karma in Dune: Prophecy | Rodeo FX | FMX HIVE 2025


Full Transcript

[Music] All right, hello everyone. First things first, I just want to say I really appreciate everyone for coming to this. There are a lot of people in this room right now, so thank you all for coming. My name is Scott Coats. I'm an environment supervisor at Rodeo FX, and I was the environment supervisor on Dune: Prophecy. Today I'm going to be presenting "Shaping Light, Shaping Worlds: The Role of Karma in Dune: Prophecy".

So, who are Rodeo FX? Rodeo FX is a multi-award-winning, mid-sized VFX company founded in Montreal in 2006 by our CEO, Sébastien Moreau. Since then, we've opened studios across the globe: Quebec City, LA, Toronto, Paris. And in 2025, we acquired Mikros Animation. So what type of projects do we work on? You can see here an array of different projects, old and new. Some of my personal favorites: Paddington 2, The Golden Compass, Indiana Jones, and what we're going to talk about today, Dune: Prophecy. I'm just going to play the company breakdown of Dune: Prophecy. Every single shot you see here was rendered in Karma, mostly Karma XPU.

[Breakdown reel plays]

All right. So I'm going to quickly go over the agenda. First I'll touch on the environment creation: a project overview, our methodology, and a bit about USD. Then I'll move on to Karma: our motivations for using it, lookdev, lighting, rendering, and compositing, with a few production tips and best practices at the end.

So, the project overview. We worked mainly on the Imperial Palace environment, which we see in several episodes of the show. There were 94 environment shots, a mix of exteriors and interiors.
We also rendered crowds, vehicles, and props to composite with both live-action footage and the CG environment. All shots were rendered in Karma XPU, with Karma CPU used for fog and atmospherics, so uniform volumes. The environment was nominated for a VES award in Outstanding Environment in an Episode, Commercial, Game Cinematic, or Real-Time Project.

So, what was our methodology on the show? We were going to create, light, and render the environment into shots using a small generalist team. We had strong collaboration between the assets department (modelers, texture and lookdev artists) and the generalists who rendered the shots. We rendered shots as full frame-range sequences, or as single-frame overscans and spherical panoramas where appropriate, when we didn't have much parallax. And we wanted to create complete images in CG: what we see in our renders shouldn't be visually too far from the final comps in Nuke. Our interior shots were actually built by the assets department and rendered by the lighting department, whereas the exteriors were all done by the generalists.

When we first started building this environment, we broke it down into four main sections: the front garden in red, the palace in green, the back garden in blue, and the outer landscape in orange. The main building of the palace was built by the assets department, but we made an assembly for it because it had lots of dressing, extra things the generalists could add on top. The outer landscape is basically all the terrain, the cliffs, and the ocean, and the two gardens are the regal gardens in front of and behind the palace.

Before I move on, I just want to talk about the concept of a USD layer stack.
In our pipeline at Rodeo, we're fully on USD and Solaris, and most of our assets, like characters, props, and even some environment pieces such as buildings (this palace built by the assets department, for example), come as a layer stack. A layer stack is basically when you take the layers of each department and stack them on top of each other: you get a file where the geometry comes in first, then the textures override that, then the shading overrides that, and you end up with a full asset you can import into your scene and render.

For our USD environment assemblies, though, we didn't actually want to use layering, because when you start making an environment out of layers, you end up with a load of dependencies. So what we decided to do for our assemblies is keep them as a single USD layer that references other files. No stacking. For example, you could publish geometry somewhere for the garden walkways or the terrain and reference that into the assembly. You could publish a point instancer and reference that in. The prototypes of the point instancer could be a layer stack, but they could also just be plain geometry. And you could also reference layer stacks into the assembly. So everything is just a list of references.

When we first started building the environment, we made a dev shot and a dev cam. That's basically an environment outside of any shot context where you point the camera in the different directions you might have in the final production. Here you see some wide views of the palace, some views closer in the gardens, and some views looking out to the ocean and the cliffs. This is only nine of the frames; the full camera was, I think, 24 frames or so.
And we'd render that every couple of days and just see how all the assemblies were progressing. We also used the Karma Physical Sky for this, and throughout the whole show, to keep the lighting uniform.

So, Karma. Why did we use Karma? It's a first-party renderer inside Houdini, so we get unmatched interoperability with Houdini and Solaris. XPU allows us to utilize both the GPU and CPU farm: we have a lot of blades on the render farm that don't have good GPUs, so we can utilize those, but we also have a small pool of machines with really good GPUs and a lot of VRAM, so we can utilize XPU there too. We really wanted interactivity during the lookdev and lighting process, especially when an artist has a decent GPU on their workstation. And the most important thing: we wanted to gain some production experience using an XPU engine, because we'd never done that before and we wanted to learn the best practices.

At first we set out with a conservative mindset to just use Karma CPU, but then we found that XPU was simply better for most cases. It's faster in general if you don't need advanced sampling, even when you're only using the CPU. Our Karma Fog Box pass, our uniform volumes, we did render in Karma CPU, because that engine has screendoor samples, which let you really reduce noise in volumes. We decided to shade only in the MaterialX context so that it works with both CPU and XPU; for those that don't know, if you shade in a VEX context, you can't use Karma XPU. And we used Intel Open Image Denoise on all passes as a post-process to remove the last bit of noise.

So, I'm just going to quickly talk about look development. The first thing you need to do for lookdev is be able to load textures, right? In our pipeline, texture paths come in as primvars on the assets.
For example, you have all these Megascans cliffs here, and each cliff has a path, a string primvar, telling the renderer where its texture needs to be loaded from. Unfortunately, in Karma XPU, you can't read string primvars in the shader. You see here I've tried to do it: I've got a string primvar assigned to these cliffs, and everything goes red because it's falling back to that red fallback texture.

So we had to make a workaround for this. This is a bit of pseudocode; I'm just going to describe the process. You create a material and make a string parameter as a placeholder for a texture path, and plug this parameter into the file path of the texture node. You assign the material to some prims like you usually do. Then, for each set of prims where the material is assigned, for each subset of unique texture primvars found within that shading assignment, you reference the material using an inherit to make a copy of it. You replace the texture paths on the parameters of the copy with those unique texture primvar values, and then you reassign that copied material.

It sounds complicated, but what you end up with is, for each unique set of texture primvars, a child shader, similar to how it works in a game engine, and the child shader has the baked-in texture path. So instead of resolving the textures at render time, you resolve them at Solaris time, in the scene description, and it works the same way as it would in Karma CPU. You just have to put an HDA after the shading assignment.

Once we could load textures, our assets department could basically go ham after that. They could do whatever they wanted, same as with any other render engine. Here you see some examples of the things they built and lookdev'd.
The assets department built the main palace itself, and they used a lot of cool techniques: a lot of that palace was actually built out of instances, both point instances and instanceable references, plus a lot of procedural techniques like hextile triplanars, noises, all the standard procedural stuff. They also looked after the vehicles, the suspensor car, and the crowds.

The terrain, however, was built by the generalists. It was a mix of Houdini height fields, post-processing, and Megascans kit-bashing (bashing instances into the side of it), and it was fully procedurally shaded. Again, the same stuff: hextile triplanars, noises, world positions. We even used the Karma point cloud lookup, the one where you load a .bgeo file and give it a point cloud, to shade certain areas in certain ways. We used that for things like pathways and where the water hits the cliffs.

For our trees, the one thing I want to mention is that all of our trees at Rodeo, especially on this project, were nested point instancers. There's no opacity map on the leaves: the leaf model is just geometry, and the tree contains a point cloud of all the transforms of the leaves. This means we don't have to use any opacity maps in the shader. It's really memory efficient, you don't get the artifacts you do with opacity, and it's super fast to render in Karma XPU. I recommend you do this if you want to do trees.

Vegetation was a similar situation: we used a lot of nested instancing for leaves, blades of grass, and so on. The key point here is we just used a lot of primvars in the shader. We tried to have one shader for most of the vegetation.
So, a grass shader, a bush shader, and then primvars override the variation instead of having a million different shaders. We just have one, with different levels of point attributes that we promote to primvars. Every blade of grass is a unique blade of grass via a primvar, and those wavy patterns on the grass, that motif the client wanted on this show, would also be a point attribute, but the shader is the same for all grass on the show.

For our water we made extensive use of the Karma Ocean procedural, and not just for the ocean; it was for the water pools in the gardens as well. The water planes were published as part of the assembly and then LOP-imported into the ocean procedural, because you need SOP geometry for that. Changing the ocean looks was really easy because you could just swap out the spectra file on a shot-by-shot basis. On the left you can see us trying different looks in the same shot just by swapping the file, and on the right you can see the difference between the ocean water and the garden water, but both of those use the ocean procedural to take our water to the next level.
We dressed in some algae, and you're going to notice a trend here: that was another nested point instancer. The smallest atomic piece of the algae was a single clump, and that gets scattered around to make a noisy patch like you see in the bottom right. Then you take those noisy patches and copy them everywhere, and you get so much coverage and memory efficiency out of that. It just really improved the look everywhere.

For our architecture, we used a lot of instancing as well: our pathways, our pillars, any statues or features that repeated. We tried to leverage instancing as much as we could to save on memory. All the shading was procedural; nothing was really textured by hand, barely any UVs, honestly. And a cool technique we used, since we were working as generalists: every time we saw a piece of architecture in a shot that wasn't really up to scratch, we'd call it out in the shot and say, "Okay, this one needs a motif, a wavy pattern," like the wavy geometric detail you can see on the bottom right. It adds quite a lot. You just add that, add a few primvars, and you're good to go with a fairly uniform concrete shader.

Another cool technique we used is what we call a recolored albedo. You take any texture map with a fairly uniform value to it (here, a Megascans rock texture, I think) and divide it by its average value. To find the average value, you can use a Curve tool in Nuke. That gives you the uncolored texture: the texture divided by its average value. And then what can you do with that?
You can multiply that uncolored texture by any arbitrary color, and you've basically recolored your texture. So why is this useful? On the top left you can see the raw Megascans as ingested, straight from the Megascans download. It's subtle, but there are slight hue, saturation, and value variations in there, so if you bash all those instances together, you might get a bit of a mishmash. On the bottom, we've uncolored all those textures and multiplied them by a sandy color, and it makes everything a lot more homogeneous, more uniform. On the right is a more severe example with a cyan or turquoise color: you preserve all the hue, saturation, and value variation from the original texture, but you can have any color you want. And here you see the result of that: all those cliffs bashed together look fairly uniform, but as you can see in the little picture-in-picture, they were actually many different models from Megascans.

As for our lookdev pipeline: lookdev for all of our hero assets came baked down into the layer stacks I described at the start, the assets from the assets department. The environment generalist lookdev, however, was contained within HDAs that loaded into our shot rendering scene, actual nodes and assignments in the scene. Loading that lookdev as nodes keeps it live and lets us use wildcards to assign shaders, whereas in USD it's static.

So now I'm going to talk about lighting and rendering. For all of our daytime shots, we started every single one using just the Karma Physical Sky and the Karma Fog Box. This is really cool because it gives us a really good starting point. You get a really realistic photographic exposure from it.
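Going back to the recolored-albedo trick for a second: the math is tiny. Here's a toy version operating on a list of RGB pixels (on the show this was done as a Nuke graph; the sample values are made up):

```python
def recolor_albedo(pixels, target_color):
    """Divide a texture by its per-channel average (the "uncolored"
    texture), then multiply by an arbitrary target color. The result
    keeps the original's relative hue/saturation/value variation but
    its average becomes exactly the target color."""
    n = len(pixels)
    avg = tuple(sum(px[c] for px in pixels) / n for c in range(3))
    return [tuple(px[c] / avg[c] * target_color[c] for c in range(3))
            for px in pixels]

# A 2-pixel "rock texture" tinted to a sandy color.
rock = [(0.2, 0.1, 0.05), (0.4, 0.3, 0.15)]
sandy = (0.8, 0.7, 0.5)
recolored = recolor_albedo(rock, sandy)
```

The useful property, and the reason kit-bashed Megascans stop looking like a mishmash, is that every recolored map averages to the same chosen color while keeping its own local variation.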
All you have to do is change the azimuth of the sky, change the turbidity, angle your light, and increase or decrease the density of your fog, and you get looks for shots super quick. So for every shot we'd basically do one of those, and then, once that was approved, we'd lock it in and switch it out for a more traditional HDRI and a key light, but we'd still use the Karma Fog Box in the final render. Here's the result: six shots, and every single one started off as a Karma Physical Sky and then evolved into the more traditional IBL/HDRI technique. We also used cloud gobos, blocker geo, for the cloud shadows. But that's it: Karma Fog Box, HDRI, and cloud blocker geos.

The nighttime lighting was a lot more involved, because we had hundreds of different lights: lots of spotlights, area lights, and also a lot of IES lights to get the shaping we needed onto the palace and the architectural features. So this one was hundreds of lights and a lot more challenging to render.

Another cool thing we did, not just for the night shots but for the whole show: anytime in our models there was a lampshade or a light fixture, we had a really nice automated way to extract it and make an internal light geo. It was basically done in SOPs in the end, but driven through Solaris. You can make a nice internal geo that acts as emission, set up all the light linking in Solaris, and view it live in your viewport with Karma XPU, which I think was really cool.

We rendered a lot of panoramas on the show.
So, lat-long renders. These are really cool because you render one of these lat-longs and it takes care of a whole sequence, basically. Here you can see the nighttime lat-long for this environment. We didn't actually render it as a spherical render; we rendered what we call a six-pack, which is six renders from six cameras that represent a cube. We take those renders into Nuke and stitch them together, and that works with Karma XPU. It's great for shots like this where there's not much parallax, where it's just over the guy's shoulder looking out toward the ocean with nodal movement of the camera. You render one of these and the whole sequence is basically gone; it's just comp after that, which is really cool.

Now I want to talk about our lighting and Solaris template pipeline. The first concept I need to introduce is the concept of a chunk. In our pipeline, it's basically a network box that you can publish to the show, the sequence, the shot, wherever you want, and you can also embed HDAs in there, which is a really cool feature.

So here's the environment rendering template. It consists of five different chunks. We had to change the template quite a lot on this project because (I didn't mention this earlier) all other projects at Rodeo at this time were using Arnold; this was the first one swapping to Karma. So this is the new template we came up with. At the start we have our global settings chunk: render settings nodes, camera, resolution, overscan, things like that. After that we have the pre-tweak, which is any edit to the scene before the lookdev and lighting is applied; normally we use this for things like animated tree variants, setting variants on stuff. Then lookdev.
Those are the lookdev HDAs we spoke about earlier. Then the light rig chunk: the lights and the light rig HDAs, which I'll talk about in a second. And the post-tweak is any edit to the scene after the lookdev and lighting is applied; this is normally used to configure the render pass splitting.

Here you can see an example of a light rig chunk. There are two unique lights, the moon and the sky, but after that we have the HDA that is the Imperial Palace nighttime light rig. This is a really cool way of building a light rig, because you can have one artist working on the HDA for your master light rig, all those hundreds of lights in the palace, while other artists work on the directions of the light in all the other shots. The chunk contains the HDA, so you're almost inheriting: you're containing the asset light rig for the palace, and it updates throughout the whole show, which is really nice.

We also managed our light rigs in ShotGrid. We set up our key shots in ShotGrid and published all of those light rig chunks to the key shot, which means that in any of the child shots, the key-shot light rig builds into the template automatically. So at the start of the show, you just go through all your angles and do the lighting directions for the key shots only, and once that's done you've got everything you need. The nice thing about this is you can easily swap in light rigs from other shots.

The render layers were done in similar fashion: different shots contain different render passes, and these were also loaded as chunks.
Render layers were automatically built based on the shot, and they could either be rendered by the artist manually, clicking on the USD Render ROPs to send them, or through our automation tool, Diacury. With that, you don't even have to open Houdini: from a tool, it builds the template on the farm for whatever shot you want, builds in all the chunks and all the render passes, sends them to the farm, does the renders, and then does the Nuke daily afterwards. It's an auto-shot tool, basically. On the right you can see the UI we made for it, where you can set up your frame range, which passes you're sending, things like that. It was all automated.

Here are typical render layers for one of the daytime shots: four different layers for the environment, one for the ocean, and one for the atmos, which is the Karma CPU fog pass. We rendered a lot of things separately to try to save on memory, especially so we could utilize the GPU more, and we merged everything deep, everything except the atmos, because that was just a traditional matte.

This technique was really cool because, for anything you're not directly rendering (say you're rendering the main2000 layer there, the palace), anything that's not directly visible to the camera can have its USD purpose set to proxy, and you save a ton of memory from that. And not only can you do that with the geo, you can also do it for the shading.

Here you can see all of those layers combining for the final composite: all the deep layers, all Karma XPU; the ocean, which is the ocean procedural; and then the atmos on top, a traditional uniform volume rendered in Karma CPU and matted.
Here's the same situation for a nighttime shot. This one was a little more involved because we had FX passes, but the same rules apply: all the environment is rendered deep and deep-merged in Nuke, while the FX passes (you can see a little waterfall on the bottom left, and ocean2000, which is the ocean spray) were rendered not as deep but with traditional 2D matting.

Here we have all of the light groups for two of the passes in that shot: 12 different light groups that comp has control over. This is really useful, because in a crazy complex shot like that, comp needs to be able to turn lights up and down. The first two are the moon and the sky, and the rest are the complex light groups of the palace and the garden. And here you see the final composite: same situation, all the passes rendered deep, and all those light groups coming together.

At the start of this presentation I spoke about how we really wanted our CG to match the final composite as closely as possible, and I think we achieved that: on the left is the final environment version for one of our shots, and on the right is the final comp. Comp isn't completely rebuilding the images; they're really just enhancing them, which is what we should aim for.

A cool trick we did with our AOV setup on the show is that we made a few custom AOVs. This is an example of one of them. It's basically just an RGB noise that comp can use to do an extra cloud gobo on top of the cloud gobos we already have in the shot. The way this works is you make a little MaterialX setup somewhere in the scene with a noise, or any texture you want, anything.
And the cool thing about Solaris is you can take the output of that setup and, in a Python LOP, connect that output to every material in the scene, to its AOV input. That means every shader gets that shading network as an AOV, and we made extensive use of this to give comp the extra passes they needed.

All right, cool. So, production tips. If you're using Nuke for your shot and asset review and you're going to use Karma XPU, it's really important to make some nice burn-ins showing the metadata of how much of the GPU and how much of the CPU you're using. What you see here is our very first test using Karma XPU, literally at the start of the show, using Houdini 20. At the bottom you can see the XPU devices: for this render, the GPU is doing 67% of the sampling and the CPU is doing 32%. This is extremely useful when you look at your render farm and can see which blades are using which hardware.

The OptiX device in Karma XPU is your GPU, and you always want to make sure you're actually using the GPU. If you run out of VRAM, you're not going to be able to use the GPU anymore, your render will only be using Embree, and you'll be what I like to refer to as CPU locked: only rendering on the CPU. To avoid this, you need to be wary of GPU memory limits at all stages of production, especially memory not related to textures. Karma XPU will page textures in and out of memory, but it won't page geometry or primvars, so you really have to watch out there. Don't add too much geo to your scene, and remove unnecessary primvars from all assets in your pipeline.
If your pipeline puts a ton of attributes on everything, you might want to strip that back, because otherwise you're not going to be able to utilize the GPU, especially as your scene scales. As I said earlier, if you spill over that GPU memory, you should consider splitting the scene into multiple passes and using the proxy purpose for anything not directly visible to the camera, if you can get away with it.

All artists should be checking the log viewer pane in Houdini when rendering, because it will tell them if their OptiX device is failing. Something in the scene might be causing it to fail even if VRAM is not at its limit. This is really important, because if someone does a lookdev somewhere that causes the OptiX device to fail for whatever reason (it could just be a bug), they can publish something that causes all your renders to be CPU locked.

This is maybe an unpopular tip, but you want to benchmark your renders for the worst-case scenario, and your worst-case scenario is rendering only on the CPU: turn off your OptiX device using the environment variable and test on the CPU alone, because that's your worst case. At the bottom here you see our final burn-in setup on the show, and in the bottom right (it's very small) it says GPU 0% x4. That means the machine for that render pass had four GPUs and we used none of them: a complete waste of those GPUs.

Uniform volumes: Karma CPU was better for rendering these because it has screendoor samples, which drastically reduce noise, especially with area or cone lights. You want to render these uniform volume passes at a minimum ray depth, ideally zero, and assign a shader with all lobes set to zero, a black shader, to improve performance instead of just matting.
On the left there you see low screendoor samples, and on the right, high. You really want to balance that against your path-traced samples; you'll have to find that balance, but that's how you get a clean and fast render.

The last tip is shading. If you abuse your procedurals in Karma XPU, that's how you get the insane render time hit, so you want to keep procedural shading as simple as possible. Hextile triplanars (8K hextile triplanars), point cloud lookups, the rounded edge shader: you can use them, but if you use too much, the render time is going to explode. Use the CPU time AOV to detect heavy shaders and objects in the scene. On the right there, you see the CPU time AOV, which you can render in Karma, overlaid over the beauty. The ground is red, which means it took longer to render the ground than it took to render the trees. When I see that, the alarm bells go off, because in my mind it should be the opposite, so we need to tell that artist to optimize their shader. Likewise, on the left we have our 33-minute render of the cliffs; we took out half the stuff from the shader and it renders in 20 seconds, and the look is not too dissimilar. So when you're using XPU, you should ask: am I getting everything out of this shader, or have I got too much here? If I pull it back, can I get a faster render time while still having the same look?

And the last thing: be very wary of your shader compilation time when rendering with XPU and OptiX. Every single time you fire up a render with OptiX, it has to compile every single shader in the scene, so if you have a lot of them, that can be a considerable hit.
And the other thing: if you unplug and replug nodes in your shader, that's going to recompile the shader, so just be very careful.

To conclude: we had a really great experience using Karma on the project, we look forward to more in the future, and we're already looking at projects now where we're using it. Many lessons were learned, and that's the most important thing. We were using Arnold on all other shows; on this one we adopted Karma, and it taught us a lot about how to adopt a new technology, so next time we're going to be far better equipped. So, merci everyone, and thanks for coming.

[Host] Amazing information, Scott, thanks for sharing that. We don't have time for any questions, we have to move on, but I just want to say this is a great example of production tips learned in the trenches: through work, through experimentation, through innovation. It's really amazing to see you sharing that with the community. Thank you very much.

Thank you. [Applause]
