Just realized I never shared my Houdini Engine implementation for Godot here 😄 Released a big update a couple of days ago that added a parameter/input UI and a default solution for output handling (meshes, instancing, and spawning objects/scenes) github.com/peterprickarz/hego https://youtu.be/cviGlmKmFQ8
Category: hou-realtime
Archived post by paqwak
I stashed the low-res version of the Disney cloud (I used the full one for the test in Solaris), and the grid too, so you at least see something in Houdini. I don't know much about properly linking a Karma render to Copernicus, so I just saved the two render files. Don't judge the OpenCL code, it was built with ChatGPT. Besides, you could probably render the effect directly in COPs, but again I have no clue how to import a point position there, so I solved it in VEX. Feel free to correct my crap :O)
Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20244011/23/24/SH_Setup.7z
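The point-position import that the post mentions is solved in VEX inside the attached file; as a loose illustration of the same idea from Houdini's Python side (not the actual setup from the attachment, and with made-up node paths and parameter names), it boils down to reading a point and pushing it onto a parameter:

```python
# Not the actual VEX solution from the attachment -- just a rough Houdini Python
# sketch of the same idea: grab a point position from a SOP and write it to a parm.
# Node paths and the "pos" parm name are hypothetical; adjust to your own scene.
import hou

src = hou.node("/obj/cloud_setup/OUT_sample_point")   # SOP holding the point to read
dst = hou.node("/obj/cloud_setup/effect_controls")    # node whose parm drives the effect

pos = src.geometry().points()[0].position()           # hou.Vector3 of the first point
dst.parmTuple("pos").set((pos[0], pos[1], pos[2]))    # write it into a vector parm
```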
Archived post by Atom
@art3mis I put together some simple python scripting examples for fetching image assets into the Unreal Editor and connecting them to a material. forums.unrealengine.com/t/solved-import-alembic-and-assign-material/240088 forums.unrealengine.com/t/solved-connect-a-texture-to-a-material/151392 forums.unrealengine.com/t/solved-folder-to-sequencer/152330 forums.unrealengine.com/t/solved-images-as-planes-for-unreal/152292
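Those threads have the full scripts, but the gist in Unreal's Python API looks something like the sketch below (not the exact code from the links; file paths, asset names, and folders are placeholders):

```python
import unreal

# Import a texture from disk (file path and destination are placeholders).
task = unreal.AssetImportTask()
task.filename = "C:/textures/brick_basecolor.png"
task.destination_path = "/Game/Textures"
task.automated = True
task.save = True
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
texture = unreal.load_asset("/Game/Textures/brick_basecolor")

# Make a material and wire the texture into Base Color.
material = unreal.AssetToolsHelpers.get_asset_tools().create_asset(
    "M_Brick", "/Game/Materials", unreal.Material, unreal.MaterialFactoryNew())
sample = unreal.MaterialEditingLibrary.create_material_expression(
    material, unreal.MaterialExpressionTextureSample, -384, 0)
sample.set_editor_property("texture", texture)
unreal.MaterialEditingLibrary.connect_material_property(
    sample, "RGB", unreal.MaterialProperty.MP_BASE_COLOR)
unreal.MaterialEditingLibrary.recompile_material(material)
```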
Archived post by mestela
yeah, the temp fix i've found is to make a copy of the sequence stored in the usd file (double-click the sequence in the usd to load it, use the wrench icon to save it under a new name in your content folder), make a new camera, parent it underneath your usd camera, and drag that camera into sequencer
now and then i've been able to use the usd camera directly, but it wasn't reliable, so this hacky hack works
Archived post by Glad-Partikel
Can, yes
But that will cost performance
Precompute everything you can
Figured out how to do simple advection in my Grid2D example, so I thought I might as well record that too.
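For anyone wondering what that advection step actually does, here is a minimal NumPy sketch of semi-Lagrangian advection on a 2D grid. It is conceptual only (not Niagara HLSL and not the actual Grid2D module setup from the video): trace each cell backwards along the velocity field and bilinearly sample the old field there.

```python
import numpy as np

def advect(field, vel_x, vel_y, dt):
    """Semi-Lagrangian advection of a scalar field on a unit-spaced 2D grid."""
    h, w = field.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Trace each cell backwards along the velocity field...
    src_x = np.clip(xs - dt * vel_x, 0, w - 1)
    src_y = np.clip(ys - dt * vel_y, 0, h - 1)

    # ...and bilinearly sample the previous field at that position.
    x0, y0 = np.floor(src_x).astype(int), np.floor(src_y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    tx, ty = src_x - x0, src_y - y0

    top = field[y0, x0] * (1 - tx) + field[y0, x1] * tx
    bottom = field[y1, x0] * (1 - tx) + field[y1, x1] * tx
    return top * (1 - ty) + bottom * ty

# e.g. push a square of density to the right with a constant velocity field
density = np.zeros((64, 64))
density[28:36, 28:36] = 1.0
vel_x, vel_y = np.full((64, 64), 2.0), np.zeros((64, 64))
for _ in range(10):
    density = advect(density, vel_x, vel_y, dt=0.5)
```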
Archived post by Glad-Partikel
Some tasty examples
I’ve been getting into Grid2D for Niagara and it’s a bit tricky to get it going to the point where you can start building your own advection, solvers, etc. So I recorded a video so I remember all the steps needed to get something shown on screen. Might help someone else. https://www.youtube.com/watch?v=XVKpofOj44c
Archived post by Glad-Partikel
I made a thing! First tutorial in almost two years.
Archived post by nralabate
for VR, not sure what there is to learn, it’s just regular rendering but done twice (once per eye)! unless you mean UX guidelines, then these: learn.unity.com/tutorial/vr-best-practice developer.oculus.com/documentation/mobilesdk/1.0.3/concepts/mobile-ui-guidelines-intro/
when i was learning VEG (unity’s visual effect graph) i found the official forums pretty decent, looks like there is one for shader graph as well
forum.unity.com/threads/feedback-wanted-shader-graph.511960/
rope-learning, this is probably too basic for you, but again: official unity tutorials, plus download and examine the official unity assets
unity3d.com/pt/learn/tutorials/s/roll-ball-tutorial
they have helpfully scattered their assets across an asset store, a package manager and github, but here are some sample shaders to study
github.com/UnityTechnologies/ShaderGraph_ExampleLibrary
i would be wary of studying their high-profile demos like ADAM though, they write bespoke just-for-demo systems that never ship, plus those projects are kinda a mess to navigate (smells like trade show demo deadline stress)
more sample shaders: github.com/keijiro/ShaderGraphExamples
lastly, the realtime vfx forum often has WIP posts of effects made in shader graph… sometimes people will post the graph itself as well
brackeys has some ok effects done in shader graph
another official talk, from gdc
but i don’t see paid classes on shader graph yet (there’s one on the old text-based shaders tho)
hope that helps!
Archived post by Glad-Partikel
In games and realtime, VFX stands for visual effects, and the role is Visual Effects Artist; the name mostly exists to differentiate it from sound effects. Often a gameplay feature (think a cool powerup) needs both, and there’s no better reason for it than that. So a VFX artist handles particles, some shaders and some animation. There’s a lot of overlap with tech artists, who also do shaders and certain types of animation. The other parts of what is known as VFX in film are just part of the art pipeline: making models, shading, lighting, texturing and so on all fall under art. Now, scene building has overlap between environment art and level design, as every scene needs to look good, but the first priority is that it plays well.
Because of this, it should be relatively easy to move over from film as a modeler. The main new issues are the lower polycount and hard constraints on the number of materials used. There are also much bigger demands on your LODs. Textures follow PBR rules, which should match as long as your shaders are correct.

For a lighter there are also similarities, but some more technical issues to handle. You need to be aware of how light gets baked into the scene, how to work with limited bounce, and how to deal with dynamically changing light like time of day and the destruction of buildings. You will also usually be in charge of post processes, which is essentially as close as it gets to compositing.

For shaders you go to Tech Artists. They know the ins and outs of the engine and pipeline, and they are the link to the coders. They make sure your shaders look good but are lightweight enough to render at 60 FPS. This role varies a lot from company to company.

Animation has overlap too. However, unless you work on cutscenes or cinematics, you won’t be doing long sequences; it will mostly be loops or parts of animations that get blended together in the engine as the player does things. Performance capture and all that still happens though, so there’s a big part of the animation pipeline that’s the same.
Building assets is always just the first step. Everything needs to be collated in the engine. Animations need to trigger based on player input. Smoke simulations need to be brought in and played back on sprites. Environment art assets need to be matched with collision so the player can interact with the world and traverse it. On top of this, it all has to perform fast enough on pretty old hardware, and not only the rendering, which happens on the GPU, but the CPU calculations too. The CPU is already handling all the fun gameplay code, like making AIs find their way around the world or figuring out what the hell the joint rotations mean for this skinned mesh, and so on. It gets real sad if it also has to send a call to the GPU to draw your fancy rock more than once because you figured that the moss layer deserved a material of its own. Now the GPU is busy trying to draw everything on screen within 16 milliseconds (60 FPS). First it draws a bunch of opaque hard-surface stuff. Then it needs to do it again, because it was all covered in translucent smoke and tree leaves with transmission. On top of that, somebody is trying to simulate thousands of particles on the poor thing.
To keep track of all of this, it gets shoved into the RAM of what’s essentially a midrange PC from 2013 (the PS4). And by “all of this” I mean the AI and stuff. The graphics go into VRAM, and that includes all the textures needed. That won’t all fit. Therefore we use mips and streaming to shove things in and out of VRAM. That means you have to decide what resolution you can afford on that shiny gun that takes up half the screen, compared to the badass rock that you photoscanned in Iceland. The gun will always win, as it takes up more of the screen. Guess how much space you get as the dude who adds sparks and smoke in the background…
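To put rough numbers on the VRAM squeeze (illustrative back-of-envelope math only, not any specific engine’s budget; real textures are block-compressed, so actual footprints are several times smaller):

```python
# Back-of-envelope numbers behind the "it won't all fit" point above.
# Uncompressed RGBA8 sizes; real engines use block compression (BC/DXT),
# which cuts these figures by roughly 4-8x.

frame_budget_ms = 1000.0 / 60.0  # ~16.7 ms per frame at 60 FPS

def texture_bytes(width, height, bytes_per_pixel=4, with_mips=True):
    """Size of one texture, optionally including its full mip chain."""
    base = width * height * bytes_per_pixel
    # Each mip is 1/4 the previous level, so the full chain sums to ~4/3 of the base.
    return base * 4 // 3 if with_mips else base

gun_4k = texture_bytes(4096, 4096)    # hero asset filling half the screen
rock_2k = texture_bytes(2048, 2048)   # background photoscan
print(f"frame budget: {frame_budget_ms:.1f} ms")
print(f"4K texture + mips: {gun_4k / 2**20:.0f} MiB")   # ~85 MiB uncompressed
print(f"2K texture + mips: {rock_2k / 2**20:.0f} MiB")  # ~21 MiB uncompressed
```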
Archived post by Glad-Partikel
Perhaps the wrong crowd, as this is quite specific, but I’ve started a Realtime VFX glossary project. So far it’s just a list of terms you should know as a rtvfx artist. If any of you have a few minutes to spare, I’d love to know what I’ve missed.
docs.google.com/spreadsheets/d/1l02nhiUdTFRG6BuHJd9nGmq4NKiIi_gkpiWCIFnA_Cg/edit#gid=0