Archived post by Glad-Partikel

In games and realtime graphics, VFX stands for Visual Effects. This is to differentiate it from Sound Effects; often a gameplay feature (think a cool powerup) needs both, and there's no better reason for the name than that. So a VFX artist handles particles, some shaders and some animation. There's a lot of overlap with tech artists, who also do shaders and certain types of animation. The other parts of what is known as VFX in film are just part of the art pipeline: making models, shading, lighting, texturing and so on all fall under art. Now, scene building has overlap between environment art and level design, as every scene needs to look good, but the first priority is that it plays well.

Because of this, it should be relatively easy to move over from film as a modeler. The main new issues are the lower polycount and hard constraints on the number of materials used. There are also much bigger demands on your LODs. Textures follow PBR rules, which should match as long as your shaders are correct.

For a lighter there are also similarities, but more technical issues to handle. You need to be aware of how light gets baked into the scene, how to work with limited bounce, and how to deal with dynamically changing light like time of day or the destruction of buildings. You will also usually be in charge of post processing, which is about as close as games get to compositing.

For shaders you go to Tech Artists. They know the ins and outs of the engine and pipeline, and they are the link to the coders. They make sure your shaders look good but are lightweight enough to render at 60 FPS. This role varies a lot from company to company.

Animation has overlap too. However, unless you work on cutscenes or cinematics, you won't be doing long sequences. It will mostly be loops, or parts of animations that then get blended together in the engine as the player does things. PCap and all that still happens though, so there's a big part of the animation pipeline that's the same.
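That in-engine blending boils down to a weighted interpolation between poses. A minimal sketch of the idea (the pose format and function name are invented for illustration; real engines blend per-joint quaternions, not scalar angles):

```python
def blend_poses(pose_a, pose_b, weight):
    """Linearly blend two poses, e.g. a walk loop into a run loop.

    Each pose is a dict of joint name -> rotation angle in degrees.
    weight = 0.0 gives pose_a, weight = 1.0 gives pose_b.
    """
    return {
        joint: (1.0 - weight) * pose_a[joint] + weight * pose_b[joint]
        for joint in pose_a
    }

walk = {"hip": 10.0, "knee": 30.0}
run = {"hip": 25.0, "knee": 60.0}

# Halfway between the walk and run cycles:
print(blend_poses(walk, run, 0.5))  # {'hip': 17.5, 'knee': 45.0}
```

The engine drives that weight from gameplay (player speed, stick deflection, and so on), which is why the source clips can stay short loops.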


Building assets is always just the first step. Everything needs to be collated in the engine. Animations need to trigger based on player input. Smoke simulations need to be brought in and played back on sprites. Environment art assets need to be matched with collision so the player can interact with the world and traverse it. On top of this, it all has to perform fast enough on pretty old hardware; not only the rendering, which happens on the GPU, but the CPU calculations too. The CPU is already handling all the fun gameplay code, like making AIs find their way around the world or figuring out what the hell the joint rotations mean for this skinned mesh, and so on. It gets real sad if it also has to send a call to the GPU to draw your fancy rock more than once because you figured the moss layer deserved a material of its own. Meanwhile the GPU is busy trying to draw everything on screen within 16 milliseconds (60 FPS). First it draws a bunch of opaque hard-surface stuff. Then it needs to do it again, because it was all covered in translucent smoke and tree leaves with transmission. On top of that, somebody is trying to simulate thousands of particles on the poor thing.
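The 16 millisecond figure falls straight out of the target frame rate, and the cost of that extra moss material is easy to make concrete. A back-of-the-envelope sketch (the per-draw-call cost below is a made-up placeholder, not a measured number; real costs vary wildly by engine and API):

```python
TARGET_FPS = 60
frame_budget_ms = 1000.0 / TARGET_FPS  # ~16.67 ms to do everything, once per frame

# Roughly speaking, each material on a mesh is one more draw call the
# CPU has to issue to the GPU. The cost per call is illustrative only.
COST_PER_DRAW_CALL_MS = 0.005

def cpu_submit_cost(num_meshes, materials_per_mesh):
    """CPU time (ms) spent just submitting draw calls for these meshes."""
    return num_meshes * materials_per_mesh * COST_PER_DRAW_CALL_MS

# 2000 rocks with one material, vs. rock + a separate moss material:
print(cpu_submit_cost(2000, 1))  # 10.0 ms - over half the frame already
print(cpu_submit_cost(2000, 2))  # 20.0 ms - past the whole 16.67 ms budget
```

The exact numbers are fiction, but the shape of the problem is real: doubling materials doubles submissions, and the budget does not grow to match.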
To keep track of all of this, it gets shoved into the RAM of what's essentially a midrange PC from 2013 (PS4). And by all of this I mean the AI and gameplay stuff. The graphics go into VRAM, and that includes all the textures needed. Fitting all of that in at full resolution is impossible, so we need to use MIPs and streaming to shove things in and out of VRAM. That means you have to decide what resolution you can afford to use on that shiny gun that takes up half the screen, compared to the badass rock you photoscanned in Iceland. The gun will always win, as it takes up more of the screen. Guess how much space you get as the dude who adds sparks and smoke in the background…
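The MIP math is worth seeing once: a full mip chain adds about a third on top of the base texture, and streaming works by deciding how far down that chain each texture is allowed to be resident. A rough sketch, assuming uncompressed RGBA8 (real engines use block compression, which shrinks everything by a further large factor):

```python
def texture_vram_bytes(width, height, bytes_per_pixel=4, top_mip=0):
    """VRAM for a texture's mip chain, from level `top_mip` down to 1x1.

    Mip 0 is full resolution; each level halves width and height, so a
    complete chain costs about 4/3 of the base image alone.
    """
    total = 0
    w, h = width >> top_mip, height >> top_mip
    while w >= 1 and h >= 1:
        total += max(w, 1) * max(h, 1) * bytes_per_pixel
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

MB = 1024 * 1024
# The hero gun gets its full 4K chain resident...
print(texture_vram_bytes(4096, 4096) / MB)            # ~85.3 MB
# ...while the background rock only streams in from mip 2 (1K) down:
print(texture_vram_bytes(4096, 4096, top_mip=2) / MB)  # ~5.3 MB
```

Dropping just two mip levels cuts the resident cost sixteenfold, which is exactly the lever the streamer pulls based on how much screen space each asset covers.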