Archived post by shadeops

General PSA when rendering VDBs with Karma (XPU/CPU) or the Vulkan Viewport
Currently, when rendering VDB volumes with a Houdini renderer, Houdini reads the entire VDB file from disk, regardless of how many of the fields within the VDB are actually used.
Say, for example, that while working in SOPs you exported a volume with the following fields to a VDB file on disk:

- `density` (200MB)
- `temperature` (200MB)
- `scatter` (300MB)
- `vel` (350MB)
- `rest` (300MB)
- `flame` (150MB)

(Total: 1500MB file)
However, on the USD Stage, either through pruning or selective loading with a Volume LOP, your final stage looks like:

```
/fx/geo/explosion/ [ Volume ]
    density [OpenVDBAsset]
    vel     [OpenVDBAsset]
    scatter [OpenVDBAsset]
```

Since only 850MB of data is needed to render, ideally that is all that would be read from the VDB file (the format supports random access). However, with Karma / Vulkan this isn't the case: all the fields will be read from disk, which can cause a lot of extra network I/O.
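If you want to sanity-check which fields your stage actually references, here is a minimal sketch using the pxr Python API (the layer path and prim path are illustrative, not from the original post):

```python
# Minimal sketch: list which VDB files/fields a stage references.
# Assumes the pxr Python modules are available; the layer and prim
# paths below are illustrative.
from pxr import Usd, UsdVol

stage = Usd.Stage.Open("/path/to/shot.usda")
vol = UsdVol.Volume(stage.GetPrimAtPath("/fx/geo/explosion"))

# Volume prims bind their fields through "field:<name>" relationships.
for rel in vol.GetPrim().GetRelationships():
    if not rel.GetName().startswith("field:"):
        continue
    for target in rel.GetTargets():
        asset = UsdVol.OpenVDBAsset(stage.GetPrimAtPath(target))
        print(rel.GetName(), asset.GetFilePathAttr().Get())
```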
As for other renderers:

- RenderMan 26 will only read the fields from disk that are referenced on the stage. (850MB)
- V-Ray 7 will only read the fields from disk that are referenced on the stage and used within the volume shaders. (850MB)
tl;dr – Make sure you only save the VDB fields you intend to render with; pruning on the stage doesn't reduce I/O with Karma / Vulkan.
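One way to trim a file that already has extra fields is with the pyopenvdb module; a minimal sketch, assuming pyopenvdb is installed (the file paths and the `KEEP` set are illustrative):

```python
# Minimal sketch: write out only the VDB grids you intend to render.
# Assumes the pyopenvdb module is available; paths and the KEEP set
# are illustrative.
import pyopenvdb as vdb

KEEP = {"density", "vel", "scatter"}  # fields the stage references

# readAll() returns a list of grids plus the file-level metadata.
grids, metadata = vdb.readAll("/vdb/explosion.vdb")
kept = [g for g in grids if g.name in KEEP]

vdb.write("/vdb/explosion_render.vdb", grids=kept)
```

Inside Houdini, deleting the unused fields in SOPs before the cache is written achieves the same result without extra tooling.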

Technically you could have one field per VDB file and assemble them under one Volume prim on the stage, and Karma would be okay with that, resulting in I/O only for the data that is on the stage. However, other renderers (V-Ray especially) will have an utter shit-fit if your fields are spread across multiple VDBs, so it's not really recommended.
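For completeness, a minimal sketch of that one-file-per-field assembly with the pxr Python API (all file names and prim paths are illustrative):

```python
# Minimal sketch: assemble per-field VDB files under a single Volume
# prim. Assumes the pxr Python modules are available; file names and
# prim paths are illustrative.
from pxr import Usd, UsdVol

stage = Usd.Stage.CreateNew("explosion_volume.usda")
vol = UsdVol.Volume.Define(stage, "/fx/geo/explosion")

for field in ("density", "vel", "scatter"):
    asset = UsdVol.OpenVDBAsset.Define(stage, f"/fx/geo/explosion/{field}")
    asset.GetFilePathAttr().Set(f"/vdb/explosion_{field}.vdb")
    asset.GetFieldNameAttr().Set(field)  # grid name inside the file
    vol.CreateFieldRelationship(field, asset.GetPath())

stage.GetRootLayer().Save()
```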

(This was verified using a file page-cache monitor on Linux.)