Archived post by lwwwwwws

kuwahara is a cool trick but it does date from 1976. bilateral filters are a more controllable and better-behaved way to filter while preserving edges… here’s one i use a lot in comp, just translated from glsl to opencl

it works ok on bacon but i do feel like it could be improved 🤔 it’s the kind of thing that works better in log than linear, and the threshold control might be better as a hard cutoff (colours past a certain distance ignored entirely) instead of the drastic pow() curve thing i’m doing
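for anyone curious what the hard-cutoff idea looks like, here’s a minimal bilateral filter sketch in Python — this is an illustration of the concept, not the attached .cl file, and all names/parameters here are my own assumptions:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_spatial=2.0, color_cutoff=0.2):
    """Naive bilateral filter with a hard colour-distance cutoff:
    neighbours whose colour distance exceeds `color_cutoff` get zero
    weight, instead of a soft pow()-style falloff."""
    h, w = img.shape[:2]
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    # Precompute spatial Gaussian weights for the window once
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_spatial**2))
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            center = img[y, x]
            dist = np.linalg.norm(window - center, axis=-1)
            weight = spatial * (dist < color_cutoff)  # hard cutoff, not pow()
            out[y, x] = (window * weight[..., None]).sum(axis=(0, 1)) / weight.sum()
    return out
```

the nice property of the hard cutoff is that colours across an edge contribute exactly zero, so the edge can’t bleed at all — the pow() curve always lets a little through.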

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20261102/08/26/Screenshot_2026-02-08_at_16.18.15.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20261102/08/26/Screenshot_2026-02-08_at_16.17.55.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20261102/08/26/Ls_Dollface_v01.cl

Archived post by blacknye

Sure thing. I’ll upload a file today when I get back.

@.goldfarb. Here you go, 2 days late. Added a few more controls to it. There’s still a few things I want to fix and add. The big thing for me is figuring out how to set up a LookAt constraint that lets you move the Driven while still obeying the orientation toward the Up, so it aims/rotates from the correct pivot.
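for the LookAt part, the usual construction is a look-at basis from a forward vector and an up hint — a minimal sketch in Python (this is the generic technique, not the APEX setup in the file; names and axis conventions are assumptions):

```python
import numpy as np

def look_at(driven, target, up=(0.0, 1.0, 0.0)):
    """Build a 3x3 orientation matrix aiming local +Z from `driven`
    toward `target`, keeping local +Y as close to `up` as possible.
    Columns are the resulting X/Y/Z axes."""
    z = np.asarray(target, float) - np.asarray(driven, float)
    z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up, float), z)   # right vector, perpendicular to up
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                       # re-orthogonalised up
    return np.stack([x, y, z], axis=1)
```

because the basis is rebuilt from the current Driven position every evaluation, moving the Driven keeps the aim correct — the pivot problem is then just about where you apply this rotation in the transform chain.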

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20265902/08/26/apex_wheel_exp.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20265902/08/26/apex_wheel_experiment_v001.hip

Archived post by shadeops

General PSA when rendering VDBs with Karma (XPU/CPU) or the Vulkan Viewport
Currently when rendering VDB volumes in a Houdini renderer, Houdini will read the entire VDB file from disk regardless of the number of fields within the VDB that are actually used.
Say for example when working in SOPs, you exported a VDB with the following fields to a VDB on disk:
- `density` (200MB)
- `temperature` (200MB)
- `scatter` (300MB)
- `vel` (350MB)
- `rest` (300MB)
- `flame` (150MB)

(Total: 1500MB file)
However on the USD Stage, either through pruning or selective loading with a Volume LOP, your final stage looks like:

```
/fx/geo/explosion/ [ Volume ]
    density [OpenVDBAsset]
    vel [OpenVDBAsset]
    scatter [OpenVDBAsset]
```

Since only 850MB of data is needed to render, ideally that is all that would be loaded from the VDB files (since they support random access). However with Karma / Vulkan this isn’t the case and all the fields will be read from disk, which can cause a lot of extra network I/O.
As for other renderers:
- RenderMan 26 will only read the fields from disk that are referenced on the stage. (850MB)
- V-Ray 7 will only read the fields from disk that are referenced on the stage and used within the volume shaders. (850MB)
tl;dr – Make sure you only save the VDB fields you intend to render with; pruning on the stage doesn’t reduce I/O with Karma / Vulkan.
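The I/O arithmetic above can be sketched as a tiny helper — field names and sizes are from the example, and the function itself is hypothetical, not any Houdini API:

```python
# Field sizes from the example export (MB)
FIELD_MB = {"density": 200, "temperature": 200, "scatter": 300,
            "vel": 350, "rest": 300, "flame": 150}

def io_cost_mb(referenced, selective_reader):
    """With a selective reader (RenderMan/V-Ray style) only the
    referenced fields hit the disk; Karma/Vulkan currently read the
    whole file regardless of what the stage references."""
    if selective_reader:
        return sum(FIELD_MB[f] for f in referenced)
    return sum(FIELD_MB.values())
```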

Technically you could have one field per VDB file and assemble them under one Volume prim on the stage, and Karma would be okay with that, resulting in I/O for only what is on the stage. However other renderers (V-Ray especially) will have an utter shit-fit if your fields are spread across multiple VDBs, so it’s not really recommended.

(This was verified by using a file page monitor on Linux)

Archived post by vanity_ibex

Ported Mild Slope Equation Solver to COPs (Fancy Ripple Solver)

wrote a little bit about it here: vanity-ibex.xyz/blog/mildslopeequation/
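for context, in the constant-depth limit the mild slope equation reduces to the 2D wave equation, so the core update is a leapfrog step over a height field — a minimal sketch in Python (an illustration of that limit case, not the attached COPs solver; all names and the periodic boundary are assumptions):

```python
import numpy as np

def wave_step(eta_prev, eta, c=1.0, dt=0.1, dx=1.0):
    """One leapfrog step of the 2D wave equation (the constant-depth
    limit of the mild slope equation). `eta` is surface height; the
    5-point Laplacian wraps at the borders for simplicity."""
    lap = (np.roll(eta, 1, 0) + np.roll(eta, -1, 0) +
           np.roll(eta, 1, 1) + np.roll(eta, -1, 1) - 4.0 * eta) / dx**2
    return 2.0 * eta - eta_prev + (c * dt)**2 * lap
```

the actual mild slope equation additionally carries depth-dependent phase and group velocities, which is what makes ripples refract over varying depth.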

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20264801/13/26/Mild-Slope_Equation_09.rop_image1.0001_output_compressed_crf20.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20264801/13/26/Mild-Slope_Equation.hiplc

Archived post by mysterypancake

dumb cops question – does anyone know how to get the size of the volume in hscript?

i thought volumeres but no luck 🙁

finished prefix sum, now around 15x faster 👀

also the iterations can be animated now
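for anyone who hasn’t met parallel prefix sums: the trick is log2(n) doubling passes where every element updates independently — a minimal Hillis–Steele sketch in Python (an illustration of the algorithm family, not the attached .hiplc):

```python
import numpy as np

def prefix_sum(values):
    """Inclusive prefix sum via Hillis–Steele doubling: log2(n) passes,
    each of which is embarrassingly parallel (vectorised here with numpy,
    but it maps directly onto per-pixel GPU kernels)."""
    out = np.asarray(values, dtype=np.float64).copy()
    offset = 1
    while offset < len(out):
        shifted = np.zeros_like(out)
        shifted[offset:] = out[:-offset]
        out = out + shifted  # every element updated independently per pass
        offset *= 2
    return out
```

a serial scan is O(n) sequential steps; the doubling version does more total work but finishes in O(log n) parallel passes, which is where the speedup on the GPU comes from.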

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20263801/12/26/image.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20263801/12/26/prefixsum.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20263801/12/26/cops_fast_prefixsum.hiplc
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20263801/12/26/prefixsum_animate.webp

Archived post by kolupsy

a while ago I thought it would be fun to try to optimize the “equalize” node. It’s not perfect since I don’t think COPs allows image inputs of different sizes, but it seems to run faster than the builtin “equalize” node by using a more parallel-friendly algorithm. Thought I’d drop it here in case anybody is interested
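histogram equalization decomposes nicely into three parallel-friendly stages — histogram, prefix sum (CDF), per-pixel remap — which is my guess at the kind of restructuring involved; a minimal sketch in Python (not the attached file, names and bin count are assumptions):

```python
import numpy as np

def equalize(img, bins=256):
    """Histogram equalization in three parallel-friendly stages:
    1) bucket pixels into a histogram, 2) prefix-sum it into a CDF,
    3) remap each pixel independently through the normalised CDF."""
    flat = np.clip(img.ravel(), 0.0, 1.0)
    hist, _ = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]  # normalise so output stays in [0, 1]
    idx = np.minimum((flat * bins).astype(int), bins - 1)
    return cdf[idx].reshape(img.shape)
```

each stage is a flat map or a scan, so nothing needs the serial per-pixel dependency chain that tends to make the naive formulation slow in COPs.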

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20260001/10/26/cops_fast_normalize.hipnc
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20260001/10/26/image.png