Archived post by reinholdr

Calculate how much time artists are wasting waiting for Houdini to open. Money is the only thing that gets the attention of the decision makers.
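A back-of-the-envelope sketch of that calculation, with made-up numbers you would swap for your own studio's figures:

```python
# Rough cost of Houdini startup time; all numbers below are purely illustrative.
artists = 40              # hypothetical: number of artists
launches_per_day = 6      # hypothetical: how often each artist opens Houdini
wait_minutes = 3.0        # hypothetical: minutes lost per launch
hourly_rate = 60.0        # hypothetical: fully loaded cost per artist-hour
working_days = 220        # hypothetical: working days per year

hours_lost_per_year = artists * launches_per_day * wait_minutes / 60.0 * working_days
cost_per_year = hours_lost_per_year * hourly_rate

print(f"{hours_lost_per_year:,.0f} artist-hours lost per year")
print(f"~${cost_per_year:,.0f} per year")
```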

The `hotl` command line utility has a flag to merge any number of otls into a single otl file. For dev work we work locally on otl files that are split per HDA type, and when the package is released the `hotl -m` command merges them into a single file.
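A minimal sketch of that release step, assuming `hotl -m <source> <destination>` merges the source library into the destination (check `hotl`'s usage output for the exact argument order on your build); the paths are hypothetical:

```python
import glob
import subprocess

# Hypothetical layout: per-HDA dev libraries merged into one release library.
dev_otls = sorted(glob.glob("dev/otls/*.otl"))
release_otl = "release/studio_tools.otl"

for otl in dev_otls:
    # Assumption: `hotl -m src dst` merges the definitions from src into dst.
    subprocess.run(["hotl", "-m", otl, release_otl], check=True)
```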

Archived post by mattiasmalmer

Useful trick for filtering 3D tracked cameras:
```
/* SMOOTHCAM
   Turn on constraints on your camera and add a constraint network (green squiggle button).
   Inside the constraint network, drop a Transform Wrangle, paste this code and press the
   "create spare parameters" button. Set targetdistance to somewhere a good way from the
   camera, usually somewhere in the middle of your 3D-tracked point cloud, and enable the
   orange output flag on the Transform Wrangle.
   Connect getworldspace to input 0 of the wrangle. Drop a Filter CHOP, set Common/Units
   to frames, use something like a 15-frame filter width and a couple of passes, and
   connect the filtered version of getworldspace to input 1 of the wrangle.
   On getworldspace, point it at ../.. and make sure its channel range is set to
   "use full animation range" (or even longer) to get good "ends" on your filter.
   Remember to extend your camera animation with plausible slopes to get good filtering. */

chopTRS c0 = c->fetchInput(0);
chopTRS c1 = c->fetchInput(1);
matrix m0 = c->fetchInputMatrix(0);
matrix m1 = c->fetchInputMatrix(1);

float dist = chf("targetdistance");

vector source = set(0, 0, 0) * m1;
vector target = set(0, 0, -dist) * m0;
vector up = set(0, 10, -dist) * m0;
up = normalize(up - target);

matrix3 orient = lookat(source, target, up);
matrix TM = matrix(orient);
vector rot = cracktransform(0, 0, 1, 0, TM);

// Set transform
@t = source;
@r = rot;
```

This helps by filtering the position of the camera while making sure the pointing of the camera is unaffected at the target distance, so none of your position jitter ends up shaking your 3D track.

A classic use case is when an object is supposed to be really close to the camera but most of your tracking dots are some way away.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20254910/24/25/houdini_HrPUCSdRn5.mp4

Archived post by mysterypancake

for `mat3`, `&` isn't required because `mat3` is an array of vectors (`fpreal3[3]`); arrays are passed as pointers to their first element, so `mat3` is already a pointer type.
this isn't true for `mat2` and `mat4`, though, since those are vector types.

i started working on an OpenCL page today, will add more examples/sections later. it's missing a lot of basic info at the moment: github.com/MysteryPancake/Houdini-OpenCL. pls abuse me if i made any horrible mistakes

Archived post by animatrix2k7

thanks man, these are nice. he is using PolyCut, a nice use case for this node.

Anyone know the nodegraphtitle.py trick? I searched here but there is no mention of it. If you override this file in your user directory, you can have custom stats in the network editor, like this (top left).

I use this code:
```python
from __future__ import print_function
import hou
import nodegraphprefs as prefs

def networkEditorTitleLeft(editor):
    try:
        title = '\n'
        #title += "Viewport\n\n"
        selectedNodes = hou.selectedNodes()
        if len(selectedNodes) == 1 and hou.ui.updateMode() == hou.updateMode.AutoUpdate:
            currentNode = selectedNodes[0]
            nodePath = currentNode.path()
            #title += str(len(currentNode.geometry().points()))
            ptcount = int(hou.hscriptExpression("npoints(\"" + nodePath + "\")"))
            primcount = int(hou.hscriptExpression("nprims(\"" + nodePath + "\")"))
            title += f'{ptcount:,}' + " points\n"
            title += f'{primcount:,}' + " prims\n\n"
            opinfo = hou.hscript("opinfo -n " + nodePath)[0]
            if "Memory: " in opinfo:
                memory = opinfo.split("Memory: ")[1].split("\n")[0]
                if ';' in memory:
                    # Split the text at the first ';' occurrence
                    parts = memory.split(';', 1)
                    # Rejoin with the second part encapsulated in parentheses
                    memory = f"{parts[0]} ({parts[1].strip()})"
                title += memory + "\n"
            cooktime = opinfo.split("Last Cook Time: ")[1].split("\n")[0]
            title += cooktime + "\n\n"
        screenbounds = editor.screenBounds()
        # Figure out how much we need to scale the current bounds to get to
        # a zoom level of 100 pixels per network editor unit.
        bounds = editor.visibleBounds()
        currentzoom = screenbounds.size().x() / bounds.size().x()
        title += "{:0.2f}%\n".format(currentzoom)
```

```python
        pwd = editor.pwd()
        playerparm = pwd.parm('isplayer')
        if playerparm is not None and playerparm.evalAsInt() != 0:
            title += 'Network in Playback Mode\n'
        if prefs.showPerfStats(editor):
            profile = hou.perfMon.activeProfile()
            if profile is not None:
                profiletitle = profile.title()
                if not profiletitle:
                    profiletitle = 'Profile ' + str(profile.id())
                title += profiletitle + ': ' + prefs.perfStatName(editor)
    except:
        title = ''
    return title

def networkEditorTitleRight(editor):
    try:
        title = ''
        pwd = editor.pwd()
        title += pwd.childTypeCategory().label()
    except:
        title = ''
    return title
```

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251010/18/25/image.png

Archived post by lewis.taylor.

Your primary driver is _what scale you are rendering at_; everything else is just a scaling factor going from that to the preferred working scale of the solver. Pyro handles very small values and very large ones fine, so you don't tend to mess with its working scale: 1m is 1m, for example.
FLIP is notorious for being fiddly at small scale, so for anything under 1m in real-world size you tend to work at larger scales. For example, simming liquid pouring into a glass: the real size might be 0.1m, but you would generally work at 10x that in FLIP. At the other end, if the scene is 10m, 100m or 1000m you would leave FLIP at normal scale.
Bullet is a similar deal. Anything with pieces under 1-2cm can be a pain, so we routinely work at 10x scale. But if your smallest piece is going to be decently sized, you might not change the working scale at all.
Vellum is roughly built around real-world units, so you pretty much never change its working scale.
For heightfields, it really just comes down to working in the scale that the solver/defaults are built around.
At the end of it, you're really only talking about working in the scale best for the solver/technique, and then scaling to render scale for output/lighting.
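
For instance, a minimal hou sketch of that round trip for the glass example, assuming hypothetical Transform SOPs named pre_scale and post_scale wrapped around the sim:

```python
import hou

# Hypothetical setup: FLIP sim of liquid pouring into a ~0.1m glass,
# with Transform SOPs called "pre_scale" and "post_scale" around the sim.
working_scale = 10.0  # work at 10x real size for small-scale FLIP

pre = hou.node("/obj/glass_sim/pre_scale")    # hypothetical path
post = hou.node("/obj/glass_sim/post_scale")  # hypothetical path

# Scale up into the solver's comfortable range before simming, then scale
# back down afterwards so the result matches the render-scale scene again.
pre.parm("scale").set(working_scale)
post.parm("scale").set(1.0 / working_scale)
```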

Archived post by lwwwwwws

poisson with the Smooth Fill COP or the Attribute Fill SOP leaves the contour lines pretty visible, because they only try to stay C0 smooth... i fear we must reach for everybody's favourite

the help has an RBF interpolation example that i just adapted, and it keeps things much smoother... it's also a good example of how brittle the linear solver can be, though: changing the RBF function or the numerical range of the inputs can instantly stop it converging or make it output nonsense
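
As a standalone illustration of the RBF idea (not the Linear Solver SOP setup in the attached hip), a minimal sketch using scipy's RBFInterpolator to interpolate scattered contour heights onto a grid; the sample data is made up:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up scattered samples standing in for points on contour lines:
# (x, y) positions and their known heights.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(200, 2))
height = np.sin(xy[:, 0] * 0.05) * 10.0 + xy[:, 1] * 0.1

# RBF interpolation gives a smooth (better than C0) surface through the samples.
# Both the kernel choice and the numerical range of the inputs matter: poorly
# scaled inputs or a different kernel can make the underlying linear solve
# ill-conditioned, which is the brittleness mentioned above.
rbf = RBFInterpolator(xy, height, kernel="thin_plate_spline", smoothing=0.0)

# Evaluate on a regular grid, e.g. to drive a heightfield.
gx, gy = np.meshgrid(np.linspace(0, 100, 64), np.linspace(0, 100, 64))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid_height = rbf(grid_xy).reshape(gx.shape)
```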

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253909/18/25/Linear_Solver_SOP.jpeg
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253909/18/25/Screenshot_2025-09-18_at_13.57.21.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253909/18/25/Screenshot_2025-09-18_at_13.57.25.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253909/18/25/Screenshot_2025-09-18_at_13.57.36.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253909/18/25/Ls_TerrainFromContours_v01.hipnc