Archived post by lewis.taylor.

yeah

so normally I will set my density to the highest fidelity I need, and unless there is some insanely detailed velocity requirement, I'll up the Vel scale

@hanhanxue VDB point advect will still suffer from not being able to push them 100% into the smaller eddies, so I think it's best to just own it all in the pyro sim. I took Matt's file and just added these vortex nodes. It's a bit heavy-handed here, but vortex confinement and vortex boost, along with nice substeps, will give you the control to make these swirls. You can click the initialize option on the SOP pyro to drop down a volumes sharpen setup. This will sharpen density _just_ a bit more, saving you some precious sim time.
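
For what it's worth, here is a rough Python sketch of toggling those solver options. This is not Matt's actual setup: the node path and the parameter names are assumptions from memory, so check them against the Pyro Solver SOP in your build.

```python
# Hypothetical sketch: enable the swirl-shaping options on an existing
# SOP pyro solver. Parameter names below are assumptions; verify them
# by hovering over the parameters in the node UI.
import hou

pyro = hou.node('/obj/geo1/pyrosolver1')  # hypothetical node path

for name, value in [
    ('vortexconfinement', 1),  # assumed toggle for vortex confinement
    ('vortexboost', 1),        # assumed toggle for vortex boost
    ('substeps', 2),           # assumed substeps parameter
]:
    parm = pyro.parm(name)
    if parm is not None:
        parm.set(value)
    else:
        print('parm "%s" not found; check the name on the node UI' % name)
```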

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253911/09/25/image.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253911/09/25/porno_for_pyro.hip

Archived post by mattiasmalmer

here is a simple implementation of adjacency-aware blur running over texture seams.

```
#bind layer src? val=0
#bind layer &tmp val=0
#bind layer adjacency? val=0
#bind layer &blur

@KERNEL
{
    if (@Iteration == 0) {
        @tmp.set(@src);
    } else {
        float4 sum = (float4)(0);
        // Horizontal pass: walk the adjacency map so the blur follows
        // neighbouring texels across UV seams.
        for (int i = -@pixels; i <= @pixels; i++) {
            float2 uv = @adjacency.bufferIndex((int2)(@ix + i, @iy)).xy;
            float4 px = @blur.bufferSample(uv * (float2)(@xres, @yres) - (float2)(0.5f, 0.5f));
            sum += px;
        }
        sum /= @pixels + @pixels + 1;
        @tmp.set(sum);
    }
}

@WRITEBACK
{
    // Vertical pass over the horizontally blurred buffer.
    float4 sum = (float4)(0);
    for (int i = -@pixels; i <= @pixels; i++) {
        float2 uv = @adjacency.bufferIndex((int2)(@ix, @iy + i)).xy;
        float4 px = @tmp.bufferSample(uv * (float2)(@xres, @yres) - (float2)(0.5f, 0.5f));
        sum += px;
    }
    sum /= @pixels + @pixels + 1;
    @blur.set(sum);
}
```
bind "pixels" as an int in the bindings tab, src and adjacency as inputs, and blur and tmp as outputs. enable writeback and iterations.

you need the adjacency rasterizer i made a while back for the adjacency input…

now this is a pretty dogshit blur implementation, mostly a proof of concept, but it does blur the image so there's that.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20253311/07/25/houdini_XnfFGtckSi.mp4

Archived post by reinholdr

Calculate how much time artists are wasting waiting for Houdini to open. Money is the only thing that gets the attention of the decision makers.
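
For example, with entirely made-up numbers:

```python
# Back-of-the-envelope cost of Houdini startup time; every number here
# is hypothetical, plug in your own studio's figures.
artists = 30
launches_per_day = 4
minutes_per_launch = 2.5
hourly_cost = 60.0           # fully loaded cost per artist-hour
work_days_per_year = 220

hours_lost = artists * launches_per_day * minutes_per_launch / 60.0 * work_days_per_year
print("%.0f artist-hours per year, roughly $%s" % (hours_lost, format(hours_lost * hourly_cost, ',.0f')))
```

That prints 1100 artist-hours and roughly $66,000 a year, which is the kind of number that gets decision makers to pay attention.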

The `hotl` command line utility has a flag to merge any number of otls into a single otl file. For dev work we work locally on otl files that are split per hda type, and when the package is released the `hotl -m` command merges them into a single file.
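
A sketch of what that release step can look like; the argument order for `hotl -m` here is an assumption, so check the usage text `hotl` prints before relying on it:

```python
# Hypothetical release step: merge per-HDA-type otl files into a single
# package file with hotl. The -m argument order is an assumption.
import subprocess
from pathlib import Path

dev_dir = Path('hda/dev')                        # hypothetical dev layout
release = Path('hda/release/studio_tools.otl')   # hypothetical package file

for otl in sorted(dev_dir.glob('*.otl')):
    subprocess.run(['hotl', '-m', str(otl), str(release)], check=True)
```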

Archived post by lwwwwwws

tidying up my desktop and remembered i had a go at smooth 3d colour correction a la discord.com/channels/270023348623376395/351975494947962882/1420903144707325953. didn't take it too seriously because, similarly to the "look at this RGB cube!!" tools in flame and that primatte thing that shows you the key in 3D, it doesn't actually work that well; combining multiple regular keys is really the way for surgical edits… maybe interesting for more gentle grading like the colour mesh tools baselight and resolve now have tho

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251510/31/25/Ls_ColourBooper_v01.8mb.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251510/31/25/Ls_ColourBooper_v01.hipnc

Archived post by mattiasmalmer

a production friendly way of making skeletons is to use a minimal spanning tree to just grab a set of arbitrary points and build a skeleton from them. then you can make super solid things like this:

all fun aside, minimal spanning trees are great for building skeleton structures from unsorted points: you know you get all the points, the hierarchy is logical, and it always makes a valid skeleton.
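
A minimal sketch of the idea outside Houdini (this is not the attached HDA), assuming numpy and scipy are available:

```python
# Build skeleton edges from unsorted points with a minimum spanning tree.
# The MST touches every point, contains no cycles, and yields exactly
# n-1 edges, so the result is always a valid hierarchy.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

points = np.random.rand(20, 3)  # hypothetical unsorted joint positions

# Dense pairwise distance matrix; fine for skeleton-sized point counts.
dists = squareform(pdist(points))

mst = minimum_spanning_tree(dists).tocoo()
for a, b in zip(mst.row, mst.col):
    print("bone from point %d to point %d" % (a, b))
```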

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20252710/31/25/houdini_qrNQIGMFRV.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20252710/31/25/houdini_ZIr4OV9u4L.mp4
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20252710/31/25/sop_mattias.minimalspanningtree.1.0.hdalc

Archived post by lwwwwwws

shit i have done to help fill in gaps between points like that:
- copy the points a few times and offset each copy a different random amount along v if it's a pop sim; or if it's procedural distortion, keep N up to date through the distortion and move them at right angles to it, to avoid making the shapes fluffier
- transform to NDC, flatten in Z and calculate density by finding the average distance of the closest 10 points or whatever, so you have a screen-space measure of how "piled up" the points are getting and can avoid adding too many in dense areas (see the sketch after this list)
- render velocity or screen-space tangent vector AOVs, dilate or infill them in comp, then vector motion blur, which will follow the shapes and avoid blurring across them
- vector blur in comp just based on image gradient
- should be easy in cops now to do slope blur, which blurs along contours instead of across and usually makes things look "silky"… it's what you do for hair retouching in shampoo ads
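
Here is a minimal sketch of that screen-space density measure (the second item above), assuming numpy and scipy are available and that the points have already been transformed to NDC and flattened in Z:

```python
# Average distance to the k nearest neighbours in screen space, per point.
# Small values mean the points are already piled up there, so a scattering
# step can skip adding more in those areas.
import numpy as np
from scipy.spatial import cKDTree

def screen_density(ndc_points, k=10):
    tree = cKDTree(ndc_points[:, :2])       # Z is flattened, only x/y matter
    # k + 1 because the nearest hit is the point itself at distance 0.
    dists, _ = tree.query(ndc_points[:, :2], k=k + 1)
    return dists[:, 1:].mean(axis=1)

pts = np.random.rand(1000, 3)               # stand-in for NDC positions
density = screen_density(pts)
sparse_enough = density > np.percentile(density, 25)  # skip the densest quartile
```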

Archived post by mattiasmalmer

Useful trick for filtering 3D tracked cameras:
```
/* SMOOTHCAM

   Turn on constraints on your camera and add a constraint network
   (the green squiggle button).
   Jump into the constraint network, drop a Transform Wrangle, paste
   this code in, and press the "create spare parameters" button.
   Set targetdistance somewhere a ways away from the camera, usually
   somewhere in the middle of your 3d tracked point cloud, and enable
   the orange output flag on the transform wrangle.
   Connect the getworldspace to input 0 of the wrangle.
   Drop a Filter, set Common/Units to frames, and try something like a
   15 frame filter width and a couple of passes.
   Connect the filtered version of getworldspace to input 1 on the wrangle.
   On the getworldspace, set the path to ../.. and make sure its channel
   range is set to "use full animation range" or even longer, to get
   good "ends" on your filter.
   Remember to extend your cam animation with plausible slopes to get
   good filtering. */

chopTRS c0 = c->fetchInput(0);
chopTRS c1 = c->fetchInput(1);
matrix m0 = c->fetchInputMatrix(0);
matrix m1 = c->fetchInputMatrix(1);

float dist = chf("targetdistance");

// Filtered camera position, plus aim and up points projected
// targetdistance units down the unfiltered camera's view axis.
vector source = set(0, 0, 0) * m1;
vector target = set(0, 0, -dist) * m0;
vector up = set(0, 10, -dist) * m0;
up = normalize(up - target);

// Re-aim the filtered position at the unfiltered target point.
matrix3 orient = lookat(source, target, up);
matrix TM = matrix(orient);
vector rot = cracktransform(0, 0, 1, 0, TM);

// Set transform
@t = source;
@r = rot;
```

this helps by filtering the position of the camera while making sure that the pointing of the camera is unaffected at the target distance, so none of your position jitter ends up shaking your 3d track.

a classic use case is if an object is supposed to be really close to the camera but most of your tracking dots are some ways away.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20254910/24/25/houdini_HrPUCSdRn5.mp4

Archived post by mysterypancake

for `mat3`, `&` isn't required because `mat3` is an array of vectors (`fpreal3[3]`). arrays are passed as pointers to their first element, so `mat3` is already a pointer type.
this isn't true for `mat2` and `mat4` though, since those are vector types

i started working on an opencl page today, will add more examples/sections later. it's missing a lot of basic info at the moment: github.com/MysteryPancake/Houdini-OpenCL. pls abuse me if i made any horrible mistakes

Archived post by animatrix2k7

thanks man, these are nice. he is using polycut, a nice use case for this node

Anyone know the nodegraphtitle.py trick? I searched here but there is no mention of it. If you override this file in your user directory, you can have custom stats in the network editor like this (top left).

I use this code:
```python
from __future__ import print_function
import hou
import nodegraphprefs as prefs

def networkEditorTitleLeft(editor):
    try:
        title = '\n'
        #title += "Viewport\n\n"
        selectedNodes = hou.selectedNodes()
        if len(selectedNodes) == 1 and hou.ui.updateMode() == hou.updateMode.AutoUpdate:
            currentNode = selectedNodes[0]
            nodePath = currentNode.path()
            #title += str(len(currentNode.geometry().points()))
            ptcount = int(hou.hscriptExpression("npoints(\"" + nodePath + "\")"))
            primcount = int(hou.hscriptExpression("nprims(\"" + nodePath + "\")"))
            title += str(f'{ptcount:,}') + " points\n"
            title += str(f'{primcount:,}') + " prims\n\n"

            opinfo = hou.hscript("opinfo -n " + nodePath)[0]
            if "Memory: " in opinfo:
                memory = opinfo.split("Memory: ")[1].split("\n")[0]
                if ';' in memory:
                    # Split the text at the first ';' occurrence
                    parts = memory.split(';', 1)
                    # Rejoin with the second part encapsulated in parentheses
                    memory = f"{parts[0]} ({parts[1].strip()})"
                title += memory + "\n"
            cooktime = opinfo.split("Last Cook Time: ")[1].split("\n")[0]
            title += cooktime + "\n\n"

        # Figure out how much we need to scale the current bounds to get to
        # a zoom level of 100 pixels per network editor unit.
        screenbounds = editor.screenBounds()
        bounds = editor.visibleBounds()
        currentzoom = screenbounds.size().x() / bounds.size().x()
        title += "{:0.2f}%\n".format(currentzoom)

        pwd = editor.pwd()
        playerparm = pwd.parm('isplayer')
        if playerparm is not None and playerparm.evalAsInt() != 0:
            title += 'Network in Playback Mode\n'
        if prefs.showPerfStats(editor):
            profile = hou.perfMon.activeProfile()
            if profile is not None:
                profiletitle = profile.title()
                if not profiletitle:
                    profiletitle = 'Profile ' + str(profile.id())
                title += profiletitle + ': ' + prefs.perfStatName(editor)
    except:
        title = ''
    return title

def networkEditorTitleRight(editor):
    try:
        title = ''
        pwd = editor.pwd()
        title += pwd.childTypeCategory().label()
    except:
        title = ''
    return title
```

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251010/18/25/image.png