Archived post by mattiasmalmer

Useful trick for filtering 3D tracked cameras:
```
/* SMOOTHCAM

   Turn on constraints on your camera and add a constraint network
   (green squiggle button). Jump into the constraintnetwork, drop a
   Transform Wrangle, paste this code, and press the "create spare
   parameters" button.

   Set targetdistance to somewhere a ways away from the camera, usually
   somewhere in the middle of your 3D-tracked point cloud. Enable the
   orange output flag on the Transform Wrangle.

   Connect the getworldspace to input 0 of the wrangle.

   Drop a Filter CHOP, set Common/Units to frames, and try something
   like a 15-frame filter width and a couple of passes. Connect the
   filtered version of getworldspace to input 1 of the wrangle.

   On the getworldspace, set the path to ../.. and make sure its channel
   range is set to "Use Full Animation Range" (or even longer) to get
   good "ends" on your filter. Remember to extend your camera animation
   with plausible slopes to get good filtering. */

chopTRS c0 = c->fetchInput(0);
chopTRS c1 = c->fetchInput(1);
matrix m0 = c->fetchInputMatrix(0);  // raw tracked camera transform
matrix m1 = c->fetchInputMatrix(1);  // filtered camera transform

float dist = chf("targetdistance");

vector source = set(0, 0, 0) * m1;      // filtered camera position
vector target = set(0, 0, -dist) * m0;  // target point on the raw view ray
vector up     = set(0, 10, -dist) * m0;
up = normalize(up - target);

matrix3 orient = lookat(source, target, up);
matrix TM = matrix(orient);
vector rot = cracktransform(0, 0, 1, 0, TM);

// Set transform
@t = source;
@r = rot;
```

This helps by filtering the position of the camera while making sure that the camera's pointing is unaffected at the target distance, so none of your position jitter ends up shaking your 3D track.

A classic use case is when an object is supposed to be really close to the camera but most of your tracking dots are some ways away.
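The geometry behind the trick can be sketched in plain Python (a hypothetical illustration of the idea, not the VEX above; all names here are mine): pin the look-at target on the raw camera's view ray at targetdistance, then re-aim the filtered position at that fixed point. When the filter changes nothing, the pointing is untouched.

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / length for x in v]

def smoothcam_aim(raw_pos, raw_fwd, filt_pos, dist):
    """Return the new position and forward direction for the smoothed camera.

    raw_pos/raw_fwd: position and view direction from the 3D track;
    filt_pos: the filtered position; dist: the targetdistance parameter.
    """
    # Pin the target on the RAW view ray, so the tracked point stays put
    target = [raw_pos[i] + dist * raw_fwd[i] for i in range(3)]
    # Re-aim the smoothed camera at that fixed target
    new_fwd = normalize([target[i] - filt_pos[i] for i in range(3)])
    return filt_pos, new_fwd

# With no filtering applied, the pointing is unchanged:
pos, fwd = smoothcam_aim([0, 0, 0], [0, 0, -1], [0, 0, 0], 10.0)
# fwd == [0.0, 0.0, -1.0]
```

Jittering the position sideways only tilts the camera back toward the pinned target, which is why the track point itself does not shake.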

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20254910/24/25/houdini_HrPUCSdRn5.mp4

Archived post by mysterypancake

For `mat3`, `&` isn't required because `mat3` is an array of vectors (`fpreal3[3]`). Arrays are passed as pointers to their first element, so `mat3` is already a pointer type.
This isn't true for `mat2` and `mat4`, though, since those are vector types.

I started working on an OpenCL page today and will add more examples/sections later. It's missing a lot of basic info at the moment: github.com/MysteryPancake/Houdini-OpenCL. Pls abuse me if I made any horrible mistakes.

Archived post by animatrix2k7

Thanks man, these are nice. He's using PolyCut; nice use case for that node.

Anyone know the nodegraphtitle.py trick? I searched here but there's no mention of it. If you override this file in your user directory, you can get custom stats on the network editor like this (top left).

I use this code:
```python
from __future__ import print_function

import hou
import nodegraphprefs as prefs


def networkEditorTitleLeft(editor):
    try:
        title = '\n'
        #title += "Viewport\n\n"
        selectedNodes = hou.selectedNodes()
        if len(selectedNodes) == 1 and hou.ui.updateMode() == hou.updateMode.AutoUpdate:
            currentNode = selectedNodes[0]
            nodePath = currentNode.path()
            #title += str(len(currentNode.geometry().points()))
            ptcount = int(hou.hscriptExpression("npoints(\"" + nodePath + "\")"))
            primcount = int(hou.hscriptExpression("nprims(\"" + nodePath + "\")"))
            title += f"{ptcount:,} points\n"
            title += f"{primcount:,} prims\n\n"
            opinfo = hou.hscript("opinfo -n " + nodePath)[0]
            if "Memory: " in opinfo:
                memory = opinfo.split("Memory: ")[1].split("\n")[0]
                if ';' in memory:
                    # Split the text at the first ';' occurrence
                    parts = memory.split(';', 1)
                    # Rejoin with the second part encapsulated in parentheses
                    memory = f"{parts[0]} ({parts[1].strip()})"
                title += memory + "\n"
            cooktime = opinfo.split("Last Cook Time: ")[1].split("\n")[0]
            title += cooktime + "\n\n"
        # Figure out how much we need to scale the current bounds to get to
        # a zoom level of 100 pixels per network editor unit.
        screenbounds = editor.screenBounds()
        bounds = editor.visibleBounds()
        currentzoom = screenbounds.size().x() / bounds.size().x()
        title += "{:0.2f}%\n".format(currentzoom)

        pwd = editor.pwd()
        playerparm = pwd.parm('isplayer')
        if playerparm is not None and playerparm.evalAsInt() != 0:
            title += 'Network in Playback Mode\n'
        if prefs.showPerfStats(editor):
            profile = hou.perfMon.activeProfile()
            if profile is not None:
                profiletitle = profile.title()
                if not profiletitle:
                    profiletitle = 'Profile ' + str(profile.id())
                title += profiletitle + ': ' + prefs.perfStatName(editor)
    except:
        title = ''
    return title


def networkEditorTitleRight(editor):
    try:
        title = ''
        pwd = editor.pwd()
        title += pwd.childTypeCategory().label()
    except:
        title = ''
    return title
```

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251010/18/25/image.png

Archived post by sniperjake945

I know we've all moved on from the hex sphere conversation, but I will say the planarization method SideFX is using in the Facet SOP is actually so unflattering. I'm assuming it's generating some kind of per-face normal, then using the average position and projecting with respect to that. But there are so many better methods. For instance, using the local/local solve from roipo.github.io/publication/poranne-2013-interactive/planarization.pdf
we get the result on the right after 50 iterations (which is planar for all faceted polygons), vs. what's coming out of the Facet SOP with Make Planar turned on (left).
The file also includes an example of the local/global solve (or at least to the best of my ability it does).

If we planarize before faceting we can get even better results in some cases, like this sphere example: the left is Facet and the right is the local/local solve (faceted after).
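The iterative projection idea can be sketched like this. A simplified stand-in of my own, not the paper's actual local/local solve and not the hip file's code; the Newell-normal plane fit and all names are my own choices: repeatedly project each face's points onto that face's best-fit plane, then average the per-point updates across shared faces.

```python
def face_normal(pts):
    # Newell's method: a robust normal for a possibly non-planar polygon
    nx = ny = nz = 0.0
    count = len(pts)
    for i in range(count):
        x0, y0, z0 = pts[i]
        x1, y1, z1 = pts[(i + 1) % count]
        nx += (y0 - y1) * (z0 + z1)
        ny += (z0 - z1) * (x0 + x1)
        nz += (x0 - x1) * (y0 + y1)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
    return (nx / length, ny / length, nz / length)

def planarize(points, faces, iterations=50):
    pts = [list(p) for p in points]
    for _ in range(iterations):
        accum = [[0.0, 0.0, 0.0] for _ in pts]
        counts = [0] * len(pts)
        for face in faces:
            fp = [pts[i] for i in face]
            n = face_normal(fp)
            c = [sum(p[k] for p in fp) / len(fp) for k in range(3)]
            for i in face:
                p = pts[i]
                # signed distance to the face's plane, then project onto it
                d = sum((p[k] - c[k]) * n[k] for k in range(3))
                for k in range(3):
                    accum[i][k] += p[k] - d * n[k]
                counts[i] += 1
        # average the projected positions at points shared by several faces
        for i, cnt in enumerate(counts):
            if cnt:
                pts[i] = [accum[i][k] / cnt for k in range(3)]
    return pts
```

A single isolated face becomes exactly planar in one pass; on a closed mesh the averaging step is what the iterations have to fight, which is why many passes are needed.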

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250810/18/25/image.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250810/18/25/jr_planarize_polygons.hip

Archived post by lwwwwwws

OK, that pic flight took of the sunset shadow had been bugging me (discord.com/channels/270023348623376395/351983374510063636/1427701531674939523), so I did the obvious thing: downloaded an ETOPO DEM GeoTIFF and used it to displace a scale model of the Earth, with a layer of uniform volume for the atmosphere. Then I looked up exactly which direction the sun set in on that day and put a sphere light over there, 149 million km away 🌄 Waddya know, there are two mountains in just the right place, and Karma can kind of render a sunset even though it's not spectral and doesn't really have Rayleigh scattering lobes.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250110/16/25/Ls_KarmaSunset_v01.zip
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250110/16/25/Screenshot_2025-10-16_at_21.20.12.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250110/16/25/Screenshot_2025-10-16_at_21.10.22.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20250110/16/25/Screenshot_2025-10-16_at_21.12.10.png

Archived post by lewis.taylor.

it can be sped up

Regarding creating density, here's a little trick I use with all my sourcing. It makes the emission more natural and reduces visible, bad-looking sourcing.

Mult your density with a remapped normalized age. This starts it out from zero, ramps up, and fades back down.
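In a wrangle this would typically be a `chramp` over `@age / @life`; here is the remap as a plain-Python illustration (the ramp shape and the parameter names are my own guesses, not the poster's exact setup):

```python
def age_fade(age, life, ramp_up=0.2, fade_start=0.6):
    """Remap normalized age to a 0 -> 1 -> 0 density multiplier."""
    n = max(0.0, min(1.0, age / life))  # normalized age in [0, 1]
    if n < ramp_up:                     # ramp up from zero at birth
        return n / ramp_up
    if n > fade_start:                  # fade back down toward death
        return (1.0 - n) / (1.0 - fade_start)
    return 1.0                          # full density in between

# Example: the multiplier over a particle's life
samples = [round(age_fade(t / 10.0, 1.0), 2) for t in range(11)]
# starts at 0.0, holds at 1.0 mid-life, returns to 0.0
```

Because freshly born and about-to-die points contribute no density, the source never pops in or out with a hard edge.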

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20254410/15/25/image.png