Archived post by sniperjake945

It’s probably similar to the Clip SOP? Here’s a pared-back version of the code, which will only work for quads (since Disney modeling only ever makes full-quad meshes with planar quads, because of ptex).
```c
void march(vector src_p, dst_p; float omega_a, omega_b; int new_pts[])
{
    // edges whose endpoints share a sign have no zero crossing
    if (sign(omega_a) == sign(omega_b)) return;

    // parameter along the edge where the levelset hits zero
    float m = efit(0, omega_a, omega_b, 0, 1);
    if (m < 0 || m > 1) return;

    // interpolate the crossing position and add a point there
    vector np = lerp(src_p, dst_p, m);
    int npt = addpoint(0, np);
    append(new_pts, npt);
}

int pp[] = primpoints(0, @primnum);
int new_pts[];

vector p0 = point(0, "P", pp[0]);
vector p1 = point(0, "P", pp[1]);
vector p2 = point(0, "P", pp[2]);
vector p3 = point(0, "P", pp[3]);

float omega0 = point(0, "levelset", pp[0]);
float omega1 = point(0, "levelset", pp[1]);
float omega2 = point(0, "levelset", pp[2]);
float omega3 = point(0, "levelset", pp[3]);

// walk the four edges of the quad
march(p0, p1, omega0, omega1, new_pts);
march(p1, p2, omega1, omega2, new_pts);
march(p2, p3, omega2, omega3, new_pts);
march(p3, p0, omega3, omega0, new_pts);

if (len(new_pts) > 0)
    addprim(0, "polyline", new_pts);
```
Now, what I will say is that this might look a bit odd to people who are familiar with marching squares; it’s even more simplified than normal. That’s because our levelset attribute in this case is the dot product between our point normals and the direction to the camera origin, and ideally there’s never going to be a primitive where that levelset enters and exits through more than 2 edges, since that function is linear across a planar quad. So we don’t really have to account for multiple crossings. 🙂
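Translated out of VEX, the per-edge crossing test is just a sign check plus a linear interpolation to the zero of the levelset. A minimal plain-Python sketch (positions as tuples, no Houdini calls; names mirror the wrangle, and zero is grouped with positive for simplicity):

```python
def march(src_p, dst_p, omega_a, omega_b, new_pts):
    # edges whose endpoint levelset values share a sign have no zero crossing
    # (note: this treats omega == 0 as positive, a simplification of VEX sign())
    if (omega_a >= 0) == (omega_b >= 0):
        return
    # linear interpolation parameter where the levelset hits zero,
    # equivalent to efit(0, omega_a, omega_b, 0, 1)
    m = (0.0 - omega_a) / (omega_b - omega_a)
    if m < 0.0 or m > 1.0:
        return
    # lerp the two positions to the crossing point
    new_pts.append(tuple(a + (b - a) * m for a, b in zip(src_p, dst_p)))
```

For example, an edge going from levelset -1 to +1 yields a point exactly halfway along it.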

here’s an example with just a simple levelset

but yeah it’s effectively just the clip sop with an attribute

🙂

I used the above code though because it needed to get used in a Houdini Engine rig for Maya artists, so I needed everything to be as simple and fast as possible. The biggest bottleneck ended up being the I/O from Maya to Houdini Engine and back 🙁

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20252208/01/25/image.png

Archived post by sniperjake945

TLDR: It’s just a clamp on your velocity so nothing moves too far on any given timestep. It helps prevent instability.
**The long answer**: the CFL condition is a limit on how much something can move. Let’s say you have velocity stored in a volume; we’ll call that `v`. We also know the length of our voxel, call that `dx`, and we know our timestep `dt`.
The CFL condition says that for any value `v` we need to satisfy the inequality:
```
dt * (v / dx) <= CFL_NUMBER
```
All that means is that we’re normalizing velocity with respect to the length of a voxel. If our velocity times our timestep would move us exactly one voxel length, then the left-hand side would be 1. So if you had a default CFL of 1, then that velocity is okay and doesn’t need to be limited.

**To answer your question with respect to sourcing speed**: increasing the CFL number would allow density and velocity in a pyro sim, for example, to move more than one voxel in any given timestep. So **yes**, it would allow things to move faster with fewer substeps. The problem, however, is that it will cause the sim to look generally worse, since something like pyro uses an explicit integration scheme. Mass conservation will generally suffer, meaning you’ll see more density loss than you otherwise would, and you might get artifacting depending on how high it’s set.

**Technicalities**: the actual formula is:
```
dt * sum_i(v_i / d_i) <= CFL_NUMBER
```
where `i` corresponds to each axis; in the case of 3D sims, that’s x, y, z. So it’s the sum of velocities over the lengths of the voxel edges, but that’s not terribly important for understanding what’s happening.
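Rearranging the inequality gives the largest stable timestep, and from that the substep count for a frame. A small sketch under the simplified single-axis form above (function names are illustrative):

```python
import math

def max_dt(v_max, dx, cfl=1.0):
    # rearranged CFL inequality: dt <= cfl * dx / v,
    # using the fastest velocity in the field
    return cfl * dx / v_max

def substeps(frame_dt, v_max, dx, cfl=1.0):
    # number of equal substeps a frame must be split into
    # so every substep satisfies the CFL condition
    return max(1, math.ceil(frame_dt / max_dt(v_max, dx, cfl)))
```

So with half-unit voxels and a peak speed of 2 units/s, each substep may be at most 0.25 s, and a 1 s frame needs 4 substeps; raising the CFL number shrinks that count.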

Archived post by lewis.taylor.

enabling spot light for area lights just means you now have a scalable spot

Vs an infinitely small point emitting light

The best way to get proper glass fractures is to do the following:
* make an attribute on the internal faces of the cracks and use it as emission in your shader; this simulates caustics
* make an attribute that falls off from the crack center and use it to drive refraction roughness; this simulates micro fractures

This is the approach we developed on John Wick, with all the shattering glass stuff
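The second bullet’s falloff-to-roughness mapping can be sketched in plain Python (function names, the clamped linear falloff, and the `max_rough` parameter are all illustrative, not from the actual shader):

```python
def crack_falloff(dist, radius):
    # 1.0 at the crack center, fading linearly to 0.0 at 'radius'
    t = 1.0 - dist / radius
    return max(0.0, min(1.0, t))

def refraction_roughness(dist, radius, max_rough=0.4):
    # drive refraction roughness with the falloff
    # to fake micro-fracture scattering near the crack
    return max_rough * crack_falloff(dist, radius)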

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20255807/10/25/image.png


Archived post by siegmattel

this approach is more like you’re building different custom rig poses with much more advanced functionality

how do i make sure that blendshape animation comes through on a rop fbx character export?

i’m guessing it’s some combination of character blend shapes add and a few other nodes?

For anyone curious, I found this hip file from Edward on the SideFX forums, and it seems to work. You basically pack each blendshape, give it the same name attribute as your capture geo, add blendshape_channel and blendshape_name attributes, hide the blendshape visibility, and then set up a few detail attributes on the skeleton to act as the blendshape weights.
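The mix those skeleton detail attributes drive is the standard blendshape formula: result = base + sum of weight × (shape − base). A plain-Python sketch over flat position lists (no Houdini calls; names are illustrative):

```python
def blend(base, shapes, weights):
    # base: flat list of position components
    # shapes: dict of shape name -> flat list (same layout as base)
    # weights: dict of shape name -> float (the blendshape channel weights)
    out = list(base)
    for name, w in weights.items():
        for i, (b, s) in enumerate(zip(base, shapes[name])):
            # accumulate each shape's weighted delta from the base
            out[i] += w * (s - b)
    return out
```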

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251607/03/25/kinefxBlendShapeFromScratch.hip

[hou-tops] Archived post by fabriciochamon

Here you go, node recommender trained in Houdini.
I’ve reorganized this thing to avoid external Python code, so now a topnet scans hip files and stores node connections. In SOPs I create the ML examples, which are then trained in another topnet, and finally I do model inference in SOPs again.
Added comments so it’s (hopefully) easy to follow. Worth noting that it builds a list of available node types based on the current Houdini session and won’t evaluate HDAs that are not part of the current environment (i.e. unloaded packages).
The first run takes a while, since the ML Regression TOP creates a venv with PyTorch (5 GB!).
Regarding inference: it needs a good amount of examples to produce something useful! Also it’s just a prototype of course; it would definitely need some work on the UX, etc.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251906/29/25/image.png
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20251906/29/25/node_recommender.hiplc