Archived post by Wyeth

Little tip that I have found useful when playing with the new bake tools: I have really been struggling to get an actually acceptable low poly out of Houdini’s automatic tools like Instant Meshes or PolyReduce. They get you halfway there, but nothing I would consider super clean/shippable. One thing that is helping is the new GameDev Measure Curvature node. I throw one of those in after a remesh down to 50k or so to get overall curvature, take the concave range to 10 and contrast/intensity to 3, blur a bit, set convex to 0, single color value. Then PolyReduce can use the color to guide the reduction while retaining detail across areas of high curvature. Starts to assist the silhouette preservation…
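The remap described above can be sketched in plain Python. This is a hedged illustration of the idea, not Houdini's actual implementation: `curvature_to_weight`, its parameter names, and the exact remap curve are all assumptions standing in for the Measure Curvature SOP's concave-range and contrast controls.

```python
# Hedged sketch: remap a raw curvature measurement into a [0, 1]
# retention weight, roughly what the curvature SOP's range/contrast
# controls do before PolyReduce reads the color to guide reduction.
# Names and the exact curve are illustrative, not Houdini's.

def curvature_to_weight(curvature, concave_range=10.0, contrast=3.0):
    """Map signed curvature to a retention weight in [0, 1]."""
    # Clamp the magnitude to the chosen range, normalize to [0, 1],
    # then boost with contrast so mid-range curvature is kept brighter.
    t = min(abs(curvature), concave_range) / concave_range
    return t ** (1.0 / contrast)  # contrast > 1 pushes mid values up

# Flat areas get weight 0 (reduce freely); high-curvature areas get 1.
weights = [curvature_to_weight(c) for c in (0.0, 1.0, 5.0, 20.0)]
```

PolyReduce-style tools then spend more of their polygon budget where the weight (color) is high, which is what preserves the silhouette.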
I still get holes, jagged polys, and artifacts quite a bit once the resolution gets down to game-ready, though. Definitely still trying to find a great pipeline there.

Perhaps “LOD create” holds some tricks…

Exaggerated for effect but you get the idea

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20194304/02/19/unknown.png

Archived post by PaqWak

@bhenriksson, I can see this feedback loop working indeed, thanks a lot for the suggestion!

So the for-loop with feedback works really well, I just had to add a little bit of jittering to help the UV Layout packing algorithm.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20183411/04/18/RoomLayout.gif
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20183411/04/18/RoomV9.hiplc

Archived post by PaqWak

Hi Luiz, here’s my procedure:
A) I create a collection of N rooms, integer sized (1–4 units long, 2–4 units wide), and I UV-pack them together (UV Layout).
B) Boolean / ConvertLine the result, so I have unique polyline primitives between all the points.
C) I create the red inside polyline structures.
D) IntersectionAnalysis between the red and the black lines, so I can delete every red line that touches nothing…
E) …and also keep only one red primitive, randomly, for every black wall line.
F) PolyExpand the red line.
G) Boolean subtract with the black wall.
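Steps D–E (drop untouched red lines, then keep one random red line per wall) can be sketched like this. This is a hedged stand-in: the dictionary of candidates is an assumption about what the IntersectionAnalysis output would give you, not an actual Houdini data structure.

```python
import random

# Hedged sketch of steps (D)-(E): given, for each "black" wall
# primitive, the list of "red" door candidates that intersect it,
# keep exactly one candidate per wall, at random. Walls with no
# candidates are simply dropped (step D already deleted those lines).
# The {wall_id: [red_prim_ids]} shape is an illustrative assumption.

def pick_one_door_per_wall(candidates_by_wall, seed=0):
    """Return {wall_id: chosen_red_prim_id}, one entry per wall with candidates."""
    rng = random.Random(seed)  # seeded for repeatable layouts
    kept = {}
    for wall, reds in candidates_by_wall.items():
        if reds:
            kept[wall] = rng.choice(reds)
    return kept

doors = pick_one_door_per_wall({0: [10, 11], 1: [12], 2: []})
```

Everything the function returns is what survives into step F (PolyExpand) and the final boolean subtract.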
The Game Jam version was slightly more complex because I also added a corridor in the mix, but the idea is the same.

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/20181911/04/18/Logic.jpg

Archived post by Wyeth

If you guys want my favorite trick for this, it’s to pack a bunch of quads into either a regular or random grid, with two of the points given black vertex color and two given white. The white ones pull towards a world-space position; the black ones stay locked to their position. Then UV-map the “sea” of quads planar, and play a movie back on the material (or pan a fog texture, or whatever). You can even use the luminosity of the shader pixels to decide whether or not to collapse the quads to zero, to reduce overdraw on dim parts of the projection. Throw in a little fresnel, use the spline-thicken function to make the quads always face the camera… whatever. This trick makes elaborate godrays that either project to a source (think a movie projector), follow the intricacies of a shader of light coming through a window, or form bright crepuscular rays in the distance coming down from clouds. It looks dope and nobody ever uses it for anything 😃 We made movie projectors and hologram projectors with it on a couple of different projects.
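The black/white vertex-color pull is just a per-vertex lerp. Here is a minimal sketch of the math in plain Python; in engine this would live in the vertex shader, and `pull_vertex` and its parameters are illustrative names, not any engine's API.

```python
# Hedged sketch: per-vertex color acts as a mask. White verts
# (color = 1.0) lerp all the way to a world-space target (e.g. the
# projector position); black verts (color = 0.0) stay locked in place.

def pull_vertex(position, color, target, amount=1.0):
    """color: 0.0 (locked) or 1.0 (pulled). Lerp toward target by color * amount."""
    t = color * amount
    return tuple(p + (g - p) * t for p, g in zip(position, target))

target = (0.0, 5.0, 0.0)  # stand-in for the projector's world position
white = pull_vertex((1.0, 0.0, 0.0), 1.0, target)  # snaps to the target
black = pull_vertex((1.0, 0.0, 0.0), 0.0, target)  # stays where it was
```

With two white and two black verts per quad, every quad becomes a thin streak from its grid position to the common target, which is exactly the godray fan described above.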

BTW, if you do this on real 3D geometry using dot products (take the dot between the geometry and a direction to a world-space position, and collapse any facing geo to that point), you can make projected 3D rays that conform perfectly to the shape of the object and its shader, even as it animates. It’s the god-tier feature for making holograms look awesome in realtime, and again I never see anyone use it.
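A hedged sketch of that dot-product test, under the assumption that "facing" means the vertex normal points toward the projector position; the function name, threshold, and collapse-to-the-point behavior are illustrative choices, not the author's exact setup.

```python
import math

# Hedged sketch: geometry whose normal faces the projector point gets
# collapsed onto it, so the resulting rays hug the animated surface.
# Back-facing geometry is left alone.

def collapse_if_facing(position, normal, projector, threshold=0.0):
    """Collapse position to projector if its normal faces the projector."""
    to_proj = tuple(p - q for p, q in zip(projector, position))
    length = math.sqrt(sum(c * c for c in to_proj))
    to_proj = tuple(c / length for c in to_proj)  # normalized direction
    facing = sum(n * d for n, d in zip(normal, to_proj))  # dot product
    return projector if facing > threshold else position

p = collapse_if_facing((0, 0, 0), (0, 1, 0), (0, 5, 0))   # facing: collapses
q = collapse_if_facing((0, 0, 0), (0, -1, 0), (0, 5, 0))  # back face: stays
```

Run per-vertex (in a vertex shader), this turns the front shell of the mesh into rays converging on the projector while the silhouette still tracks the animating object.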

Archived post by Wyeth

@mestela are you in forward or deferred? In forward, baseline shaders cost about 4x as much in instruction count as in deferred; however, the shader complexity viewmode doesn’t remap to show you a normalized view of “expensive”. It’s remappable in the INI, but that isn’t immediately apparent.
Also what do you mean when you say redlining in vertex shader but not pixel shader cost?

Or do you just mean the little VS/PS guy jumping around? I don’t trust that view AT ALL. Edit: to explain, the shader complexity view is great and can be trusted; it’s the little PS/VS dude jumping around that means nothing.

Do you *really* want some mad info? Do the following:
Type r.ShowMaterialDrawEvents 1
Then type profilegpu (or hit shift-ctrl-comma)

Now expand Scene/Translucency, and you will see all your materials and their discrete costs.

On sprites, pixel fill will ALWAYS be the bottleneck compared to the vertex shader. There are only four verts per quad, but in an overdrawn effect you might be shading 10,000 pixels 10+ times each.

This is why offloading work to the vertex shader on sprites is such a powerful optimization. Anything that can be linearly interpolated can be offloaded to the VS on sprites (texture-coordinate work, for example, which is inherently lerped), and might then happen only 40 times in the frame (once per vertex) vs. 100,000 times (once per pixel).
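The back-of-envelope arithmetic behind that claim, using the illustrative numbers from the two paragraphs above (10 quads, 10,000 covered pixels, ~10x overdraw):

```python
# Hedged back-of-envelope: VS vs PS invocation counts for an
# overdrawn sprite effect. All counts are the illustrative figures
# from the discussion, not measurements.

quads = 10
verts_per_quad = 4
pixels_covered = 10_000
overdraw = 10  # each covered pixel shaded ~10 times

vs_invocations = quads * verts_per_quad     # 40: once per vertex
ps_invocations = pixels_covered * overdraw  # 100,000: once per shaded pixel

ratio = ps_invocations / vs_invocations
# Every instruction moved from the pixel shader to the vertex shader
# (and lerped across the quad for free) executes ~ratio times less often.
```

With these numbers the ratio is 2,500:1, which is why even cheap per-pixel work adds up on overdrawn sprites.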

Also note: whether the particles are simulated on the CPU or the GPU has no bearing on the shader cost. The engine doesn’t care where they are simulated; once it hits the renderer, it’s all the same.

For reference, if I capture the “TutorialParticleSystem” in engine content using the aforementioned ShowMaterialDrawEvents + profilegpu combo, it shows up as costing 0.11 milliseconds when mostly fullscreen.

If we were working in VR, I generally like to keep my GPU time under 9 milliseconds, so that’s a small chunk of our frame, and I likely wouldn’t pay it too much mind in comparison to other costs.
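Putting those two numbers together, the measured cost against the stated VR budget:

```python
# Hedged arithmetic: the measured 0.11 ms effect against a ~9 ms
# VR GPU budget, per the figures quoted above.

effect_ms = 0.11
vr_budget_ms = 9.0

share = effect_ms / vr_budget_ms  # fraction of the frame budget
# share is roughly 0.012, i.e. about 1.2% of the GPU frame,
# which is why it isn't worth much optimization attention.
```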

Attachments in this post:
http://fx-td.com/houdiniandchill/wp-content/uploads/discord/unknown.png