Skinweighting Notes for Maya

General / 30 May 2020

Maya is not known to be helpful when it comes to manually skinweighting a rig. Its default smoothing options aren't very good, and unless you limit how many influences can affect each vertex, as well as keep on top of which joints are locked and which aren't, you're most likely going to have issues with Maya assigning verts to the wrong joints when you start editing weights (in particular, when removing them). 

So! Some basics I quickly learned working as a rigger, in no particular order -> 

* Find or make your own custom smoothing tool.

In my case, my lead had written some really good smoothing tools in Python that I've been using at work. Essentially what you want is something that will look at your verts and smooth/average the values based on the neighbouring edge rings. You can have different versions of this tool as well: for instance, if you want a slight smooth, you can write a tool that averages a vert selection (or a whole object) based on the verts directly next to each one.
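
If you're curious what that looks like under the hood, here's a minimal, hedged maya.cmds sketch of the idea - every name in it is mine, it assumes a single skinCluster on the mesh, and a real production tool would use the API for speed:

```python
# A hedged sketch of an "average my neighbours" weight smoother.
# Assumes one skinCluster on the mesh; names are illustrative only.
from maya import cmds

def smooth_selected_weights(iterations=1):
    verts = cmds.ls(selection=True, flatten=True)  # e.g. ['body.vtx[12]', ...]
    mesh = verts[0].split('.')[0]
    skin = cmds.ls(cmds.listHistory(mesh), type='skinCluster')[0]
    influences = cmds.skinCluster(skin, query=True, influence=True)

    for _ in range(iterations):
        for vtx in verts:
            # Grow to the surrounding verts via the connected edges.
            edges = cmds.polyListComponentConversion(vtx, toEdge=True)
            ring = cmds.ls(cmds.polyListComponentConversion(edges, toVertex=True),
                           flatten=True)
            ring = [v for v in ring if v != vtx]
            # Average each influence's weight across the neighbours.
            new_weights = []
            for inf in influences:
                vals = [cmds.skinPercent(skin, v, transform=inf, query=True)
                        for v in ring]
                new_weights.append((inf, sum(vals) / len(vals)))
            # Let Maya re-normalise so the weights still sum to 1.
            cmds.skinPercent(skin, vtx, transformValue=new_weights, normalize=True)

smooth_selected_weights()
```

Each pass pulls every selected vert's weights towards the average of its connected neighbours, which is all a basic smooth really is.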



If you're not code savvy yet, like me (yet! I will get there, then write about it!), then there are other options. More on this later :) 


* Learn anatomy, and if you're working on something stylised - ignore accurate anatomy, and focus on volumes.

The studio I'm in excels at stylised animation, which meant that while yes, I know the anatomically correct positions of the bones in a dog's leg, if you place the joints of a dog's back leg true to a real-world dog skeleton, you are unlikely to get good volume deformation in the leg. Placing joints in the centre of a mass is always a good default if you are unsure. 





* Learn how to predict a pivot's orientation before binding anything to it. 

Especially useful for placing joints for a character's jaw! A good tip I was shown at work: the rotate tool is a great visual of the arc you'll get from where your joint is currently placed. (image, wrong placement)


^You can determine whether this pivot point works using the rotate tool. In this case, it does not! The jaw swings too far back compared to what I want^


In the case of the jaw, more often than not the pivot actually ends up sitting somewhere in the neck rather than below the ear. (image, correct placement)



* Deformers aren't always necessary. 

If you enjoy using wire deformers and all that jazz, by all means use them. They can be useful when an animator plans on horribly breaking things and you don't have FX to save you. But if you don't like using them, or can't, don't panic. No one in my rigging team uses deformers; everything is classic skin clusters and blendshapes. Where I work, blendshapes are for the face and are art directed; if you need more muscle definition in the shoulder, for example, you can use joints. Deformers can get really tricky once you start mixing them with other things.

If you need more volume in a specific area like the knees, consider using dual quaternion skinning instead of classic linear - specifically the weight blended version, so you can paint your DQ weights over the knees and leave the rest as classic linear. Another way is to think outside the box and use floating joints to hold volume, constrained accordingly in your rig.
DQ is pretty good at preserving volume, but it doesn't like things scaling, which leads to my next point. 
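
For reference, here's a small hedged sketch of the weight blended switch in maya.cmds - the mesh name and vert range are made up, and you'd normally paint the blend weights rather than set them directly:

```python
# A minimal sketch of switching a skinCluster to weight blended DQ,
# assuming a mesh called 'body' that is already bound.
from maya import cmds

skin = cmds.ls(cmds.listHistory('body'), type='skinCluster')[0]
# skinningMethod: 0 = classic linear, 1 = dual quaternion, 2 = weight blended
cmds.setAttr(skin + '.skinningMethod', 2)

# Per-vertex blendWeights: 1.0 = fully DQ, 0.0 = fully classic linear.
# E.g. push a hypothetical knee vert range fully into DQ:
for i in range(200, 220):
    cmds.setAttr('{}.blendWeights[{}]'.format(skin, i), 1.0)
```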

If you can, make sure everyone is on board with sticking to the scale of the project. If your project is about a bunch of ants, then make everything a tenth of the size; don't rely on rigging, layout and anim to constantly scale it down for you (sounds ridiculous, but I've been in little group projects that have done that).

Oh yeah, ever had a model with a really big, swollen-looking knee joint? Consider using two joints for the knee bend, not one: one that sits at the top of the knee, and another just above the bottom of the knee bulge in the model.



Skinning Methods:

- Native Maya - involves more precision and locking weights once they're done

Block out each section. Create a low-res version of your model and weight everything to the pelvis. The pelvis joint is like your root: it connects to your master controls and is the base of your spine. 

Now I mentioned locking weights... in my experience, Maya likes to throw influences around like it's a party. For instance, I'd be working on the shoulder weights and Maya would reassign verts to the foot joint. I've found the best way to avoid this in general is simply to lock the joints you aren't working on.
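
That locking lives on each influence's lockInfluenceWeights (.liw) attribute, so it's easy to script. A hedged sketch, with hypothetical names:

```python
# Lock every influence except the ones you're actively painting.
from maya import cmds

def lock_all_but(skin, working_joints):
    for inf in cmds.skinCluster(skin, query=True, influence=True):
        cmds.setAttr(inf + '.liw', inf not in working_joints)

# e.g. while painting the shoulder (names are made up):
# lock_all_but('skinCluster1', ['shoulder_L_bind', 'clavicle_L_bind'])
```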

From there, you block out each section of the spine. Solid values, either 0 or 1. Nothing blended, just black and white for now. The chest bone will also have the head, neck, shoulders, arms, hands and fingers weighted to it. Between your spine bones, choose an edge loop to represent a 50% mark between the influences and weight it as such. So if I were to photobash a couple of screenshots...


The initial spine blocking would look like this!


This allows you to see if your joints are correctly placed and gauge whether things will blend well and keep their volume.
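
In script form, that blocking pass boils down to a few skinPercent calls. A hedged sketch - the mesh, joint names and vert ranges here are all invented:

```python
from maya import cmds

skin = 'skinCluster1'

# Step 1: flood everything to the pelvis (solid 1.0).
cmds.skinPercent(skin, 'body_lowres.vtx[*]',
                 transformValue=[('pelvis_bind', 1.0)])

# Step 2: block a spine section to a solid value of 1.
cmds.skinPercent(skin, 'body_lowres.vtx[300:450]',
                 transformValue=[('spine_01_bind', 1.0)])

# Step 3: weight the chosen edge loop 50/50 between the two influences.
loop = 'body_lowres.vtx[451:480]'  # hypothetical loop between spine joints
cmds.skinPercent(skin, loop,
                 transformValue=[('spine_01_bind', 0.5),
                                 ('spine_02_bind', 0.5)])
```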

After you've blocked out your torso, it's up to you whether you proceed blocking down the legs next or move onto the shoulders and neck, but it's the same principle either way. Once things are blocked out and looking good, smoothing it becomes much easier, and most of your thinking is done.

- Download NG Skin Tools - a layer-based approach to skinning (there's a free non-commercial version of this plugin)

Circling back to smoothing, NG Skin Tools has a great smoothing brush as well as a relax option. Both are great and work with soft selection too!

This plugin has its own YouTube tutorials that show how to use it, so I won't go into too much depth - I'll just explain what I use it for: 

- Assigning weights. This has given me fairly accurate results, and goes off whatever is closest to the joint. I use it all the time for joint chains in characters and props, then just relax and smooth the weights with a couple of clicks and then I'm done.

- Layered skinning - Incredibly useful for when you forget to add joints to your animal's ears! Pop your joints into your rig, make sure they're an influence on the geo, go to NG Skin Tools, create a new layer and just add in your weights. It's a way to make additions without being destructive to the weights you've worked so hard on already. If you don't like your addition, you can always delete it! (Con - you can't remove weights added in a layer; however, each layer has a mask that you can paint, and you can remove values there to hide what you don't want in your layer. Think of it as an opacity map.)

- A proper smooth brush. 

- The relax option for when I'm feeling really lazy

- Exporting and importing weights. Occasionally things horribly break and you need to transfer your weights over to new models or rigs. This tool has an export/import weights option that saves your weights as .json files. It has been surprisingly reliable and provided fairly accurate transfers, and it also brings the weights in as another layer, which is good if you want to mask out areas you don't like in the update. 


When I start a new character or creature rig, I prefer to block out the weights in native Maya's paint weights tool. After that point I switch to NG for smoothing, relaxing, and adding details.

This is just what I've picked up and my current way of working. If you have a way that works for you, by all means stick to it :)

Beginner's Guide for Fur Grooming in Houdini

General / 26 November 2019


Intro 

Alrighty, so this is mainly to get my thoughts down from what I've learned this year, from not knowing Houdini to now. 

Setup 

My biggest mistake when setting up my rig was making it rely on too much information from other departments (UVs and topology). Because I got the task so early in production, the model was still in its very early stages. 

So if I were ever in that boat again, I'd try not to rely heavily on topology or UVs, because they will change. 

Make sure you have concept art ready and references good to go before starting; it will save you a lot of time wondering what things are meant to look like. Have a look at them and the model, and see which bits should be split (e.g. the tail can be grouped and groomed separately). You can always blend guides between different areas.

Initial Guides:

What helps this process is remembering that Houdini doesn’t read the guides that a hairgen node makes as fur. Each guide is just a line with some points, aka a primitive, with some fancy attributes. So really, you’re procedurally modelling at this point.

What makes these lines hair is certain attributes; using a hairgen node initially to get lines as your guides gives you all the attributes you may need. It really depends on what you're after, but for me it was easiest to use a hairgen node. 

The hairgen node creates lines everywhere; you can control the number of segments (i.e. points) and the length of the lines. Under the hood it creates all the relevant attributes the groom nodes need. 

Once you groom your hair (more on this later), put another hairgen node as the final one.

There are two things that the hairgen node wants - the mesh (aka the “skin”) and a VDB of the skin. Don’t panic if you can’t get a VDB; it works perfectly fine without it, and to be honest my whole setup doesn’t bother with it. 

The VDB is used more down the line when you plug in your guideprocess nodes - it acts as a way to avoid collisions with the skin when you bend your fur too much, for example. The issue is, it sometimes causes the hairgen node to generate lines off the skin of the mesh, and that then causes problems further down the line, in lighting for example. 

There are other methods of course but this one is the one I'm familiar with :)

Fur Basics: 

I started off 2019 determined to learn some form of charfx, and the first thing I ended up tackling was fur grooming and sim in Houdini.

Hair and fur are lines with width and length. If you want Houdini to treat these lines as fur, then those lines require skinprimid and skinprimuv, which a hairgen node generates automatically.

I started off by attempting the fur masterclass on SideFX’s site; however, my attention span is not long enough to get through it all. What I did learn from it, at least, was how to use the shelf tools for hair and fur.

At first, I didn’t mind the shelf tools: if your geo is set up the right way and you don’t want many custom attributes, then they work just fine.

I found that once I wanted layers of fur, and variations, the shelf tools were annoying because there was too much jumping in and out of SOPs. So then I set about doing the entire hair setup in one SOP network. It’s entirely possible, and in my mind it’s much clearer. 

The shelf tools run off hairgen nodes. They don’t create lines and copy to points like I originally experimented with. You can use this to create your own setup in SOPs. 

Create one hairgen node and set a low density, your chosen length, and segments. Generally, if you’re dealing with really short fur, 2-3 segments is good. If you want some nice twisty shapes or frizz in a groom, then up your segments depending on length.

changing segments to suit length and groom ^

Doing that creates your guides, and you’ll want to groom the guides first. Houdini’s guideprocess node is fantastic for that. With that node you’re able to set direction, length (useful if you want to override what the hairgen set; it’s also one of the ways you can mask out an area and only change that length, e.g. the tail), bend, frizz, lift and more. 

Once you’ve groomed your fur as you like, you then perform another hairgen to create your final amount of fur. After that is when you clump it using the clump node, which is another amazing piece of houdini node magic.

This is explained in more detail in a bit. What's also useful is that hairgen tends to automatically pick up normals on points and uses them as the unguided fur direction. (blue dashes are normals)




Grooming: 

I've outlined the main groom tools below; it's really up to you which nodes to use to achieve the groom you want. Fur generally flows along the skin of the creature, so for me the first step was to replicate that. Bends, frizz, curls and clumping all came after setting the length and direction of the fur. The order I did things was:

hairgen (creating guides) -> set length -> set direction -> bend -> frizz (optional) -> clump -> hairgen -> clump 
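
If it helps to see that chain as actual nodes, here's a rough Python sketch of the wiring. Be warned that the node type names and input order are my assumptions - SOP types carry version namespaces (e.g. hairgen::2.0) and differ between Houdini builds, so treat this as pseudo-wiring:

```python
import hou

geo = hou.node('/obj').createNode('geo', 'fur')
skin = geo.createNode('object_merge', 'skin_in')  # bring in the mesh

# First hairgen: low-density lines to use as guides.
guides = geo.createNode('hairgen::2.0', 'guides')
guides.setInput(0, skin)

# Groom the guides: length, direction, bend, optional frizz.
bend = geo.createNode('guideprocess::2.0', 'bend')
bend.setInput(0, guides)
bend.setInput(1, skin)

# Clump the guides, then generate the final full-density fur.
clump = geo.createNode('hairclump::2.0', 'clump_guides')
clump.setInput(0, bend)
clump.setInput(1, skin)

final = geo.createNode('hairgen::2.0', 'final_fur')
final.setInput(0, skin)   # skin mesh
final.setInput(1, clump)  # groomed guides drive the final fur

geo.layoutChildren()
```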

___

This is a very manual setup I found on the side - 

Paint density > guide groom until you’re happy > hairgen > clump (+curling, if you like) > guide process (frizz, bend)

The result is really nice and works well for realistic hair grooms. Houdini may crash. Mine did so many times. 

However, the manual setup isn't open to a lot of change, which is why I prefer a more procedural approach. 

If you have a closer look at the hairgen and guide nodes, you’ll notice nearly every single option/parameter can be overridden with an attribute. Use that to your advantage.

Guide groom tool :

The guide groom tool is a manual, more XGen-like way of grooming, and it caches everything you do. That’s the most important thing to remember, because if something before it in the network changes, you have to re-cache your guide grooms to see what they look like with your edits. Note: the re-cache button doesn’t always work nicely, and you’ll sometimes just have to do it again.

Guide Process:

I should mention – you can use any of these grooming tools after the big hairgen. I choose not to because I don’t have a computer that can handle it. It’s the difference between moving 100 polys (which is how Houdini counts hair) vs 100,000. If you do have a computer that can handle it, however, feel free to go nuts. Experiment and find the best setup for you. 

The guideprocess node contains these modes:

set lift, set length, set direction, displace, wave, straighten, smooth, frizz, bend, set simulation settings

You can put down a guide process and select the mode yourself, or for instance you can type in bend when searching for a node and there will be a preset guideprocess in the bend mode for you.

Setting direction: 

This is where you can experiment to get the directions you want, how you want. For me, the normals of the geo were super important in setting the direction of the fur. It was intuitive for me to comb the normals to represent the flow of the fur. 

There are other methods like drawing curves over the mesh to represent the general flow of the area, and attribute copying/blending those across to your curves directly, or you can apply them to your normals. That way when you need to tweak direction, you just tweak the drawn curves.

The easiest part of the fur to groom for me was the back legs, and I initially used that area plus the cheeks as a testing ground.

These areas will generally have a curve to the fur, and some sort of swirl going on.



Clumping:

Clumping is important! Nothing tends to look right without it. 

Important thing to remember - If you get the dragon scale look when all you wanted was large clumps, just turn down blend. 





The hairclump node also curls very nicely -

 



Play around with the settings to get a feel for what it can do.

I had a lot of clump nodes. Check out this video - it's super cool and explains clumping: https://vimeo.com/334510961 

Halfway through, it goes on to explain subclumping and getting details into your clumps of fur. 


My favourite nodes: 

attribtransfer

hairgen

guideprocess

hairclump


Rendering in RenderMan 

Get familiar with these -

https://renderman.pixar.com/resources/RenderMan_20/PxrMarschnerHair.html

https://rmanwiki.pixar.com/display/REN/PxrHairColor


What to remember

Always jump back to reference and concept art. It’s easy to go down a rabbit hole and get distracted with a groom, and then you look up and realise you’ve completely missed the mark.




Lava in Shaderforge

Tutorial / 27 June 2019

Hello! I was tasked with creating a blob of lava in unity.

Naturally, everything I found was either a premade asset you could buy or a tutorial for UE4.

So I took the one UE4 tutorial that got close to what I wanted and tried to replicate the nodes in shaderforge.

At the same time, I sent a link to a premade Substance lava material from textures.com to a friend of mine to ask what the best plan of action was to make a similar effect. With one look she could tell that the textures it used weren’t for colour - they were masks - and it had a normal map.

Knowing that makes everything easier!

So, what you need:

Texture maps!

You’ll need a mask map (for emission and colour) and a normal map; ambient occlusion and roughness maps would also be useful, but you can still get a basic flowing lava shader without the last two. Look up some examples and make your own, or do what I do and download some free ones! I’m sure there are even better ones you can buy too.

I get 90% of my textures from this site, it’s super useful – https://www.textures.com/

Let’s get started –

Open up Shaderforge, create a basic shader.

Create a texture2D node, load in your normal map, tick normal map on. Rename it to normal map.

Create a texture asset node, load in your mask, and plug the RGB output into the Tex input of two Texture2D nodes. Rename one Diffuse and the other Emission.

If the node is green, it means it becomes an editable parameter for whoever uses the materials created from this shader.

From here on I’m going to split the steps up into categories instead of following the exact method I used; that way, if you’re only interested in panning UVs, you can just take that bit.

Panning textures:

For that “flowing” effect, you move the UVs of all the texture maps.

In this example we will be rolling the Mask (which is plugged into Emission).

Bring in a Time node and a UV Coordinates node. Add the T output of the Time node and the UV output of the Coordinates node together; the output goes into the UV input of your Texture2D node.

 

Result is a moving texture!

Though it’s moving quite quickly, especially if you want slow moving lava.

To control the speed:

Bring in two value nodes

Name one something like Speed_U, the other Speed_V

(you can use spaces in these names, I just prefer not to)

Bring in an append and a multiply node.

Connect the value nodes to the Append node.

The output of the Append goes into the first input of the Multiply node. Disconnect the Time node from the Add it’s currently in, and reconnect it to the Multiply node.

The Multiply node is then plugged into the Add node, with the UV Coordinates output as the second input.

The result of that add plugs into the UV input of the texture node.

Finally, change the values in the Value nodes to any number and watch the texture’s speed change. Set it to something you like.

Don’t forget to compile shader!

To tile UVs:

Place two value nodes down. Name one Tile_U and the other Tile_V

Create an append and multiply node.

Attach the values to the append, and plug in the output to the multiply node.

Plug the result of the Add from the speed setup above into the second input of the Multiply node.

If you didn’t do that set up, or your setup doesn’t need panning textures, just plug in the UV coordinates.

The result of the multiply is plugged into the UV input of the texture asset.

Changing the values will now tile the texture!

Controlling strength of texture:

This one is super simple – add a Multiply node, with a constant/value in input B, to a texture right before plugging it into the shader.

Done!
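
To make the graph logic concrete, here’s the same math condensed into a small Python/NumPy sketch - the function and parameter names mirror my node names, not anything Shaderforge actually exposes:

```python
import numpy as np

def lava_uv(uv, time, speed_u, speed_v, tile_u, tile_v):
    """uv: array of shape (..., 2) holding texture coordinates."""
    offset = time * np.array([speed_u, speed_v])  # Time x Append(Speed_U, Speed_V)
    panned = uv + offset                          # the Add node
    return panned * np.array([tile_u, tile_v])    # the tiling Multiply

def texture_strength(sample, strength):
    return sample * strength                      # the Multiply before the shader input

# e.g. slow upward flow, tiled twice:
uv = np.array([0.25, 0.75])
print(lava_uv(uv, time=10.0, speed_u=0.0, speed_v=0.02, tile_u=2.0, tile_v=2.0))
```

The key point is the order: speed scales time before the add, and tiling multiplies after it.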

Now all that’s left is to set up your normal map and the other mask for colour using the above techniques and you will have completed a lava shader!

Extra tidbits:

Play around with the different ways you can put a colour over your mask

For example, you could divide the colour by the mask –

Or you could subtract it - whatever works for you.

Note: The shader will not compile if you have two green nodes with the same name.

Dynamic Finger Controls – Maya

General / 17 April 2019

I created a dynamic finger control setup in 2018, it’s pretty useful and stable so far, so I figured I’d share how it works.

Dynamic finger controls are when the animator can move the fingers manually over the top of preset shapes. E.g. the rigger sets up a slider to make a fist, and then the animator can use that plus the manual controls to tweak the shape.

Example of what they are ^

Pros –

  • It allows for quicker animating
  • Can have more variety
  • You feel good
  • Uses fundamental parenting and hierarchy tricks – lower chance of it breaking between Maya versions

Cons:

  • Time consuming

For ease of understanding, when I say “larger joints” or “orient joints” I’m referring to the images below – the larger joints are the orients/set driven key joints (SDK joint/control).

The larger joints are not the joints you bind to the skin.

Making the skeleton:

Firstly, you’ll need to double up on your knuckle joints and the bends in the fingers (duplicate them), like this:

So, the larger joints are the duplicates that will control the fingers via the main hand control – pointing or making fist shapes will use these joints.
Label them something different from your bind joints to indicate that they’re not to be bound/skinned.

I have labelled these joints orient instead of bind.

(hope this picture helps. Testing maya’s labelling system)

Outside of that, the hand skeleton is set up as you normally would. There are many tutorials out there on how to set up a hand rig, the most basic being a joint for the wrist and then a joint for each bone down the fingers (you can look at your own hand for reference). All you need is your joints - no constraints or splines yet, just a hand skeleton with duplicated knuckle joints.

Here is how I’ve set up my skeleton and the naming conventions as well as hierarchy.

(more info below)

Controls:

Quick clarification –
Constraint vs hierarchy: essentially they do the same job, though a proper constraint can override a hierarchy setup. It also allows more flexibility in what can control what, whereas hierarchy can be limiting.
An easier way to think about this: if you parent a sphere to a cube, the sphere follows the cube while keeping the freedom to do whatever it wants. If you were to constrain the sphere in some way, e.g. orient constrain the sphere to the cube, the cube determines the orientation of the sphere, and the sphere loses that freedom/flexibility.
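
A tiny hedged demo of that difference, using throwaway objects:

```python
from maya import cmds

cube = cmds.polyCube(name='demo_cube')[0]
parented = cmds.polySphere(name='parented_sphere')[0]
constrained = cmds.polySphere(name='constrained_sphere')[0]

# Hierarchy: parented_sphere follows the cube, but you can still
# rotate and move it freely on top of that.
cmds.parent(parented, cube)

# Constraint: the cube now dictates constrained_sphere's orientation -
# its rotate channels are driven and no longer free.
cmds.orientConstraint(cube, constrained)
```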

I originally thought this was going to be simple. You have two sets of joints overlapped, so that one will control all your SDKs, then the other one to be your manual controls for the animators. These can all be accessed in the hand’s master control.

When you set up your skeleton the orient joints should rotate the chain of joints underneath. You can then constrain a curve to the orient joint, and another curve to control the bind joint that will be the manual control.

But you’ll find that once the controls are hooked up/constrained to the joints, the orient control no longer moves the manual control’s curve — easy fix, parent the curves in the hierarchy right?

That will work. The only catch is that if you want your orient controls to be invisible, like in my case (I was using them to drive shapes in the main hand control), then hiding them will make the manual controls the animator needs invisible too. I say that now because I don’t know what you’re after.

Solution? Hide the shape nodes of the set driven key controls once you’re done (make *sure* you’re done).

Or

You can constrain them

Either way works. If you have a tendency to get lost with too many constraints, like me, parenting the groups and their curves in the hierarchy and hiding the SDK’s shape node later will probably work better.
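
The hide step is scriptable. A hedged sketch, assuming the SDK controls share a naming prefix like mine do:

```python
from maya import cmds

for ctrl in cmds.ls('SDK_ctrl_*', type='transform'):
    for shape in cmds.listRelatives(ctrl, shapes=True) or []:
        # Hiding the shape keeps the transform (and its children) visible.
        cmds.setAttr(shape + '.visibility', 0)
```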

There are some important things to keep in mind when making controls. If you’re trying to align the orientation of your control’s pivot point to your thumb joints, for example, freezing transforms on your curve will reset your pivot orientation to world. To account for this (see the sketch after this list),

  • Every control you make needs to be in its own group
  • Make sure you name them properly in a manner you understand
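
Here’s a hedged maya.cmds sketch of that offset-group trick - the naming follows my convention, so swap in your own:

```python
# The group soaks up the joint's position/orientation so the curve
# itself stays zeroed with a correctly oriented pivot.
from maya import cmds

def make_offset_ctrl(joint, name):
    ctrl = cmds.circle(name='ctrl_' + name, normal=(1, 0, 0))[0]
    ori = cmds.group(ctrl, name='ORIctrl_' + name)
    # Snap the group (not the curve) to the joint, then delete the
    # temporary constraint - the curve now sits at the joint with
    # clean channels.
    cmds.delete(cmds.parentConstraint(joint, ori))
    return ori, ctrl

# e.g. (hypothetical joint name):
# make_offset_ctrl('bind_L_thumb_a01', 'L_thumb_a01')
```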

Hierarchy:

Naming breakdown – 
ORI = controls the pivot point orient of the curve parented underneath
Do not freeze orient transforms on this group – they are the offset groups that allow for correct pivot orientation on your curve

SDK = master ctrl/jnt – the controls that the Set Driven Key will be going off

-GRPctrls_L_Fingers

  • GRP_rigCTRLS_L_fingers_01
    • GRPctrls_L_thumb_01
      • GRP_CTRLS_L_thumb_a01
        • ORI_SDK_ctrl_L_thumb_a01
          • SDK_ctrl_L_thumb_a01
            • ORIctrl_L_thumb_a01
              • ctrl_L_thumb_a01
      • GRP_CTRLs_L_thumb_b01
        • ORI_SDK_ctrl_L_thumb_b01
          • SDK_ctrl_L_thumb_b01
            • ORIctrl_L_thumb_b01
              • ctrl_L_thumb_b01
      • GRP_CTRLs_L_thumb_c01
        • ORI_SDK_ctrl_L_thumb_c01
          • SDK_ctrl_L_thumb_c01
            • ORIctrl_L_thumb_c01
              • ctrl_L_thumb_c01
    • GRPctrls_L_index_01
      • GRP_CTRLS_L_index_a01
        • ORI_SDK_ctrl_L_index_a01
          • SDK_ctrl_L_index_a01
            • ORIctrl_L_index_a01
              • ctrl_L_index_a01

Etc.

^ hierarchy for groups and controls

— useful script for renaming – wp Rename (created by William Petruccelli) —

Last steps for control setup:

Blue means curves, purple means joints (that’s their icon colour in the hierarchy).

  • Parent or orient constrain the SDK controls to the orient/large joints respectively (e.g. SDK_ctrl_L_thumb_a01 -> orient_L_thumb_a01).
  • Parent or orient constrain the manual controls that the animator will use (you can add the prefix anim to them quickly using that wp rename script if this is getting too confusing) to the bind joints respectively (e.g. ctrl_L_thumb_b01 -> bind_L_thumb_b01)
  • Parent constrain the last curve in each section to the first group in the next section:

ctrl_L_thumb_a01 -> GRP_CTRLS_L_thumb_b01

Repeat this for every section. So a01 will control b01 through a parent constraint, and b01 will control c01

Then repeat that for each finger
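
Since the names follow a pattern, the whole constraint pass can be looped. A hedged sketch - the finger list, sections and joint prefixes are illustrative, matching my convention above:

```python
from maya import cmds

fingers = ['thumb', 'index', 'middle', 'ring', 'pinky']
sections = ['a01', 'b01', 'c01']

for finger in fingers:
    for i, sec in enumerate(sections):
        sdk_ctrl = 'SDK_ctrl_L_{}_{}'.format(finger, sec)
        anim_ctrl = 'ctrl_L_{}_{}'.format(finger, sec)
        # SDK controls drive the orient joints, manual controls the bind joints.
        cmds.orientConstraint(sdk_ctrl, 'orient_L_{}_{}'.format(finger, sec))
        cmds.orientConstraint(anim_ctrl, 'bind_L_{}_{}'.format(finger, sec))
        # Each section's manual control carries the next section's group along.
        if i + 1 < len(sections):
            next_grp = 'GRP_CTRLS_L_{}_{}'.format(finger, sections[i + 1])
            cmds.parentConstraint(anim_ctrl, next_grp, maintainOffset=True)
```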

What now?

So now you have your controls working: you can move your SDK controls around, and then move the manual ones on top of those.
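
The last piece is the preset shapes themselves, which are driven keys on the SDK controls from a slider on the main hand control. A hedged sketch - the control name, attribute and values here are all made up:

```python
# A 0-10 'fist' slider on a hypothetical hand control driving one
# SDK control's curl. Repeat per joint/axis as needed.
from maya import cmds

cmds.addAttr('ctrl_L_hand', longName='fist', attributeType='double',
             minValue=0, maxValue=10, defaultValue=0, keyable=True)

# Open hand at fist = 0...
cmds.setDrivenKeyframe('SDK_ctrl_L_index_a01.rotateZ',
                       currentDriver='ctrl_L_hand.fist',
                       driverValue=0, value=0)
# ...curled at fist = 10 (sign/axis depend on your joint orients).
cmds.setDrivenKeyframe('SDK_ctrl_L_index_a01.rotateZ',
                       currentDriver='ctrl_L_hand.fist',
                       driverValue=10, value=-90)
```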