The Elements Of Shaders
Non Software Specific
What is a Shader and How Are They Different From Textures?
Many students have trouble distinguishing between shaders and textures.
– Shaders are the collection of multiple surface properties of an object. (*also called materials)
– Textures are the maps that describe the variation of the surface property values.
The combination of shaders and textures can also be called “surfacing”. A “surfacer” is a person who creates both textures and shaders.
* Materials and shaders are basically the same thing; the naming varies from program to program, and in some programs they are slightly different. In Maya they are the same. Maya will always refer to shaders, not materials, but many people will use either term, even in Maya… especially when coming from other programs. Simply understand that they’re the same thing in Maya.
So the shader (material) is the collection of attributes that describe how light reacts on an object’s surface. This might include…
– diffuse colour
– diffuse value
– specular and reflection
– ambience/incandescence and glow
– transparency/opacity
– translucence
– subsurface scatter
– refraction
Textures are optional, and are usually mapped along UV coordinates. Textures allow us to describe variation in any of these attributes. The most common is diffuse colour: an example would be the change from black to white over a chequerboard. But almost every attribute can be mapped too… say, the specular of a face would be amplified on the lipstick of a character’s lips but less visible on the skin.
These changes are described with maps. Texture maps can be either procedural (created by computer algorithms) or painted with image maps in paint programs like Photoshop or 3d Paint programs like Mudbox, Mari and even later versions of Photoshop.
Models don’t necessarily need any textures at all; very clean metal objects like chrome or car paint can look good without any texture variation. The following video by student Murray Gardner has no textures and just uses even values for the shader properties.
If this Munster Koach became dirty, however, the shaders would need maps to describe how the grit and grime is distributed over the surface. Not only would the colour have to vary from black paint to brown mud where dirt collects, but other shader properties would also have to vary too, such as the reflectivity of the paint compared to the lack of reflectivity on the muddy sections.
Computer Game Textures vs Physically Correct Shaders
I’ve found that computer game textures often mislead students. Computer game shaders (especially in older games) are very simple compared to the physically based raytracing shaders we see in film. They render instantly and can’t be as complex.
Old Skool Game Shader Properties
– Diffuse Colour (only)
Film Shader (and some High End Game) Properties
– diffuse colour
– diffuse value
– specular and reflection
– ambience/incandescence and glow
– transparency/opacity
– translucence
– subsurface scatter
– refraction
– bump, normal and displacement maps
On simple games, texture artists are limited to diffuse colour to surface their models. The success of such games depends more on the talent of the texture artist. These texture maps can look visually interesting even when seen alone.
Notice how all the details below including shadows and highlights are painted into the textures.
This form of texturing is common in low poly games, and when going for stylised results we can even use it in high poly rendering (see Illustrative Textures in Mudbox).
As technology gets better games are improving their shaders to include more control over shader attributes, more like film.
Texturing for Physically Correct Shaders
Some students want their physically accurate textures to imitate the textures of computer games. But textures for complex shaders won’t look nearly as pleasing on their own. This is because separated textures (colour/bump/spec/displacement) aren’t what we’d see in real life.
Such textures depend on hi-res geometry to catch the highlights and shadows, so it’s important for the models to have the details in the geometry.
Super hi-res geo can be mapped with normal displacement maps (or faked with normal maps); these are automatically generated from sculpting programs. If viewed alone they show strange rainbow-like colours (they’ll be explained later).
Other attributes like diffuse colour/specular/reflectivity/bump maps can also look odd when viewed alone.
These textures can bring very realistic results with correct shader settings.
See this example of the texture breakdown of Snail Man (concept by Mr. Jack, all 3d by myself).
Looking at each individual map from Snail Man, the textures can be surprising when viewed alone, especially in comparison to game textures, where the single colour map is more understandable and sometimes beautiful.
So how can we begin to understand how shader properties work when broken apart?
Common Shader Properties Explained
The common shader properties are
– diffuse colour
– diffuse value
– specular and reflection
– ambience/incandescence and glow
– transparency/opacity
– translucence
– subsurface scatter
– refraction
– bump, normal and displacement maps
We’ll go through them one by one and explain how each property relates to surfaces in the real world.
Diffuse Colour
Diffuse is simply the technical word for colour. What colour is a shirt, the sky, or a character’s eyes? All of these relate to diffuse colour.
Diffuse Value
Diffuse value is how much light is reflected back from an object. Some objects, particularly heavy metals, absorb the energy of the light hitting them, so their diffuse values may be lower than usual.
Specular and Reflection
Specular and reflection are actually the same thing: they are both just reflections. However, due to the rendering time required for true reflections, we’ll sometimes fake the bright highlights on a surface. We call these specular highlights.
One of the most common specular highlights is the white round dot found in cartoon eyes.
In 3d, specular highlights can be faked and are fast to render; they’re common in real time lighting like Viewport 2.0.
In real life the specular highlight is simply the reflection of a bright light source such as the sun or the lights in a room; usually the lights are blown out, giving the reflection a specular effect.
Real raytraced reflections take some time to calculate, and interestingly many objects do not reflect light evenly. Car paint, for example, becomes more reflective at a glancing angle than when seen front on. Glass is like this too. This effect is most often described by the Fresnel equations, but it can vary from that too depending on the material.
Fresnel Reflectivity in 3d, surfaces become more reflective at greater glancing angles
Fresnel reflectivity in the real world
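The Fresnel falloff described above is often approximated in shaders with Schlick’s formula. Here’s a minimal sketch in Python, assuming an f0 (front-on reflectance) of roughly 0.04 for glass; the function name and values are illustrative, not taken from any particular renderer.

```python
import math

def schlick_reflectance(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: cosine of the angle between view direction and surface normal.
    f0: reflectance when viewed front on (~0.04 is commonly quoted for glass)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Glass seen front on vs. at a steep glancing angle
front_on = schlick_reflectance(math.cos(math.radians(0)), 0.04)
glancing = schlick_reflectance(math.cos(math.radians(85)), 0.04)
```

At the glancing angle the reflectance climbs from a few percent toward mirror-like values, which is exactly the effect visible on car paint and glass.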
Surfaces that are rough on a micro level (many metals) can blur the reflection, also known as a diffuse reflection. This will generally bump up render times considerably.
Ambience/Incandescence and Glow
Certain objects emit light: a TV screen, for example, or the tail of a glow worm. Raising the ambience/incandescence value will make the shader glow even if no lights are lighting it.
The glow value will make the object glow with a halo-like effect. In real life the glow effect is usually caused by the lens of the camera, sort of like a lens flare. Sometimes these can be tweaked in 3d programs, but it’s usually more convenient to create and tweak glows and flares in a compositing program, where render times are minimal.
Transparency and Opacity
We can see through many surfaces in the real world, and this setting adjusts how much we can see through an object. Some programs use opacity instead of transparency. Opacity is the reverse of transparency: they describe the same value, just inverted…
100% transparent = completely see through or invisible = 0% opaque
100% opaque = not see through = 0% transparent
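That inverse relationship is just simple arithmetic, as this tiny sketch shows (the function name is illustrative, not from any particular program):

```python
def opacity_from_transparency(transparency):
    # Opacity is simply 1 minus transparency (both on a 0..1 scale)
    return 1.0 - transparency

fully_transparent = opacity_from_transparency(1.0)   # an invisible object
mostly_opaque     = opacity_from_transparency(0.25)
```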
Translucence
Translucent objects are different from transparent ones. Translucence is the attribute that allows light to penetrate a surface: light can travel through the back of an object and light it up from the front. The grass in A Bug’s Life is a great example of a translucent surface.
Subsurface scattering is related to translucency, but subsurface scattering requires more calculation because it includes the bouncing of light inside a surface. Translucency is often used on fairly flat objects like the leaves of a tree or grass.
Subsurface Scattering (SSS)
Subsurface scattering is the effect of light entering an object, bouncing around inside the volume of the surface, and being bounced out again. Think of the wax lighting up on a candle. Many surfaces have elements of SSS, most famously skin, but also surfaces as diverse as marble. The Snail Man on this page has SSS on both the shell and the body.
SSS takes some time to calculate so can bump up render times.
SSS is also often broken into 3 layers that can be mixed in compositing…
– diffuse (regular surface)
– front scatter (light enters from the front bounces around and leaves from the front)
– back scatter (light enters from the back and leaves through the front)
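As a rough sketch of how those three passes might be re-mixed in compositing, here’s a per-pixel weighted sum in Python. The weights are made-up artistic values, not from any specific compositing package.

```python
# Hypothetical per-pixel brightness values from the three SSS render passes
diffuse_pass  = 0.50   # regular surface
front_scatter = 0.20   # light in and out of the front
back_scatter  = 0.10   # light through from the back

# An artist might dial each layer up or down with weights like these
final = 1.0 * diffuse_pass + 0.8 * front_scatter + 0.5 * back_scatter
```

Mixing the passes after rendering means the balance can be tweaked without re-rendering the whole shot.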
Refraction
On many transparent objects light bends as it passes through the surface. This is most simply seen in any lens, like a magnifying glass. The magnifying glass refracts light so that objects appear bigger on the other side. Other lenses do the reverse and make objects seem further away.
Refraction is used on transparent surfaces. Glass relies heavily on refraction to render realistically, as do water, gems and so on. Different materials have different levels of refractivity too.
Refractivity is a large hit on render times too, and often needs to be tweaked in the render settings if there are many refracting objects all seen through one another.
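Under the hood, renderers bend refracted rays using Snell’s law. Here’s a minimal sketch in Python, using commonly quoted index-of-refraction values (air ~1.0, glass ~1.5) purely for illustration:

```python
import math

def refracted_angle(theta_in_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refracted angle in degrees (assumes no total internal reflection)."""
    s = (n1 / n2) * math.sin(math.radians(theta_in_deg))
    return math.degrees(math.asin(s))

# Light hitting glass from air at 30 degrees bends toward the surface normal
angle_in_glass = refracted_angle(30.0, 1.0, 1.5)
```

The stronger the difference between the two indices, the more the ray bends, which is why glass and gems distort what’s behind them so noticeably.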
It’s also worth noting that water in a glass can be tricky to set up; you have to model it correctly for the refractions to work properly.
Bump and Normal and Displacement Maps
Bump and Normal maps are fake effects that can be used to simulate small bumps in objects. They’re used commonly throughout visual effects and in computer games too.
It’s important to know the differences between bump, normal, displacement maps… and displacement normal maps.
The Fakes (fast to render, no render hit)
1. Bump Map = black and white image. Black is deep, white is high. (fake effect)
2. Normal Map = multi-coloured map generated from a sculpting program (fake effect)
Normal maps are more accurate than bump maps. Normal maps are commonly used in most games and have largely superseded bump maps.
The Real Deal (slow to render, big render hit)
3. Displacement Map = black and white image (often 32 bit). The object is subdivided at render time and the polygons are moved; this is not a fake effect. Can be painted or extracted using a sculpting program.
4. Displacement Normal Map = multicoloured image (often 32 bit). A very high quality version of a displacement map, and the best one to use. It is generated by a sculpting package.
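To make the difference between a b&w bump map and a multicoloured normal map concrete, here’s a rough sketch of how a height map can be converted into per-pixel normals using finite differences. This mimics what baking tools do internally; the function is a simplified illustration, not any program’s actual algorithm.

```python
import math

def height_to_normals(height, strength=1.0):
    """Convert a 2D height (bump) map into per-pixel surface normals
    using finite differences. height is a list of rows of floats in 0..1."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope in x and y, with clamped borders
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # Normalise so the normal points up out of the surface
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        normals.append(row)
    return normals

# A perfectly flat height map yields straight-up normals, which is why
# an "empty" normal map is a uniform purple/blue colour
flat = [[0.5] * 4 for _ in range(4)]
flat_normals = height_to_normals(flat)
```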
Dazza in Cane-Toad uses bump maps, as do the small bumps on the dog bowl, because displacement and normal maps weren’t available back in 2003…
Bump maps are black and white
I still use bump maps in certain situations like painting pores on skin.
Superior to bump maps, normal maps carry more information about the direction of the surface, making them extremely convincing. Doom 3 was one of the first games to take advantage of normal maps. Notice in the picture below how the silhouette of the character is extremely jaggy, meaning the characters were very low polygon, but the normal map on the surface provides a realistic look to the face dimples and the skin texture on the belly.
Normal maps when viewed alone look purple and multi coloured.
Here’s another example of a low poly character with and without normal maps. Remember this is a fake effect, so the silhouette won’t change, but viewed at a distance it’s hardly noticeable…
Normal maps can also be used in high resolution situations, where they provide super fast render times in comparison to displacement maps, usually when the objects aren’t super close to the camera or for fine detail.
Displacement maps actually change the physical geometry of objects, sometimes adding millions of polygons, so that sculpted objects can look identical in the render engine. The polygons are subdivided and then each new polygon is moved according to the pixel values of the map. Displacement maps can be perfectly accurate for renders; the downside is the slow render times.
There are also two types of displacement maps: regular (b&w image) and normal displacement (purple/multicoloured).
Both types can be extracted from a program like Mudbox or ZBrush. Normal displacement maps are far more accurate than regular b&w maps. Normal displacement maps also offer the advantage of being able to render concave and convex surfaces, like generating ears from a normal displacement map (a cool trick in Mudbox).
It’s also worth creating normal displacement maps with 32 bit colour depth. If we extract to 8 bit colour depth (regular images) we only have 256 levels of displacement, and it’s common to see jaggy stepping artifacts. Creating maps at 32 bit guarantees accuracy.
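A quick sketch of why 8 bit causes stepping: with only 256 levels, nearby heights collapse to the same value. (Illustrative Python, not any extraction tool’s actual code.)

```python
def quantize_8bit(value):
    """Snap a 0..1 displacement height to the nearest of 256 levels."""
    return round(value * 255) / 255

# Two slightly different heights land on the same 8 bit level,
# which is what produces visible stepping in the displaced surface
a = quantize_8bit(0.500)
b = quantize_8bit(0.501)
```

A 32 bit float map keeps 0.500 and 0.501 distinct, so the displaced surface stays smooth.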
Snail man uses 32 bit normal displacement maps mixed with a bump map for the very fine details.
I’ll only generate b&w displacement maps if I want to extract height information for other maps, like linking the specular to the displacement. I’ll then take the b&w displacement map into Photoshop and merge it into the specular/colour etc.