Morten S. Mikkelsen recently posted a
technical paper describing a method of perturbing the surface normal on the GPU with
the bump-mapping technique widely used in the offline rendering world. Its main advantage over the classic
normal-mapping technique of real-time rendering is that it needs one channel instead of three to store the perturbation info. You also don't need to carry all the tangent/binormal data around in vertex buffers and shaders. It looked like a silver bullet to me.
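The gist, for those who haven't read the paper: instead of a precomputed tangent frame, the shader derives one per pixel from screen-space derivatives of the surface position, and perturbs the interpolated normal with screen-space derivatives of the height. A rough sketch along the lines of Listing 1 (variable names are mine, so check the paper for the exact code; surf_pos is the interpolated position, surf_norm the interpolated unit normal, height the sample from the one-channel bump texture):

// Screen-space derivatives of the position give a per-pixel frame.
float3 vSigmaS = ddx(surf_pos);
float3 vSigmaT = ddy(surf_pos);
float3 vN = surf_norm;                      // interpolated unit normal
float3 vR1 = cross(vSigmaT, vN);
float3 vR2 = cross(vN, vSigmaS);
float fDet = dot(vSigmaS, vR1);             // determinant of the frame
// Screen-space derivatives of the sampled height; this is the
// gradient-instruction version that the fix below replaces.
float dBs = ddx(height);
float dBt = ddy(height);
// Surface gradient, then the perturbed normal.
float3 vSurfGrad = sign(fDet) * (dBs * vR1 + dBt * vR2);
float3 vBumpNorm = normalize(abs(fDet) * vN - vSurfGrad);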
Of course I copy-pasted the code from
Listing 1 to give it a try. After a few experiments and a suggestion from Morten in
this thread, I got it working by scaling the bump input value (a coefficient of 0.007 worked for me). The only thing that prevented me from merging this to trunk was moiré: it polluted the result pretty heavily. And again (thanks Morten!) the author suggested a fix. It was simply to take extra texture samples instead of using gradient instructions on the height:
float2 TexDx = ddx(IN.stcoord.xy);  // texcoord footprint of one pixel in screen x
float2 TexDy = ddy(IN.stcoord.xy);  // ...and in screen y
float Hll = height_tex.Sample(samLinear, IN.stcoord).x;                // height at this pixel
float dBs = height_tex.Sample(samLinear, IN.stcoord + TexDx).x - Hll;  // forward difference along screen x
float dBt = height_tex.Sample(samLinear, IN.stcoord + TexDy).x - Hll;  // forward difference along screen y
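These dBs/dBt slot straight into the surface-gradient step from the sketch above. Since the differences are linear in the height, scaling the height input is equivalent to scaling the differences, so the ~0.007 coefficient can be folded in right here (g_fBumpScale is my own name for it, not the paper's):

// Sampled differences replace ddx/ddy of the height; the bump scale
// (the ~0.007 coefficient mentioned above) is applied in the same step.
float3 vSurfGrad = sign(fDet) * g_fBumpScale * (dBs * vR1 + dBt * vR2);
float3 vBumpNorm = normalize(abs(fDet) * vN - vSurfGrad);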
Guess what? The moiré is gone! And I was able to compare the two methods, which look pretty much the same visually. The bump-mapping technique looks a bit crisper, though:
The end of the normal-mapping era?
P.S. The new method also runs a bit faster for me.
P.P.S. Added another view:
P.P.P.S. Added two builds to compare.