## Normal Maps: WIP


For the record, the below image is not yet correct, but here’s a first pass at normal mapping in LxEngine:

The tangent vectors are calculated by GLGeom. The algorithm I’m using is somewhat…custom. I haven’t seen an implementation similar to how I’m attempting it – and thus suspect that I’ve missed a key point or two. That’s okay, however – this is for fun and to learn something.

The basic approach is to define a function F(p,v) = dUV: a function that takes a point on the surface of the mesh and a direction vector and returns the change in UV at that point in that direction. If we constrain this function to be valid only at the vertices of the mesh (which is fine, since we’re only going to evaluate it at vertex points), then we can define F(p,v) as the weighted average of all the edges adjacent to the vertex at p – with the weights corresponding to the dot product between v and the direction of each edge. With F(p,v) defined, we can then find the tangent and bitangent by finding the directions v where F is at a local maximum. This can be done with simple calculus: where the derivative of a function is 0, the function is at a local maximum or minimum. Given that F(p,v) is a fairly simple weighted average, the derivative is not complex, and neither is solving for where it is 0. Thus we end up with the directions in which U increases most and V increases most…which, until I locate the flaw in my understanding of the problem, are theoretically the tangent and bitangent directions.
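The post doesn’t include the GLGeom code itself, so here’s a minimal sketch of one way to read the idea (in Python for brevity, not the actual GLGeom C++, and the function name and test data are mine). If F(v) is treated as a weighted *sum* over adjacent edges rather than a true weighted average, it is linear in v, and the unit direction maximizing its U (or V) component is simply the normalized du-weighted (or dv-weighted) sum of the edge directions:

```python
import math

def tangent_bitangent(edges):
    # edges: list of (unit edge direction, (du, dv)) pairs around one vertex.
    # Treating F(v) = sum_i dot(v, e_i) * dUV_i, which is linear in v, the
    # unit v maximizing the U component is the normalized sum of du_i * e_i
    # (and likewise dv_i * e_i for the bitangent).
    tu = [0.0, 0.0, 0.0]
    tv = [0.0, 0.0, 0.0]
    for e, (du, dv) in edges:
        for k in range(3):
            tu[k] += du * e[k]
            tv[k] += dv * e[k]
    def normalize(a):
        n = math.sqrt(sum(c * c for c in a))
        return tuple(c / n for c in a)
    return normalize(tu), normalize(tv)

# A flat vertex whose UVs track x and y exactly: the tangent should come
# out along +x and the bitangent along +y.
edges = [((1, 0, 0), (1, 0)), ((0, 1, 0), (0, 1)),
         ((-1, 0, 0), (-1, 0)), ((0, -1, 0), (0, -1))]
tangent, bitangent = tangent_bitangent(edges)
```

Whether this matches what GLGeom actually does (a true weighted average would give a rational function of v rather than a linear one) is exactly the kind of detail still being debugged.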

We’ll see. For now, I have compiling code and a screenshot. That’s enough for today. Testing and debugging will follow.

Written by arthur

April 13th, 2012 at 8:19 pm

Posted in lxengine

## The Math of Bump Mapping

Note: this post is also available as a wiki article on athile.net. The wiki article is more likely to be updated with corrections and additions!

### Preface

This post describes one approach to bump mapping in GLSL – and then dives into a detailed explanation of the math behind the code.

It is important to note that this is only one approach. There are methods that are more efficient and produce better visual results. However, the approach described here should provide a good bit of detail behind bump mapping so that, from here, a reader can understand the math better and choose the method that works best for their needs.

• The GLSL code is general, concise, and easily encapsulated into an independent function
• It does not require explicit tangents and binormals to be specified with the geometry
• It works with any height mapping function, not necessarily only texture maps
• It is relatively computationally expensive on the GPU; much of the work could be done at a higher level
• There are other approaches that produce smoother visual results

Let’s start with some completely uncommented code and then provide the explanation.

Assume the input variable “fragVertex” is the fragment position and “fragNormal” is the fragment normal. The coordinate system in which these are expressed does not matter as long as they are consistent. The “value” parameter should be the height value as sampled for that fragment.

vec3 bump_normal(vec3 fragVertex, vec3 fragNormal, float value)
{
    vec2 dV = vec2(dFdx(value), dFdy(value));

    vec3 dPdx = dFdx(fragVertex);
    vec3 dPdy = dFdy(fragVertex);

    vec3 dPdz = normalize(fragNormal);
    dPdy = normalize(cross(dPdz, dPdx));
    dPdx = normalize(cross(dPdy, dPdz));

    vec3 N = normalize(-dV.x * dPdx - dV.y * dPdy + dPdz);
    return N;
}

Now, let’s explain…

### Bump Mapping Perturbs Normals

The basic goal of bump mapping is to apply a height map to the surface of an object. This height map does not actually change the geometry of the object, but only perturbs the normal at that point of the surface as if the geometry were changed. This is computationally less expensive than actually changing the geometry but still yields a good visual effect.

Therefore, the goal is to compute the normal of the height map at the sample point, transform that into the coordinate system of the object, and use that normal as the surface normal in place of the “actual” geometric surface normal.

The first step is to find the normal at a given point on the height map…

### Find the Normal via Discrete Samples

I’m going to rush through this section assuming the reader is okay with the math being here. Leave a comment if more detail is desired.

If you imagine the height map as a texture, the normal at a point can be computed by creating a triangle at that point connecting it to its neighbors and computing the normal of that triangle.

In three-space, define three points where $F(x,y)$ is defined as the height map at $(x,y)$:

$P_0 = < x,y,F(x,y) >$
$P_1 = < x+1,y,F(x+1,y) >$
$P_2 = < x,y+1,F(x,y+1)>$

The normal of the surface is defined by the cross product of two perpendicular vectors lying along the surface, thus:

$$N = normalize( (P_1 - P_0) \times (P_2 - P_0))$$

(I’m assuming this is all pretty straightforward to the reader at this point.)

Another common approach would be to sample the neighbors in both directions, with the equation thus re-defined as:

$U = < x + 1, y, F(x+1,y) > - < x -1, y, F(x-1,y) >$
$V = < x, y+1, F(x,y+1) > - < x, y-1, F(x,y-1) >$
$N = normalize(U \times V)$
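As a concrete sketch of the central-difference version (in Python rather than GLSL; the `height_normal` helper is mine), this works for any callable height map $F$:

```python
import math

def height_normal(F, x, y):
    # U and V span the surface via central differences, then N = U x V.
    U = (2.0, 0.0, F(x + 1, y) - F(x - 1, y))
    V = (0.0, 2.0, F(x, y + 1) - F(x, y - 1))
    N = (U[1] * V[2] - U[2] * V[1],
         U[2] * V[0] - U[0] * V[2],
         U[0] * V[1] - U[1] * V[0])
    n = math.sqrt(sum(c * c for c in N))
    return tuple(c / n for c in N)

# A constant height map should give a normal pointing straight "up":
flat = height_normal(lambda x, y: 7.0, 5, 5)
```

For a ramp like $F(x,y) = x$, the same function returns a normal tilted back against the slope, as expected.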

### Find the Normal via a Derivative

Let’s now think of the height map as a continuous function $z = F(x,y)$ rather than necessarily a discrete texture. We’ll work through the steps to restate the above discrete sampling in terms of the derivatives of $F$.

Let’s go back to this definition of finding the height map normal:

$P_0 = < x,y,F(x,y) >$
$P_1 = < x+1,y,F(x+1,y) >$
$P_2 = < x,y+1,F(x,y+1)>$
$N = normalize( (P_1 - P_0) \times (P_2 - P_0))$

Step 1: Since we have a continuous function, let’s not move 1 unit in x and y, but rather small scalar deltas $dx$ and $dy$.

$P_0 = < x,y,F(x,y) >$
$P_1 = < x+dx,y,F(x+dx,y) >$
$P_2 = < x,y+dy,F(x,y+dy)>$
$N = normalize( (P_1 - P_0) \times (P_2 - P_0))$

Step 2: Substitute in for $P$ and simplify:

$N = normalize( (< x+dx,y,F(x+dx,y) > - < x,y,F(x,y) >) \times (< x,y+dy,F(x,y+dy)> - < x,y,F(x,y) >))$

The equation is a bit long, but simple:

$$N = normalize( < dx,0,F(x+dx,y) - F(x,y)> \times < 0, dy, F(x,y+dy) - F(x,y) > )$$

Step 3: Reform the equation a bit by multiplying the last components by $dx/dx$ and $dy/dy$, respectively:

$N = normalize( < dx,0, \frac{F(x+dx,y) - F(x,y)}{dx}dx> \times < 0, dy, \frac{F(x,y+dy) - F(x,y)}{dy}dy > )$

We do this so that we can use the limit definition of the derivative to state the equation in terms of derivatives. The derivative is defined as:

$\frac{dF}{dx}= \lim_{dx\rightarrow 0} \frac{F(x + dx) - F(x)}{dx}$

Thus our previous equation can be restated as:

$N = normalize( < dx,0, \frac{dF}{dx}dx> \times < 0, dy, \frac{dF}{dy}dy > )$

Thus we have the normal of an arbitrary height map function $F$ stated in terms of the partial derivatives of $F$.

Step 4: Simplify the math a bit further:

Multiplying the cross product out yields further simplifications:

$N = normalize( < dx,0, \frac{dF}{dx}dx> \times < 0, dy, \frac{dF}{dy}dy > )$
$N = normalize( < -dy \frac{dF}{dx}dx, -dx \frac{dF}{dy}dy, dx dy > )$

Since the vector is being normalized, we can safely scale each component by the same factor $\frac{1}{dx dy}$

$N = normalize( \frac{1}{dx dy} < -dy \frac{dF}{dx}dx, -dx \frac{dF}{dy}dy, dx dy > )$
$N = normalize( < -\frac{dF}{dx}, -\frac{dF}{dy}, 1 > )$

Voila! We have a simple equation stating the height map normal in terms of partial derivatives:

$$N = normalize( < -\frac{dF}{dx}, -\frac{dF}{dy}, 1 > )$$
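As a quick numeric sanity check (a Python sketch; both function names are mine), the simplified derivative form should agree with the small-delta cross-product form it was derived from:

```python
import math

def normal_from_gradient(dFdx, dFdy):
    # The final result: N = normalize(< -dF/dx, -dF/dy, 1 >)
    n = math.sqrt(dFdx * dFdx + dFdy * dFdy + 1.0)
    return (-dFdx / n, -dFdy / n, 1.0 / n)

def normal_from_cross(F, x, y, d=1e-5):
    # The pre-simplification form: < d, 0, dz_x > x < 0, d, dz_y >, normalized.
    u = (d, 0.0, F(x + d, y) - F(x, y))
    v = (0.0, d, F(x, y + d) - F(x, y))
    N = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    n = math.sqrt(sum(c * c for c in N))
    return tuple(c / n for c in N)

F = lambda x, y: 0.3 * x + 0.1 * y * y   # dF/dx = 0.3, dF/dy = 0.2 * y
exact = normal_from_gradient(0.3, 0.4)   # evaluated at (x, y) = (1, 2)
approx = normal_from_cross(F, 1.0, 2.0)
```

The two results should match to within the finite-difference error of the delta `d`.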

### Transforming the Normal to the Correct Coordinate System

The equation:

$$N = normalize( < -\frac{dF}{dx}, -\frac{dF}{dy}, 1 > )$$

yields the normal in terms of the x and y of the flat planar surface of the 2D function $F(x,y)$. Stated differently, z=1 means “up” in the coordinate system of $F$, but we need to transform this to the surface of the object so that “up” is in the direction of the surface’s normal prior to any bump mapping.

We’ll do this transformation via derivatives as well.

### Transformation via Derivatives

Let’s define an arbitrary space $A$, whose axes we will label $x,y,z$. It could represent eye-coordinates, object coordinates, or any other orthonormal coordinate system. Let’s define a second coordinate system $B$ with axes $u,v,w$.

Let’s now see how we can express the axes u,v,w in terms of $A$. Let’s take an identity function $G(x,y,z)$ which takes coordinates from space $A$ and returns them exactly:

$G(< x,y,z >) = < x,y,z >$

Now, let’s take the derivative with respect to $u$:

$\frac{dG}{du} = \frac{d < x, y, z >}{du}$

$\frac{dG}{du} = < \frac{dx}{du}, \frac{dy}{du}, \frac{dz}{du} >$

In other words, the derivative of the identity function in $A$ by an axis in $B$ is equal to that axis of $B$ expressed in terms of the coordinate system of $A$.

So! That sounds a bit complicated, but a concrete example should clarify what this implies:

If we have a function $P$ that is equal to a position in, say, eye-space, and a set of functions that compute derivatives in another space, say window coordinates (which we’ll call $\frac{d}{du}$, $\frac{d}{dv}$, and $\frac{d}{dw}$), then the axes of that second coordinate system can be expressed in terms of the first via the following:

$U = \frac{dP}{du}$
$V = \frac{dP}{dv}$
$W = \frac{dP}{dw}$

(The mapping is quite straightforward when you think of the above as, “How does the position in eye-coordinates change as we move along the window coordinate axis u?” Then ask the same for v and w.)

…and thus transformations can be expressed as:

$N' = N_u U + N_v V + N_w W$

$N' = N_u \frac{dP}{du} + N_v \frac{dP}{dv} + N_w \frac{dP}{dw}$

Transformation of N from u,v,w to x,y,z space thus can be defined as:

$$N_{x,y,z} = N_u \frac{dP}{du} + N_v \frac{dP}{dv} + N_w \frac{dP}{dw}$$
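In code, that transformation is just a weighted sum of the basis vectors. Here is a tiny Python sketch (the function name and example values are mine, purely illustrative):

```python
def transform_normal(N, U, V, W):
    # N' = N_u * U + N_v * V + N_w * W, where U, V, W are the source-space
    # axes expressed in the target space.
    return tuple(N[0] * U[k] + N[1] * V[k] + N[2] * W[k] for k in range(3))

# With the identity basis, the normal is unchanged:
same = transform_normal((0.1, 0.2, 0.97), (1, 0, 0), (0, 1, 0), (0, 0, 1))

# With the u and v axes swapped, the first two components swap:
swapped = transform_normal((0.1, 0.2, 0.97), (0, 1, 0), (1, 0, 0), (0, 0, 1))
```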

### Back to the GLSL Fragment Shader

That was a lot of math. Let’s get back to the simple half dozen lines of GLSL code to implement bump mapping.

Step 1: Compute the height map normal in window coordinates

vec2 dV = vec2(dFdx(value), dFdy(value));

The first step of the shader is to compute the partial derivatives of the height value. These values of partial derivatives are with respect to window coordinates. GLSL computes the partial derivative approximations by looking at the values in neighboring fragments and computing the deltas; thus the coordinate system of these derivatives is window coordinates.

This gives us enough information to compute the height map normal in window coordinates:

$$N = normalize( < -\frac{dF}{dx}, -\frac{dF}{dy}, 1 > )$$

Step 2: Compute the window coordinate axes in the desired coordinate system.

We now need the normal in the same coordinate system as the fragment shader is primarily operating in.

The fragment position in that coordinate system acts as an identity function for that coordinate system. Thus, the partial derivatives of that function in window coordinates yield the window coordinate axes expressed in the coordinate system of the fragment position!

vec3 dPdx = dFdx(fragVertex);
vec3 dPdy = dFdy(fragVertex);

But, of course, there’s no dFdz GLSL function. However, we’re trying to create an orthogonal basis; therefore dPdz must be perpendicular to the other two axes and thus can be found by a cross product:

vec3 dPdz = cross(dPdx, dPdy);

Step 3: Transform the normal

From the equations above:

$$N_{u,v,w} = normalize( < -\frac{dF}{du}, -\frac{dF}{dv}, 1 > )$$

and

$$N_{x,y,z} = N_u \frac{dP}{du} + N_v \frac{dP}{dv} + N_w \frac{dP}{dw}$$

therefore:

$$N_{x,y,z} = -\frac{dF}{du} \frac{dP}{du} -\frac{dF}{dv} \frac{dP}{dv} + \frac{dP}{dw}$$

and written as GLSL using the variables defined in the prior steps:

vec3 N = normalize(-dV.x * dPdx - dV.y * dPdy + dPdz);

Compute the height map normal and then transform it using partial derivatives.

We’re done! We have the height map normal from an arbitrary function $F$ re-expressed in the coordinate system of the vertex position!

But wait, one more thing…

The above math is completely correct given the surface normal as determined by the vertex position. However, the surface normal usually isn’t determined solely by the positions of the vertices.

Remember smooth versus flat shading? Smooth shading is already layering a normal perturbation “approximation” of more complex geometry than is really there. By using $dPdx$ and other associated variables, the height map normal is being applied relative to the flat surface of the triangle. Therefore, where there is no indentation or bump (i.e. the height map function is constant), the result will be flat shading. This is not what is desired.

Therefore, we actually have to transform the height map normal into the space of the “pseudo-surface” defined by the smooth normal. Given all the math we’ve already been through, we’re going to throw a solution out there without much of an explanation. There’s still one chunk of code from the middle of the function that hasn’t been mentioned yet:

dPdz = normalize(fragNormal);
dPdy = normalize(cross(dPdz, dPdx));
dPdx = normalize(cross(dPdy, dPdz));

Essentially, all this does is rebuild the window-coordinate-to-fragment-coordinate basis, rotated so that z = 1 maps to fragNormal rather than to the implied “flat” normal of the triangle. Since the smooth and flat normals should be relatively similar, this technique works correctly. Maybe someday I’ll post on how to improve that last little hack to be more mathematically accurate, but for now, this post is already way too long!
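A Python transliteration of those three shader lines (the helper names are mine) makes it easy to verify that the rebuilt basis really is orthonormal with fragNormal as its z axis:

```python
import math

def rebuild_basis(dPdx, fragNormal):
    def normalize(a):
        n = math.sqrt(sum(c * c for c in a))
        return tuple(c / n for c in a)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    dPdz = normalize(fragNormal)          # z axis = the smooth normal
    dPdy = normalize(cross(dPdz, dPdx))   # y re-derived, perpendicular to z
    dPdx = normalize(cross(dPdy, dPdz))   # x re-derived, perpendicular to both
    return dPdx, dPdy, dPdz

# A tilted smooth normal, with a screen-space x axis that is not
# perpendicular to it:
bx, by, bz = rebuild_basis((1.0, 0.0, 0.0), (0.0, 0.6, 0.8))
```

All three pairwise dot products come out (numerically) zero, and the z axis is exactly the normalized fragNormal.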

There you have it: bump mapping!

Written by arthur

September 19th, 2011 at 4:39 pm

Posted in lxengine

## General Update

About three weeks have passed since the last update; here’s a quick brain dump of some of what I’ve been working on:

### GLGeom

Inspired by the OpenGL Mathematics (GLM) library, I decided to factor out the lxcore “graphics math” code.  The factored-out code fell into two buckets: code that can be replaced by GLM and code that has been moved into GLGeom, a new library I’ve added to the LxEngine codebase that builds functionality on top of GLM.   GLM contains a lot of useful base functionality for manipulating vectors, matrices, and quaternions – and at multiple precisions.

GLGeom builds on GLM and tries – for the most part – to follow its conventions.  Here’s some of what it adds:

• point and vector specializations of glm::vec (i.e. strict typing to differentiate a point from a directed length)
• color specialization
• radian and degree specializations (i.e. strict typing to avoid simple mismatch unit errors)
• ray, line segment, line, curve primitives
• plane, sphere, cone primitives
• bounding box, bounding sphere primitives
• intersection functions
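As a loose illustration of the point/vector strict-typing idea (in Python here, not GLGeom’s actual C++ templates), the type system can make “point + point” a hard error while “point - point” naturally yields a directed length:

```python
class Vector:
    """A directed length."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class Point:
    """A location; distinct from Vector at the type level."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __sub__(self, other):
        # point - point = the directed length between them, i.e. a Vector
        if isinstance(other, Point):
            return Vector(self.x - other.x, self.y - other.y, self.z - other.z)
        return NotImplemented
    def __add__(self, other):
        # point + vector = a new point; point + point is a type error
        if isinstance(other, Vector):
            return Point(self.x + other.x, self.y + other.y, self.z + other.z)
        return NotImplemented
```

In C++ the same mismatch is caught at compile time rather than at runtime, which is the real payoff of the approach.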

Again, the intention is to be completely inter-operable with (i.e. complement, not replace) the OpenGL Mathematics library.

In addition to utilizing the excellent work done in GLM, the construction of GLGeom was also motivated by a desire to slice off a smaller piece of the very ambitious LxEngine project; within that smaller project, a higher degree of quality is likely achievable in a shorter amount of time (completeness, documentation, testing, benchmarking, etc.).  Having a “showcase” smaller sub-project like this should help define what strategies work best for development.  What works there can then be transferred over to the larger LxEngine project as a whole.

### To Be Precise

A small, tangential comment in relation to GLGeom: I’ve posted on precision before, but in setting up the initial benchmarks and unit tests for GLGeom, I was once again surprised at the limits of precision in floating-point arithmetic.  In simple ray-sphere intersection tests, it’s quickly evident how much accuracy can be lost in a series of operations on mixed large and small floats.  It immediately made me want to dig into possible fixed-point math experiments and large-scene spatial indexing techniques to boost precision without having to jump to double-precision math (and storage).

If I reached one other conclusion, it was to define a precision goal for LxEngine: any object that can be contained within a cubic kilometer should have intersection accuracy to the nearest millimeter.  Eventually, spatial indexing should allow for sub-spaces such that the units themselves do not matter; the generalized goal is accuracy to 1 part in 1,000,000.
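To make the scale of the problem concrete, here’s a small Python sketch (the `f32` helper is mine) that rounds doubles to 32-bit floats the way GPU math or a float-based library would store them. A kilometer expressed in millimeters is 1,000,000 units, and at that magnitude the spacing between adjacent 32-bit floats is already 0.0625, so sub-millimeter deltas vanish outright:

```python
import struct

def f32(x):
    # Round a Python double to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

big = f32(1_000_000.0)            # one kilometer, in millimeters
lost = f32(big + 0.001) == big    # a 0.001 mm step is not representable
```

This is exactly the 1-part-in-1,000,000 boundary: float32 carries about 7 significant decimal digits, so the millimeter-in-a-kilometer goal sits right at the edge of what single precision can express, before any arithmetic error accumulates.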

### Ruby on Rails

Though not directly related to LxEngine, I’ve also been experimenting with Ruby on Rails.  I know very little about either so this is a good look at a “new to me” technology to keep my mind open to alternate approaches to problem solving.

The biggest idea sticking with me thus far is the “coding by convention” concept of Rails: basically having a large set of logical assumptions (enforced by convention) about how things fit together rather than manually configuring all the wiring between parts of the application.

This is not something I knew there was an explicit term for; however, it has always been a goal of LxEngine.  It should just work out of the box.  Support incremental, live development – no need to code from scratch.  Provide logical stub functionality.  Be user-friendly to the developer as well as the user.  Etc.

### General Thoughts on Goals for LxEngine

• Exact, testable guarantees on precision (i.e. accurate to the millimeter on objects that span no more than one cubic kilometer)
• MVC model, similar to Ruby on Rails
• Coding by Convention, similar to Ruby on Rails
• Incremental, not from-scratch development
• Live testing, developer-friendly mode (auto-reload scripts, plug-ins, etc.)
• Instructive defaults (i.e. stubs rather than failures)
• Write it once: no duplication between dependencies
• Unit test everything
• Automated benchmarks
• Domain Specific Languages
• Transaction based, with revision history
• Clear demarcation between persistent and transient operations
• Core, Extension, Prototype, Utility sub-sections of each library
• Data source abstraction / transparent serialization
• Person friendly development (e.g. use names, not UUIDs)
• Procedural generation at system limits (e.g. new content rather than boundaries)
• Development time is more important than run-time

Written by arthur

May 4th, 2011 at 7:45 pm

## Math Texts

Below is a link to an excellent set of texts on mathematics from William Chen.  Topics range from “Elementary Mathematics” to more advanced topics like “Distribution of Prime Numbers”.

Written by arthur

June 2nd, 2010 at 11:36 am