## Normal Maps: WIP


For the record, the below image is not yet correct, but here’s a first-pass at normal mapping in LxEngine:

The tangent vectors are calculated by GLGeom. The algorithm I’m using is somewhat…custom. I haven’t seen an implementation similar to how I’m attempting it – and thus suspect that I’ve missed a key point or two. That’s okay, however – this is for fun and to learn something.

The basic approach is to define a function F(p,v) = dUV: a function that takes a point on the surface of the mesh and a direction vector and returns the change in UV at that point in that direction. If we constrain this function to only be valid at vertices of the mesh (which is fine, since we’re only going to use this function at vertex points), then we can define F(p,v) as the weighted average of all the edges adjacent to the vertex at p – with the weights corresponding to the dot product between v and the direction of that edge. With F(p,v) defined, we can then find the tangent and bitangent by finding the directions v where F is at a local maximum. This can be done by simple calculus: where the derivative of a function is 0, the function is at a local maximum or minimum. Given that F(p,v) is a fairly simple weighted average, the derivative is not complex, and solving for where it is 0 is not complex either. Thus we end up with the directions in which U increases most and V increases most… which, until I locate the flaw in my understanding of the problem, are theoretically the tangent and bitangent directions.
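To make that concrete, here’s a minimal sketch – not the GLGeom code, and the names and exact weighting are my assumptions: if F(v) is the dot-product-weighted sum of per-edge UV changes, setting its derivative with respect to v to zero means the maximizing unit direction is simply the dUV-weighted sum of the edge directions.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the described approach, not the GLGeom implementation.
// Each edge adjacent to the vertex contributes its unit direction plus the
// change in U and V along it.  With F(v) = sum_e (v . dir_e) * dU_e, setting
// the derivative with respect to v to zero gives a maximizing direction
// proportional to sum_e dU_e * dir_e (and likewise for V).
struct Vec3 { float x, y, z; };

struct EdgeSample
{
    Vec3  dir;    // unit direction of the edge leaving the vertex
    float dU;     // change in U along the edge
    float dV;     // change in V along the edge
};

static Vec3 normalized(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

void computeTangentBasis(const EdgeSample* edges, int count,
                         Vec3& tangent, Vec3& bitangent)
{
    Vec3 t { 0, 0, 0 };
    Vec3 b { 0, 0, 0 };
    for (int i = 0; i < count; ++i)
    {
        t.x += edges[i].dU * edges[i].dir.x;
        t.y += edges[i].dU * edges[i].dir.y;
        t.z += edges[i].dU * edges[i].dir.z;
        b.x += edges[i].dV * edges[i].dir.x;
        b.y += edges[i].dV * edges[i].dir.y;
        b.z += edges[i].dV * edges[i].dir.z;
    }
    tangent   = normalized(t);
    bitangent = normalized(b);
}
```

For a flat, axis-aligned patch – edges along +x carrying all the U change, edges along +y carrying all the V change – this yields the expected tangent (1,0,0) and bitangent (0,1,0).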

We’ll see. For now, I have compiling code and a screenshot. That’s enough for today. Testing and debugging will follow.

Written by arthur

April 13th, 2012 at 8:19 pm

Posted in lxengine

## What I’ve been working on today…

### Created a stub for lxcore.lib

Currently, LxEngine is composed of the following:

• glgeom (header-only library of math functions)
• lxengine.lib (where the vast majority of the code is)
• lxengineapp.exe (a small front-end that drives the code in lxengine.lib)

What I’d like it to be is:

• glgeom (same as before)
• lxcore.lib (low-level, standalone functions and data types from lxengine.lib)
• lxframework.lib (the higher-level framework of Engine, Document, View, Element, classes)
• lxengine.exe (slightly larger front-end that drives the framework in a flexible, configurable form)

That’s a bit of an oversimplification, since there are also extension libraries like lxrasterizer.lib and plug-ins like soundAL.dll, which exist now and will continue to exist.

The general point, however, is that I want to make the framework “leaner.”  Right now it has too many specific features (mostly from early development, when things weren’t very modular) when it should really be a rather bare but flexible MVC framework.  Those features belong in plug-ins and extensions. Even the stand-alone utility functions belong elsewhere (lxcore.lib). The end goal is a smaller framework, which is much better: frameworks are useful – right up until the point they become bloated and hard to understand (and thus defect-prone, with steep learning curves).

### GLGeom Error Handling

Long story short: all this time, I’ve been using “assert(0)” as an error handling mechanism in GLGeom to keep from pulling in any dependencies.  That’s not a very sound approach…

So I’ve added a simple but flexible error handling mechanism instead: docs here.
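The linked docs have the real details; as a hedged sketch, a header-only library can stay dependency-free by routing errors through a user-replaceable handler rather than calling assert(0) directly (the names below are placeholders, not GLGeom’s actual API):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>

namespace glgeom_sketch
{
    using ErrorHandler = std::function<void(const std::string&)>;

    // Single mutable handler with a sensible default: throw.
    inline ErrorHandler& errorHandler()
    {
        static ErrorHandler handler = [](const std::string& msg) {
            throw std::runtime_error(msg);
        };
        return handler;
    }

    inline void setErrorHandler(ErrorHandler h) { errorHandler() = std::move(h); }

    // Library code calls this instead of assert(0).
    inline void error(const std::string& msg) { errorHandler()(msg); }
}
```

A host application can then install its own handler – log, throw, or break into the debugger – without the library taking on any dependencies of its own.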

### GLGeom Compute Primitive Buffer Adjacency Info

Still a work-in-progress: I’m trying to port over some old code that is pretty fast at computing adjacency from a polygon soup.  This is also stressing GLGeom from a new angle, which is helping me fill in the blanks on some of the functionality in this currently-version-0.0.5 library.
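Not the ported code, but the usual starting point for adjacency over a polygon soup is a map from each undirected edge to the faces that share it – a sketch for a triangle soup:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Sketch: indices holds 3 vertex indices per triangle.  Keying each edge by
// its sorted vertex pair makes (a,b) and (b,a) land in the same bucket, so
// shared edges collect both of their faces.
std::map<std::pair<uint32_t, uint32_t>, std::vector<uint32_t>>
computeEdgeFaces(const std::vector<uint32_t>& indices)
{
    std::map<std::pair<uint32_t, uint32_t>, std::vector<uint32_t>> edgeFaces;
    for (uint32_t f = 0; f < indices.size() / 3; ++f)
    {
        for (int e = 0; e < 3; ++e)
        {
            uint32_t a = indices[f * 3 + e];
            uint32_t b = indices[f * 3 + (e + 1) % 3];
            if (a > b) std::swap(a, b);              // undirected edge key
            edgeFaces[{ a, b }].push_back(f);
        }
    }
    return edgeFaces;
}
```

Edges with two faces are interior, edges with one face are boundary, and anything with more than two faces flags a non-manifold mesh.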

Better get back to working on it…

Written by arthur

April 12th, 2012 at 10:13 am

## Wireframe Overlay

Added a new effect to Tutorial 5: a single-pass surface and wireframe rendering effect. Note how the wireframe is anti-aliased, fades into the surface color, and displays without z-fighting.

The effect derives directly from the shader in the previously mentioned OpenGL 4.0 Shading Cookbook. The implementation there, according to the author, derives from the one presented in this nVidia whitepaper. I won’t go into the details of the effect (since the explanation is available both in the whitepaper and the book) and will only briefly comment on the implementation.

In short, it uses a geometry shader to compute the distance of each fragment from each edge of each triangle. The fragment shader then uses those distances to determine whether to use the surface shading color or the wireframe edge color. A mix() call is used rather than a discrete choice in order to antialias the edges. Because it’s a single-pass shader, there’s no chance of z-fighting.
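As a CPU-side sketch of that blend (the real version lives in the fragment shader, and the distances come from the geometry shader), the fragment’s minimum distance to the three triangle edges drives a smooth ramp from wire color to surface color:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between.
float smoothstep01(float edge0, float edge1, float x)
{
    float t = std::min(std::max((x - edge0) / (edge1 - edge0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Returns the mix() factor: 0 = pure wireframe color, 1 = pure surface color.
// The ramp around lineWidth is what antialiases the edge.
float wireBlend(float d0, float d1, float d2, float lineWidth)
{
    float d = std::min(d0, std::min(d1, d2));
    return smoothstep01(lineWidth - 1.0f, lineWidth + 1.0f, d);
}
```

The one-pixel-wide transition band (lineWidth − 1 to lineWidth + 1) is an assumption for illustration; the book and whitepaper choose their own ramp.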

### Setting the “ViewportMatrix”

A quick note since the OpenGL 4.0 Shading Cookbook does not explain how to set up the “ViewportMatrix”. It’s pretty simple, but just to clarify, here’s the code to set up the viewport matrix:

```cpp
GLint viewport[4];
gl->getIntegerv(GL_VIEWPORT, viewport);

float halfWidth  = float(viewport[2]) / 2.0f;
float halfHeight = float(viewport[3]) / 2.0f;

glm::mat4 m(
    glm::vec4(halfWidth, 0.0f,       0.0f, 0.0f),
    glm::vec4(0.0f,      halfHeight, 0.0f, 0.0f),
    glm::vec4(0.0f,      0.0f,       1.0f, 0.0f),
    glm::vec4(halfWidth, halfHeight, 0.0f, 1.0f)
);

gl->uniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(m));
```

The principle is quite simple. Prior to the viewport transformation, OpenGL coordinates are normalized device coordinates, ranging from -1 to 1 in x, y, and z. These values get scaled by half the width/height and offset by half the width/height. This maps -1 to 0 (e.g. -halfWidth + halfWidth = 0) and 1 to the full width (halfWidth + halfWidth = width). The z value is left unchanged by this matrix, since only the window-space x and y are needed for the edge-distance computation.
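In other words, each window coordinate is just an affine function of the normalized device coordinate – a one-line sanity check:

```cpp
#include <cassert>

// The viewport matrix above applies x_window = halfExtent * x_ndc + halfExtent
// (and likewise for y), mapping NDC [-1, 1] onto [0, extent].
float ndcToWindow(float ndc, float halfExtent)
{
    return halfExtent * ndc + halfExtent;
}
```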

### One More Image…

One last image of a slightly tweaked version of the shader that fades the wireframe intensity based on the diffuse intensity. Also, snow.

Written by arthur

April 10th, 2012 at 7:06 pm

Posted in lxengine

## Snow (i.e. Point Sprites)

I added a quick, cheap attempt at snow to Tutorial 5. The primary purpose is to demonstrate point sprites.

Below is a quick vimeo clip of the results:

### What and What’s Next

The effect is achieved by sending a point list (GL_POINTS) of about 800 points and using a geometry shader to create a screen-aligned, texture-mapped, alpha-blended quad for each point.

An extremely simple particle system is in effect here that resets the z position of the sprite when it falls below 0 (I feel like it’s almost an abuse of language in general to use a specialized term like “particle system” for as simple an update loop as that!). The point sprite shader derives directly from the code provided in OpenGL 4.0 Shading Language Cookbook by David Wolff (which I recommend, by the way – it’s a pretty decent book if you’re like me and hadn’t paid a lot of attention to what’s been happening since GLSL first was introduced).
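The update loop really is that small – roughly this (a sketch, with an arbitrary ceiling height standing in for whatever the tutorial actually uses):

```cpp
#include <cassert>
#include <vector>

struct Flake { float x, y, z, speed; };

// Each flake falls at its own speed; once it drops below the ground plane it
// wraps back up to the top, keeping the sub-frame offset so motion stays smooth.
void updateSnow(std::vector<Flake>& flakes, float dt, float ceiling)
{
    for (auto& f : flakes)
    {
        f.z -= f.speed * dt;
        if (f.z < 0.0f)
            f.z += ceiling;
    }
}
```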

Next, it’d be great to get the “Utah sky model” working in the tutorial – if I still had my copy of More OpenGL Game Programming around (which is a pretty good book, but one I don’t endorse too highly – too much API reference material and other information easily found on the web is needlessly listed) or still had a link to the web page that describes the sky shader code that was later included in the book. Or maybe I could just do a little reading here to add some nice sky effects…

Oh, and adding some first-person navigation and physics would be pretty nice as well if I get back into working with Bullet (I’m not sure I really liked the design I used with the LxMorrowind sample to implement physics – and there were some bugs in there that I never quite understood).

Another minor OpenGL lesson learned during the implementation: glDepthMask() affects calls to glClear(). That makes sense, but I wish I had thought of it before spending a while in the debugger wondering why my shader wasn’t working.

Written by arthur

April 6th, 2012 at 5:26 pm

Posted in lxengine

## Pixelation Effect

I recently “borrowed” a pixelation effect posted some time ago by compsci89 to the Irrlicht forums. While it seems counter-intuitive on some level to write an effect that intentionally reduces pixel resolution and color depth, I like the effect:

The effect, modified from the original form to do non-uniform color reduction and a mix of pixelation and actual value, is now in the LxEngine code base and used in the work-in-progress Tutorial 5 code.

It does make me feel good about the state of LxEngine that adding a straightforward effect like this took about 2 minutes.

Written by arthur

April 4th, 2012 at 8:44 pm

Posted in lxengine

## Catmull-Clark Subdivision

### Update

The defect mentioned before in the Catmull-Clark subdivision code has been corrected. A cube now correctly approaches a sphere with repeated subdivision.

Ah, much better…

A 5-level subdivision of a cube rendered in wireframe mode

The changes were twofold:

1. The algorithm does require quads to be created rather than triangles (not surprising), but the change to quads didn’t work until I realized…
2. …the creation of the new face quads must start on an “edge point”; I was originally creating my quads starting with an original vertex – thus ‘rotating’ the quads created from the original face by one vertex, which significantly changes the result.

Both of those changes are “obvious” in retrospect, since they’re how the Catmull-Clark algorithm is described; however, that second bit was not an easy one to pick out from scanning the code.
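For reference (not the blog’s Javascript code), the other half of the algorithm – repositioning an original vertex – uses the standard Catmull-Clark rule P′ = (F + 2R + (n − 3)P) / n, where F averages the adjacent face points, R averages the adjacent edge midpoints, and n is the vertex valence:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Standard Catmull-Clark update for an original vertex P of valence n:
//   P' = (F + 2R + (n - 3) * P) / n
// F = average of the n adjacent face points,
// R = average of the n adjacent edge midpoints.
V3 updatedVertex(const V3& F, const V3& R, const V3& P, int n)
{
    float nf = float(n);
    return {
        (F.x + 2.0f * R.x + (nf - 3.0f) * P.x) / nf,
        (F.y + 2.0f * R.y + (nf - 3.0f) * P.y) / nf,
        (F.z + 2.0f * R.z + (nf - 3.0f) * P.z) / nf
    };
}
```

A quick sanity check: if F, R, and P all coincide, the vertex doesn’t move.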

### Original Post

I’ve been working on my Javascript implementation of a half-edge based mesh that supports Catmull-Clark subdivision. The result is below for three levels of subdivision of a cube – and obviously not yet correct. (A cube should subdivide towards a sphere, not the distorted blob below.)

### What’s wrong with this picture?

I suspect the problem may be that I am creating triangles rather than quads when subdividing the faces. That would create different updated vertex positions upon multi-level subdivisions, I suspect (the math to prove it one way or the other is certainly possible, but I haven’t attempted that).

The whole experience definitely has me tossing around the pros & cons of implementing functionality in Javascript versus C++. There’s a lot to be said for both, and I can’t make up my mind if implementing subdivision and half-edge meshes in Javascript was for the better or worse. (Well, actually I know it was for the better in this instance, for the very reason that it’s shown me numerous advantages and disadvantages – but I’m thinking of next time.)

Catmull-Clark subdivision is supposed to be part of Tutorial 4, though I wonder why in the world I’ve tried to cram so much new functionality into one new tutorial!

Written by arthur

March 30th, 2012 at 7:43 pm

## Rambling Thoughts on a Renderable Class

I’m using this blog post as a brain dump of thoughts on future work for LxEngine…

### Instance

After the material system refactoring, it seems that the Instance class may need a more complex encapsulating class.

The Instance class currently contains a set of data about one logical “instance” of geometry in the scene. It must contain a shared pointer to the actual geometry to render. It then can optionally contain a shared pointer to a material, transformation, camera, light set, and bounding box information; if any of these items is missing, it simply uses a reasonable default (i.e. default material for the pass, identity transformation, scene camera, scene light set, infinite bounds, etc.), but this easily allows per-entity information to be specified.
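A sketch of that shape (member names other than Instance are guesses for illustration): the geometry is required, and everything else is an optional shared pointer whose default is resolved at render time.

```cpp
#include <cassert>
#include <memory>

// Placeholder types standing in for the real rasterizer classes.
struct Geometry {};
struct Material {};

struct Instance
{
    std::shared_ptr<Geometry> spGeometry;   // required
    std::shared_ptr<Material> spMaterial;   // optional: null means "use default"
};

// At render time, a missing member simply falls back to the pass default.
const Material& resolveMaterial(const Instance& inst, const Material& passDefault)
{
    return inst.spMaterial ? *inst.spMaterial : passDefault;
}
```

The same null-means-default pattern would extend to the transformation, camera, light set, and bounds.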

### Renderable

The Renderable would be an encapsulating class of Instance. In its simplest form, a Renderable would contain just an Instance. Nothing special.

However, the Renderable might have the ability to store a set of Instances. For example, an entity might want to use a simplified mesh for shadow rendering versus a higher-res mesh for display. The Renderable would contain a “cache” of instances which might store different Instances for different circumstances. (Or maybe just a cache of different Geometry objects?) Similarly, it might have a spot to store dynamically generated LODs.

It might also contain a JSON object of “rendering properties.” These would be loosely typed properties that might vary from application to application. For example, “shadows : { cast : true, receive : false }” might be a valid property. This would only make sense to an application supporting shadows – so it is not part of the core rendering system, but it does have a logical place on the Renderable.

The long and short of it would be that Renderable would be a dynamic, extensible class that allows for different Instances and configurations depending on the rendering circumstances. The Renderable would interact with the “higher-level” rendering algorithm (LOD, rendering pass, etc.) while the Instance would describe the specific what-needs-to-be-rendered-now data.

Written by arthur

March 24th, 2012 at 8:44 pm

## Material System

Work continues on the LxEngine material system…

The above is a simple “toon” shader on the Suzanne model. The shader code is based on the simple example provided at LightHouse3d, but bases the color on a 1D texture look-up into a slightly blurred color texture rather than discrete if-else statements.

### The Video

Here’s a quick video of some of the material effects:

### The Code

At the highest-level, the implementation of the new shader is very simple. The new shader is defined by creating a new directory “ToonSimple” in the materials sub-directory of the media directory. This directory contains a vertex shader, a fragment shader, and JSON parameters description.

The material is then loaded in the C++ code via a call to

```cpp
pRasterizer->acquireMaterial("ToonSimple")
```

and attached to the Instance’s spMaterial member. LxEngine handles all the shader loading and parameter activation.

On to the details…

The vertex shader code is quite simple and uses a fixed light direction:

```glsl
uniform mat4    unifProjMatrix;
uniform mat4    unifViewMatrix;
uniform mat3    unifNormalMatrix;

in      vec3    vertNormal;

varying out float fragIntensity;

void main()
{
    // Keep it simple and use a fixed light direction
    vec3 lightDir = vec3(.5, -.5, 1.0);

    // The fragIntensity is effectively just the intensity of the diffuse
    // value from the Phong reflection model.
    //
    fragIntensity = dot(normalize(lightDir), unifNormalMatrix * vertNormal);

    gl_Position = unifProjMatrix * unifViewMatrix * gl_Vertex;
}
```

The uniforms – unifProjMatrix, unifViewMatrix, unifNormalMatrix – are all “standard” LxEngine names, so the engine will automatically set the correct matrix values when activating the shader. Likewise with the attribute vertNormal; it too will be set automatically by the existing engine code. (This will be explained momentarily.)

The fragment shader is quite simple:

```glsl
#version 150
#extension GL_ARB_explicit_attrib_location : enable

uniform sampler1D unifTexture0;

in      float fragIntensity;

layout(location = 0) out vec4 outColor;

void main()
{
    outColor = texture(unifTexture0, fragIntensity);
}
```

Now the fragment shader does have an interesting detail: the uniform unifTexture0 is not a “standard” LxEngine uniform. (How could it be? The transformation matrices are common to many shaders, as are properties like the geometry’s normals, but is a texture map ever going to be “standard” enough that the engine would know what to set?)

This is a custom uniform, but it still does not require any C++ code for the engine to set its value properly. We’ll get to that momentarily.

### Automatically setting the shader variables

The automatic setting of uniforms and attributes is done via calls to getActiveUniform and getActiveAttrib after the GLSL program is compiled. The MaterialClass class wraps the GLSL program and provides iteration functions that exemplify the use of these OpenGL calls:

```cpp
void
MaterialClass::iterateUniforms (std::function<void(const Uniform& uniform)> f)
{
    int uniformCount;
    gl->getProgramiv(mProgram, GL_ACTIVE_UNIFORMS, &uniformCount);
    for (int i = 0; i < uniformCount; ++i)
    {
        Uniform uniform;
        char    uniformName[128];
        GLsizei uniformNameLength;

        gl->getActiveUniform(mProgram, GLuint(i), sizeof(uniformName), &uniformNameLength, &uniform.size, &uniform.type, uniformName);

        if (uniformNameLength >= sizeof(uniformName))
        {
            throw lx_error_exception("GLSL program contains a uniform with too long a name size!");
        }
        else
        {
            uniform.name = uniformName;
            uniform.location = gl->getUniformLocation(mProgram, uniformName);
            f(uniform);
        }
    }
}
```

The LxEngine internal rasterizer code, after compiling a GLSL shader for the first time, will iterate over the uniforms and attributes to generate a set of values that need to be set whenever that material is made active. The set of “instructions” necessary to set those values is encapsulated in a std::vector<std::function<void()>> – which, in effect, allows a sort of dynamic code generation at the expense of a bit of overhead to the std::function calls. The flexibility and simplicity definitely win out over the efficiency loss for the purposes of LxEngine.
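Stripped of the GL details, the pattern is just: bake the per-value work into closures once, then replay the list on every activation. A minimal sketch:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the "compiled instruction list" pattern: generation happens once
// (when the material is first used); activation just runs the closures.
struct InstructionList
{
    std::vector<std::function<void()>> instructions;

    void activate() const
    {
        for (const auto& instruction : instructions)
            instruction();
    }
};
```

Each closure captures whatever it needs – locations, texture ids, pointers back into the rasterizer context – which is where the “dynamic code generation” feel comes from.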

For example, below is a code snippet from the shader attribute instruction generation function (or see the latest version of the material source code for more details):

```cpp
std::function<void()>
Material::_generateInstruction(RasterizerGL* pRasterizer, const Attribute& attribute, lx0::lxvar& value)
{
    ...

    if (attribute.name == "vertNormal")
    {
        return [=]() {
            auto& vboNormals = pRasterizer->mContext.spGeometry->mVboNormal;
            if (vboNormals)
            {
                gl->bindBuffer(GL_ARRAY_BUFFER, vboNormals);
                gl->vertexAttribPointer(location, 3, GL_FLOAT, GL_FALSE, 0, 0);
                gl->enableVertexAttribArray(location);
            }
            else
                gl->disableVertexAttribArray(location);

            check_glerror();
        };
    }
```

### Setting a custom uniform

The non-standard unifTexture0 uniform is set somewhat differently. The material definition – in addition to the vertex and fragment shaders – also includes a simple JSON parameter description file. In this case, it contains only one parameter:

```json
{
    parameters: {
    }
}
```

The _generateInstruction() method loops over all unrecognized uniform names and searches for a user-specified parameter value for each. In this case, it finds “unifTexture0” as both an unrecognized uniform and a value in the parameter mapping.

Since the information about the uniform also includes the data type (GL_SAMPLER_1D), LxEngine can figure out that it should interpret that string value as an image filename, load that file and store it in the texture cache, and generate an instruction to set that texture when activating the material:

```cpp
else if (uniform.type == GL_SAMPLER_1D)
{
    auto filename = value.as<std::string>();

    TexturePtr spTexture = pRasterizer->mTextureCache.findOrCreate(filename);
    GLuint textureId = spTexture->mId;

    // Activate the corresponding texture unit and set *that* to the GL id
    return [=]() {
        const auto unit = pRasterizer->mContext.textureUnit++;

        // Set the shader uniform to the *texture unit* containing the texture
        // (NOT the GL id of the texture)
        gl->uniform1i(loc, unit);

        gl->activeTexture(GL_TEXTURE0 + unit);
        gl->bindTexture(GL_TEXTURE_1D, textureId);

        // Set the parameters on the texture unit
        gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, mFilter);
        gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, mFilter);
        gl->enable(GL_TEXTURE_1D);
        check_glerror();
    };
}
```

The point really is that adding a simple shader, like this toon shader, is simple to do. The new material system in LxEngine makes it trivial as common uniforms and attributes are automatically set up and the mechanism for specifying custom uniforms is quite easy.

The objective is an engine designed to make experimentation and research simple.

The toon shader run with an alternate texture (i.e. same material “class” / shader program, different material “instance” / parameterization) displayed with a 2-pass, 9×9 Gaussian fullscreen blur active.

### What’s Next? LxEngine Tutorial 4

I’m currently working on cleaning up and writing up a good description of “Tutorial 4” of LxEngine. I want to add a couple more effects to make the tutorial feel a bit more substantial first (perhaps add shadow mapping?), but would also like to get a finished tutorial out the door. As a preview, the fourth tutorial will include at least the following: writing an application via Javascript, geometry generated from scripts, multipass rendering, multithreading, time-lapse events, and…well, probably more if I don’t hurry up and finish this off!

Written by arthur

March 22nd, 2012 at 10:29 pm

Posted in lxengine

## Material System Update

An hour here, an hour there, LxEngine is gradually adopting the new material system. The refactoring is a work-in-progress and (this being a hobby project) some percentage of the older code has been unapologetically broken. I’m almost around the bend such that the advantages – other than simply adding cleaner, more logical code – will start coming online. For now, here’s the Stanford Bunny run with a two-pass blur/color-inversion shader on top of the first pass’s quilt-like shader…

So much more still to do: better caching, paging and load on demand, loading materials entirely from file, inheritance of parameters, dynamically generating dependent data, sharing a common architecture with ray-tracer, etc., etc., etc.

(I also kicked off a \$60 experiment with a small SSD to see if it helps with development on my 4 year old desktop – should arrive next week.)

Written by arthur

March 18th, 2012 at 3:56 pm

Posted in lxengine

## Material System Design

Where I’m heading with the LxEngine material system design…

### Classes

MaterialType will be composed of:
- a unique string name
- a vertex, geometry, frag, etc. shader set
- an optional JSON list of default named parameter values for some or all of the attributes & uniforms in the shaders

MaterialInstance will be composed of:
- an optional unique string name
- a string name identifying the MaterialType to use
- a JSON list of named parameter values

### Runtime

The overall approach is to allow an arbitrary shader chain to be compiled into a program. The code will then analyze the list of uniforms and attributes to automatically set the values for the MaterialInstance when it is activated.

The set of actions necessary to set up a particular shader is handled by “compiling” a std::vector<std::function<void()>> of instructions.

On the first use of a MaterialType:

- the material type will be compiled into a GLSL program (nothing special)

On the first use of a MaterialInstance:

- query the MaterialType’s list of uniforms and attributes
- for each parameter…
  - if the parameter list specifies a value for that parameter:
    - if a direct value is specified and the types match, generate a std::function<> to set that value
    - if an indirect value is specified, generate a std::function<> to set the value from the current context
  - else, check the above in the material default parameters
  - else, if it is a “standard” name (e.g. unifProjectionMatrix), generate a std::function<> to set the context default
  - else, report an error that a parameter cannot be set for the material instance

The above may involve pulling data from the material type, the material instance, the geometry, etc.

On use of the MaterialInstance:
- Call each std::function<> generated on the first use

This basically amounts to dynamically generated code. The std::function<> incurs some overhead, but it’s very easy to use and the code is relatively elegant.
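The parameter-resolution chain above boils down to an ordered lookup – a sketch with string values standing in for real typed parameters (names are placeholders, not the actual engine types):

```cpp
#include <cassert>
#include <initializer_list>
#include <map>
#include <stdexcept>
#include <string>

using ParamMap = std::map<std::string, std::string>;

// Sketch of the fallback order described above: the instance's parameters win,
// then the material type's defaults, then the "standard" engine-supplied
// names; anything left over is an error.
std::string resolveParameter(const std::string& name,
                             const ParamMap&    instanceParams,
                             const ParamMap&    typeDefaults,
                             const ParamMap&    standardNames)
{
    for (const ParamMap* source : { &instanceParams, &typeDefaults, &standardNames })
    {
        auto it = source->find(name);
        if (it != source->end())
            return it->second;
    }
    throw std::runtime_error("cannot set parameter '" + name + "'");
}
```

The real version would, of course, hand back a std::function<> (direct value or pull-from-context) rather than a string.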

Written by arthur

March 8th, 2012 at 11:10 am

Posted in lxengine