## Wireframe Overlay

Added a new effect to Tutorial 5: a single-pass surface and wireframe rendering effect. Note how the wireframe is anti-aliased, fades into the surface color, and displays without z-fighting.

The effect derives directly from the shader in the previously mentioned OpenGL 4.0 Shading Language Cookbook. The implementation there, according to the author, derives from the one presented in this nVidia whitepaper. I won’t go into the details of the effect (since the explanation is available both in the whitepaper and the book) and will only briefly comment on the implementation.

In short, it uses a geometry shader to compute the distance of each fragment from each edge of each triangle. The fragment shader then uses those distances to determine whether to use the surface shading color or the wireframe edge color. A mix() call is used rather than a discrete choice in order to antialias the edges. Because it’s a single-pass shader, there’s no chance of z-fighting.
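The blend at the heart of the antialiasing can be sketched in plain C++ (wireMix and the parameter names are illustrative helpers, not the book's actual code):

```cpp
#include <algorithm>
#include <cassert>

// GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between.
static float smoothstep(float edge0, float edge1, float x)
{
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Blend factor for the wireframe color given the fragment's distance (in
// pixels) to the nearest triangle edge.  Inside halfWidth the edge color
// dominates; over the next 'feather' pixels it fades smoothly into the
// surface color, which is what antialiases the lines.
static float wireMix(float distToEdge, float halfWidth, float feather)
{
    return 1.0f - smoothstep(halfWidth, halfWidth + feather, distToEdge);
}
```

The fragment shader then does the equivalent of `mix(surfaceColor, wireColor, wireMix(d, ...))` instead of a hard if/else on the distance.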

### Setting the “ViewportMatrix”

A quick note, since the OpenGL 4.0 Shading Language Cookbook does not explain how to set up the “ViewportMatrix”. It’s pretty simple, but just to clarify, here’s the code to set up the viewport matrix:

GLint viewport[4];
gl->getIntegerv(GL_VIEWPORT, viewport);

float halfWidth = float(viewport[2]) / 2.0f;
float halfHeight = float(viewport[3]) / 2.0f;

glm::mat4 m(
    glm::vec4(halfWidth, 0.0f,       0.0f, 0.0f),
    glm::vec4(0.0f,      halfHeight, 0.0f, 0.0f),
    glm::vec4(0.0f,      0.0f,       1.0f, 0.0f),
    glm::vec4(halfWidth, halfHeight, 0.0f, 1.0f)
);

gl->uniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(m));

The principle is quite simple. After the perspective divide, OpenGL coordinates are in normalized device coordinates (NDC), ranging from -1 to 1 in x, y, and z. The viewport matrix therefore scales these values by half the width/height and offsets them by half the width/height. This maps -1 to 0 (e.g. -halfWidth + halfWidth = 0) and 1 to the full width (halfWidth + halfWidth = width). The z values are neither scaled nor offset since, for this purpose, window-space z retains the same range as NDC z.
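As a sanity check of that mapping, here is the arithmetic in plain C++ (no GLM; ndcToWindow is just an illustrative helper applying the matrix's scale and translation):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Apply the viewport matrix above to an NDC point (x, y each in [-1, 1]):
// window = ndc * half-extent + half-extent.
static Vec2 ndcToWindow(Vec2 ndc, float width, float height)
{
    float halfWidth  = width  / 2.0f;
    float halfHeight = height / 2.0f;
    return { ndc.x * halfWidth  + halfWidth,
             ndc.y * halfHeight + halfHeight };
}
```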

### One More Image…

One last image of a slightly tweaked version of the shader that fades the wireframe intensity based on the diffuse intensity. Also, snow!

Written by arthur

April 10th, 2012 at 7:06 pm

Posted in lxengine

## Snow (i.e. Point Sprites)

I added a quick, cheap attempt at snow to Tutorial 5. The primary purpose is to demonstrate point sprites.

Below is a quick vimeo clip of the results:

### What and What’s Next

The effect is achieved by sending a point list (GL_POINTS) of roughly 800 points and using a geometry shader to create a screen-aligned, texture-mapped, alpha-blended quad for each point.

An extremely simple particle system is in effect here that resets the z position of the sprite when it falls below 0 (I feel like it’s almost an abuse of language in general to use a specialized term like “particle system” for as simple an update loop as that!). The point sprite shader derives directly from the code provided in OpenGL 4.0 Shading Language Cookbook by David Wolff (which I recommend, by the way – it’s a pretty decent book if you’re like me and hadn’t paid a lot of attention to what’s been happening since GLSL first was introduced).
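That update loop, reduced to a sketch (Flake and updateSnow are made-up names, not the tutorial's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One position per snowflake.
struct Flake { float x, y, z; };

// Each frame: move every flake down; when it falls below the ground plane
// (z < 0), wrap it back up to the top, keeping its x/y position.
static void updateSnow(std::vector<Flake>& flakes, float fallSpeed, float ceiling)
{
    for (auto& f : flakes)
    {
        f.z -= fallSpeed;
        if (f.z < 0.0f)
            f.z += ceiling;   // respawn at the top
    }
}
```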

Next, it’d be great to get the “Utah sky model” working in the tutorial, if I still had my copy of More OpenGL Game Programming around (which is a pretty good book, but I don’t endorse too highly – too much API reference material or other information easily found on the web is needlessly listed) or still had a link to the web page that describes the sky shader code that was later included in the book. Or maybe I could just do a little reading here to add some nice sky effects…

Oh, and adding some first-person navigation and physics would be pretty nice as well if I get back into working with Bullet (I’m not sure I really liked the design I used with the LxMorrowind sample to implement physics – and there were some bugs in there that I never quite understood).

Another minor OpenGL lesson learned during the implementation: glDepthMask() affects calls to glClear(). That makes sense, but I wish I had thought of it before spending a while in the debugger wondering why my shader wasn’t working.

Written by arthur

April 6th, 2012 at 5:26 pm

Posted in lxengine

## Pixelation Effect

I recently “borrowed” a pixelation effect posted some time ago by compsci89 to the Irrlicht forums. While it seems counter-intuitive on some level to write an effect that intentionally reduces pixel resolution and color depth, I like the effect:

The effect, modified from the original form to do non-uniform color reduction and a mix of pixelation and actual value, is now in the LxEngine code base and used in the work-in-progress Tutorial 5 code.

It does make me feel good about the state of LxEngine that adding a straightforward effect like this took about 2 minutes.

Written by arthur

April 4th, 2012 at 8:44 pm

Posted in lxengine

## Material System

Work continues on the LxEngine material system…

The above is a simple “toon” shader on the Suzanne model. The shader code is based on the simple example provided at LightHouse3d, but bases the color on a 1D texture look-up into a slightly blurred color texture rather than discrete if-else statements.
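The lookup idea is easy to sketch on the CPU: quantizing the diffuse intensity through a small color table is essentially what the 1D texture fetch does (the band values below are made up; a slightly blurred texture softens the band edges for free):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// A tiny stand-in for the 1D toon texture: four discrete shades.  With
// nearest filtering, the GLSL texture() call amounts to exactly this
// indexing by intensity.
static float toonShade(float intensity)
{
    static const std::array<float, 4> bands = { 0.2f, 0.45f, 0.7f, 1.0f };
    int i = std::clamp(int(intensity * bands.size()), 0, int(bands.size()) - 1);
    return bands[i];
}
```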

### The Video

Here’s a quick video of some of the material effects:

### The Code

At the highest level, the implementation of the new shader is very simple. The new shader is defined by creating a new directory “ToonSimple” in the materials sub-directory of the media directory. This directory contains a vertex shader, a fragment shader, and a JSON parameter description.

The material is then loaded in the C++ code via a call to

pRasterizer->acquireMaterial("ToonSimple")

and attached to the Instance's spMaterial member. LxEngine handles all the shader loading and parameter activation.

On to the details…

The vertex shader code is quite simple and uses a fixed light direction:

uniform mat4    unifProjMatrix;
uniform mat4    unifViewMatrix;
uniform mat3    unifNormalMatrix;

in      vec3    vertNormal;

varying out float fragIntensity;

void main()
{
    // Keep it simple and use a fixed light direction
    vec3 lightDir = vec3(.5, -.5, 1.0);

    // The fragIntensity is effectively just the intensity of the diffuse
    // value from the Phong reflection model.
    //
    fragIntensity = dot(normalize(lightDir), unifNormalMatrix * vertNormal);

    gl_Position = unifProjMatrix * unifViewMatrix * gl_Vertex;
}

The uniforms – unifProjMatrix, unifViewMatrix, unifNormalMatrix – are all “standard” LxEngine names, so the engine will automatically set the correct matrix values when activating the shader. Likewise with the attribute vertNormal: it too will be set automatically by the existing engine code. (This will be explained momentarily.)

The fragment shader is quite simple:

#version 150
#extension GL_ARB_explicit_attrib_location : enable

uniform sampler1D unifTexture0;

in      float fragIntensity;

layout(location = 0) out vec4 outColor;

void main()
{
    outColor = texture(unifTexture0, fragIntensity);
}

Now the fragment shader does have an interesting detail: the uniform unifTexture0 is not a “standard” LxEngine uniform. (How could it be? The transformation matrices are common to many shaders, as are properties like the geometry’s normals, but is a texture map ever going to be “standard” enough that the engine would know what to set?)

This is a custom uniform, but it still does not require any C++ code for the engine to set its value properly. We’ll get to that momentarily.

### Automatically setting the shader variables

The automatic setting of uniforms and attributes is done via calls to getActiveUniform and getActiveAttrib after the GLSL program is compiled. The MaterialClass class wraps the GLSL program and provides iteration functions that exemplify the use of these OpenGL calls:

void
MaterialClass::iterateUniforms (std::function<void(const Uniform& uniform)> f)
{
    int uniformCount;
    gl->getProgramiv(mProgram, GL_ACTIVE_UNIFORMS, &uniformCount);
    for (int i = 0; i < uniformCount; ++i)
    {
        Uniform uniform;
        char    uniformName[128];
        GLsizei uniformNameLength;

        gl->getActiveUniform(mProgram, GLuint(i), sizeof(uniformName), &uniformNameLength, &uniform.size, &uniform.type, uniformName);

        if (uniformNameLength >= sizeof(uniformName))
        {
            throw lx_error_exception("GLSL program contains a uniform with too long a name size!");
        }
        else
        {
            uniform.name = uniformName;
            uniform.location = gl->getUniformLocation(mProgram, uniformName);
            f(uniform);
        }
    }
}

The LxEngine internal rasterizer code, after compiling a GLSL shader for the first time, will iterate over the uniforms and attributes to generate a set of values that need to be set whenever that material is made active. The set of “instructions” necessary to set those values is encapsulated in a std::vector<std::function<void()>> – which, in effect, allows a sort of dynamic code generation at the expense of a bit of overhead to the std::function calls. The flexibility and simplicity definitely win out over the efficiency loss for the purposes of LxEngine.
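The "compiled instruction list" pattern, reduced to a self-contained toy (the names here are illustrative, not LxEngine's actual classes – the real instructions call into OpenGL rather than plain lambdas):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Analyze once, then replay a list of closures every time the material is
// activated.  Each closure captures whatever state it needs, which is what
// makes this a sort of dynamic code generation.
struct MaterialSketch
{
    std::vector<std::function<void()>> instructions;

    void activate()
    {
        for (auto& f : instructions)
            f();
    }
};
```

The overhead is one indirect call per instruction per activation, which is negligible next to the GL driver work the real instructions perform.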

For example, below is a code snippet from the shader attribute instruction generation function (or see the latest version of the material source code for more details):

std::function<void()>
Material::_generateInstruction(RasterizerGL* pRasterizer, const Attribute& attribute, lx0::lxvar& value)
{
    ...

    if (attribute.name == "vertNormal")
    {
        return [=]() {
            auto& vboNormals = pRasterizer->mContext.spGeometry->mVboNormal;
            if (vboNormals)
            {
                gl->bindBuffer(GL_ARRAY_BUFFER, vboNormals);
                gl->vertexAttribPointer(location, 3, GL_FLOAT, GL_FALSE, 0, 0);
                gl->enableVertexAttribArray(location);
            }
            else
                gl->disableVertexAttribArray(location);

            check_glerror();
        };
    }

### Setting a custom uniform

The non-standard unifTexture0 uniform is set somewhat differently. The material definition – in addition to the vertex and fragment shaders – also includes a simple JSON parameter description file. In this case, it contains only one parameter:

{
    parameters : {
    }
}

The _generateInstruction() method loops over all unrecognized uniform names and searches for a user-specified parameter value for each. In this case, it finds "unifTexture0" as both an unrecognized uniform and a value in the parameter mapping.

Since the information about the uniform also includes the data type (GL_SAMPLER_1D), LxEngine can figure out how to interpret that string value as an image filename, load that file into the texture cache, and generate an instruction to set that texture when activating the material:

else if (uniform.type == GL_SAMPLER_1D)
{
    auto filename = value.as<std::string>();

    TexturePtr spTexture = pRasterizer->mTextureCache.findOrCreate(filename);
    GLuint textureId = spTexture->mId;

    // Activate the corresponding texture unit and set *that* to the GL id
    return [=]() {
        const auto unit = pRasterizer->mContext.textureUnit++;

        // Set the shader uniform to the *texture unit* containing the texture
        // (NOT the GL id of the texture)
        gl->uniform1i(loc, unit);

        gl->activeTexture(GL_TEXTURE0 + unit);
        gl->bindTexture(GL_TEXTURE_1D, textureId);

        // Set the parameters on the texture unit
        gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, mFilter);
        gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, mFilter);
        gl->enable(GL_TEXTURE_1D);
        check_glerror();
    };
}

The point is that adding a shader like this toon shader is simple. The new material system in LxEngine makes it nearly trivial: common uniforms and attributes are set up automatically, and the mechanism for specifying custom uniforms is quite easy.

The objective is an engine designed to make experimentation and research simple.

The toon shader run with an alternate texture (i.e. same material “class” / shader program, different material “instance” / parameterization) displayed with a 2-pass, 9×9 Gaussian fullscreen blur active.

### What’s Next? LxEngine Tutorial 4

I’m currently working on cleaning up and writing up a good description of "Tutorial 4" of LxEngine. I want to add a couple more effects to make the tutorial feel a bit more substantial first (perhaps add shadow mapping?), but would also like to get a finished tutorial out the door. As a preview, the fourth tutorial will include at least the following: writing an application via Javascript, geometry generated from scripts, multipass rendering, multithreading, time-lapse events, and…well, probably more if I don’t hurry up and finish this off!

Written by arthur

March 22nd, 2012 at 10:29 pm

Posted in lxengine

## Material System Update

An hour here, an hour there, LxEngine is gradually adopting the new material system. The refactoring is a work-in-progress and (this being a hobby project) some percentage of the older code has been unapologetically broken. I’m almost around the bend to where the advantages – other than simply having cleaner, more logical code – will start coming online. For now, here’s the Stanford Bunny rendered with a two-pass blur/color-inversion shader on top of the first pass’ quilt-like shader…

So much more still to do: better caching, paging and load-on-demand, loading materials entirely from file, inheritance of parameters, dynamically generating dependent data, sharing a common architecture with the ray-tracer, etc., etc., etc.

(I also kicked off a $60 experiment with a small SSD to see if it helps with development on my 4 year old desktop – should arrive next week.)

Written by arthur

March 18th, 2012 at 3:56 pm

Posted in lxengine

## Material System Design

Where I’m heading with the LxEngine material system design…

### Classes

MaterialType will be composed of:

- a unique string name
- a vertex, geometry, fragment, etc. shader set
- an optional JSON list of default named parameter values for some or all of the attributes & uniforms in the shaders

MaterialInstance will be composed of:

- an optional unique string name
- a string name identifying the MaterialType to use
- a JSON list of named parameter values

### Runtime

The overall approach is to allow an arbitrary shader chain to be compiled into a program. The code will then analyze the list of uniforms and attributes to automatically set the values for the MaterialInstance when it is activated. The set of actions necessary to set up a particular shader is handled by “compiling” a std::vector<std::function<void()>> of instructions.

On the first use of a MaterialType:

- the material type will be compiled into a GLSL program (nothing special)

On the first use of a MaterialInstance:

- query the MaterialType’s list of uniforms and attributes
- for each parameter…
  - if the parameter list specifies a value for that parameter
    - if a direct value is specified and the types match, generate a std::function<> to set that value
    - if an indirect value is specified, generate a std::function<> to set the value from the current context
    - else check the above in the material default parameters
  - else, if it is a “standard” name (e.g. unifProjectionMatrix), generate a std::function<> to set the context default
  - else report an error that a parameter cannot be set for the material instance

The above may involve pulling data from the material type, the material instance, the geometry, etc.

On use of the MaterialInstance:

- call each std::function<> generated on the first use

This basically amounts to dynamically generated code. The std::function<> incurs some overhead, but it’s very easy to use and the code is relatively elegant.

Written by arthur

March 8th, 2012 at 11:10 am

Posted in lxengine

## Providing a Link to Generated Javascript

I’m working on LxLang, a GLSL-like language that can be translated to Javascript, C++, and GLSL. The motivation is to avoid maintaining the same basic, low-level graphics functions in three different languages.

The translator itself is written in Javascript, so the LxLang -> Javascript translation can occur live in a web page. However, I also wanted to provide a link to the translated source as a sort of “compiled” version, since I highly suspect anyone other than myself would be more willing to use the JS code than code written in some language they’ve never heard of.

Enter the Data URI: a way to provide a link to that generated source without ever having to save it to a file. I’ve used this scheme before with the extremely nice toDataURL method of the HTML5 CANVAS element as a way to generate downloadable PNGs from dynamically generated CANVAS graphics. So: I should be able to do this with generated Javascript…

### Data URI for Text

The solution I’m using is quite simple. Call the Javascript escape function on the source text, add a “data:text/plain,” prefix, and set that as the URL. Done.

var dataUri = 'data:text/plain,' + escape(source);
$("#download-link").attr("href", dataUri);

The one downside that I haven’t been able to work past yet is that Chrome disables the “Save as…” option when you visit the resulting page. I don’t understand this. To download the file, you need to right-click and save the link rather than visit the page and save from there. Strange.

Written by arthur

December 31st, 2011 at 2:34 pm

Posted in company


LxEngine currently allows simple objects (i.e. one vertex buffer) to contain both flat-shaded and smooth-shaded triangles – finer granularity than glShadeModel() would allow.

This post shows you how to do per-pixel flat shading!

The engine currently does this by sending in per-face data in the form of a 1D texture along with the object. For each triangle, the geometry shader then does a texture look-up based on the primitive ID to determine whether to override the per-vertex normals with a single normal based on the cross product of the vectors between vertex positions. Not necessarily the most efficient approach, but it’s correct and “just works” (which is great for a hobby project aimed at rapid development and experimentation).

I realized there’s a simple way to implement flat shading in the GLSL fragment shader in a GPU-expensive manner…

vec3 X = dFdy(fragVertexEc);
vec3 Y = dFdx(fragVertexEc);
vec3 flatNormalEc = normalize( cross(X, Y) );

(…where “fragVertexEc” is the fragment vertex position in eye coordinates).

Assuming the driver implementation is computing the derivative of the vertex position change “locally” (in the sense of only accounting for the vertices in the current triangle – to do otherwise would be really surprising), then the x and y derivatives are going to lie in the plane of the triangle: meaning their cross product is the normal of the triangle plane. Voila! Flat shading at the fragment shader level.
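To convince yourself the cross product trick works, here is the arithmetic in plain C++ (Vec3 and cross are local helpers, not engine code): any two non-parallel vectors lying in a plane – which is what the screen-space derivatives of the interpolated position are – cross to a vector along that plane's normal.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```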

Of course, the dFdx() / dFdy() GLSL functions are fairly expensive, so this is not necessarily an efficient approach in itself. Also, there’s nothing “new” here: anything that can be computed in the fragment shader that is constant across the entire primitive (or varies linearly between vertices) could be calculated in the vertex shader more cheaply. But it is a bit interesting. How about using a gray-scale texture to GLSL mix() between flat-shaded and smooth-shaded triangles on a model? How about animating that texture so the model shifts between tessellated and smooth form? Could be an interesting effect.

Just because you can doesn’t mean you should; but it’s nice to know what you can do.

### (But what brought this up?)

Writing a GLSL fragment shader for bump mapping. (In particular, one that doesn’t use the texture coordinates of the bump map and instead computes an implicit tangent space.)

Using the derivative functions, it’s relatively easy to compute the new normal relative to the triangle’s “flat” normal. The resulting height-map-affected normal then needs to be re-adjusted to lie in the pseudo-“smooth” surface of the object.

//
// Measure the rate of change of the value as we move in x and y.
// Then measure the rate of change of the position in eye coordinates
// as we move in x and y, so that we can re-express the value change (dV) in
// eye coordinates - and thus create a new eye coordinate normal.
//
vec2 dV = vec2(dFdx(value), dFdy(value));
vec3 dPdx = dFdx(fragVertexEc);
vec3 dPdy = dFdy(fragVertexEc);

//
// dPdx is effectively the vector one unit in X as measured in eye space;
// thus to do height mapping, we bump up the "height" (i.e. direction of
// the fragment normal) by the relative change in value of the height as
// we move one unit in X (i.e. dV.x).  Repeat the same for Y.  Now we have
// a "tilted" surface plane to compute a new normal from.
//
vec3 T = dPdx + dV.x * fragNormalEc;
vec3 B = dPdy + dV.y * fragNormalEc;
vec3 N = normalize( cross(T, B) );

//
// But...
//
// The new bump-based normal was computed relative to the plane of the
// triangle: i.e. a flat shaded normal.  We'll assume flat and smooth
// normal are 'relatively similar' and use a simple linear delta between the
// flat and smooth normal to adjust to the bump normal onto the "smooth"
// surface of the object.
//
vec3 delta = fragNormalEc - normalize(cross(dPdx, dPdy));

return normalize(N + delta);

The above shader fragment requires a lot of GPU cycles, but note how it doesn’t even require UV texture coordinates to work – only the height value from the bump map. Generality over efficiency for this implementation.

Written by arthur

September 16th, 2011 at 7:24 pm

Posted in lxengine

## GLSL ray-tracing to debug point lights

While debugging, wouldn’t it sometimes be nice to see where the point lights in your scene are? Wouldn’t it be nice to do this via a shader modification rather than by adding new geometry to the scene? Sure, it would.

Here’s a quick, rough post on getting an effect like the blue/orange sphere around the point lights below…

Debugging spheres displayed around the point lights using GLSL code (no added geometry!)

### GLSL Code

It’s pretty easy to add such functionality by adding a ray-traced sphere intersection test to the fragment shader. The below code is a bit of a hack with hard-coded values, but this is a debugging technique so I’m okay with that. In the fragment shader’s inner loop over the lights, add something like the below:

for (int i = 0; i < unifLightCount; ++i)
{
    ...

    //
    // BEGIN point light debug
    //
    vec3 U2 = unifLightPosition[i].xyz;
    vec3 V2 = normalize(fragVertexEc);
    float dotUV2 = dot(U2, V2);
    if (dotUV2 > 0)
    {
        float D2 = dot(U2,U2) - dotUV2 * dotUV2;
        if (D2 < 20)
        {
            c = mix(c, unifLightColor[i], (1 - D2 * D2 / 400));
            c = mix(vec3(1,1,1) - c, c, (1 - D2 / 20));
        }
    }
    //
    // END point light debug
    //
}

What’s happening in the above?

• U2 is the unnormalized vector from the eye (i.e. 0,0,0) to the light in eye coordinates
• V2 is the normalized vector from the eye to the fragment (i.e. the ray, if this were ray-tracing)
• dotUV2 is |U2|·cos(a) for the angle a between those vectors (V2 is normalized, U2 is not)
• If dotUV2 is negative, then the angle > 90 degrees, meaning an intersection with a sphere centered about the light is not possible (assuming we’re not inside the sphere)
• D2 is then the squared distance from the ray to the sphere / light center (via some trigonometry, that’s what it simplifies to)
• Next we check against an arbitrary, this-looks-good threshold (20 here – a squared distance) for how large we want the debug sphere to be
• Lastly we mix in some color for the light, in this case putting in a mix of the light color itself bordered by the usual color inverted (c = vec3(1,1,1) would be a much simpler alternative)

This is doing a bit of sphere ray-tracing in GLSL that can be used in a lot of ways. For example, light atmospheric effects: how about adding light color, modulated by distance from the light and multiplied by an animated noise texture, to simulate smoke? Distance from a point can be used for plenty of effects.
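The distance computation itself is easy to verify on the CPU (Vec3, dot, and raySphereDist2 are just illustrative helpers): for a ray from the origin with unit direction V and a point at U, the squared distance from U to the ray is |U|² − (U·V)² – Pythagoras applied to the projection of U onto the ray.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// The D2 in the shader above: squared distance from point U to the ray
// through the origin with (normalized) direction V.
static float raySphereDist2(Vec3 U, Vec3 V)
{
    float uv = dot(U, V);
    return dot(U, U) - uv * uv;
}
```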

### Update: Glow Effect

From the debugging code described above (plus a couple of lines of code to load Morrowind light models as well), it was not very difficult to add a glow effect around lights:

Before - no glow effect

After - atmospheric glow effect on lights

The code is not that different from the debugging trick described above. The main difference is to reject the effect if the fragment is in front of the sphere (i.e. the equivalent of a z-test):

//
// Light glow effect
//
vec3 U2 = unifLightPosition[i].xyz;
vec3 V2 = normalize(fragVertexEc);
float dotUV2 = dot(U2, V2);
if (dotUV2 > 0)
{
    float D2 = dot(U2,U2) - dotUV2 * dotUV2;
    float R = 120;
    float T = D2 / (R*R);
    if (T < 1 && length(fragVertexEc) > length(U2) - R)
        c += unifLightColor[i] * .5 * pow(1 - T, 6);
}

Written by arthur

July 15th, 2011 at 10:24 am

Posted in lxengine

The shader builder from a while ago has been resurrected/rewritten and integrated into LxEngine. I prioritized this as I wanted LxEngine Tutorial 3 to use procedural materials to produce a somewhat advanced look, given the naive objectives of Tutorial 1 (i.e. get a window up) and Tutorial 2 (i.e. draw something as basic as possible).

The shader builder is not complete yet, but the core architecture is done. With the core architecture in place, advancing the code is largely a matter of adding support for more parameters and adding convenience via simple bits of intelligence in the builder. Eventually, I’d like the shader builder to be a stand-alone component that any GLSL application can use – that is to say, make it usable outside LxEngine. Of course, the application using it will inevitably have to adapt to some of the variable naming, but that’s more or less an unavoidable part of interfacing with any code that has named arguments.

### What?

Below is the demonstration screenshot of a cube with an unlit, nested spherical checker map. And immediately following is the XML + JSON code that’s the input to the shader generator.

Nested checker map procedural, dynamically generated from a JSON-based shader graph

<Material id="checker_nested">
{
    graph : {
        _type : "solid",
        color : {
            _type : "checker",
            color0 : {
                _type : "checker",
                color0 : [ 1, 0, 0 ],
                color1 : [ 0, 0, 0 ],
                uv : {
                    _type : "spherical",
                    scale : [ 8, 8 ],
                },
            },
            color1 : [ 1, 1, .8 ],
            uv : {
                _type : "spherical",
                scale : [ 2, 2 ],
            },
        },
    }
}
</Material>

Update:

Below is a slightly more interesting looking example: a weave pattern nested with a checker pattern.  The code for the weave pattern comes from the LxEngine wiki (with the Javascript translated to GLSL):

LxEngine Shader Builder example - a weave pattern with a nested checker pattern

### How?

I’ll just fly through the how…

The graph is essentially a collection of functions, each with one output and N inputs.  The functions in the above are “solid”, “checker”, and “spherical”.  Each of these functions is packaged as a snippet of GLSL code (i.e. the actual function code) plus a chunk of JSON annotation that describes the function’s data types and default values.  (These function + annotation pairs are not shown above.)

The builder then merely walks the JSON graph, collects the set of functions being used, and creates a tree of calls where each parameter is either (a) the default, because it was unspecified in the material description, (b) a hard-coded value in the shader, because a hard-coded value was used in the material description, or (c) the result of another function call – i.e. its value is generated by recursively repeating the process.

Now, part (b) is inefficient at the moment, as the same graph with two different parameterizations will generate two distinct shaders because of the hard-coding; however, that’s easily fixable and on the todo list.  (The fix is basically to generate uniforms for all hard-coded values and modify the builder to output a shader plus a set of uniform values.  This would then be used along with a simple cache mechanism so the same shader gets returned for identical graphs.)
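A toy version of that recursive walk (the node layout and names are invented for illustration): each graph node either emits a literal inline – case (b) – or recursively emits a nested function call – case (c).

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A node is either a literal (func empty => emitted inline) or a function
// call whose arguments are themselves nodes.
struct GraphNode
{
    std::string func;                               // empty => literal
    std::string literal;
    std::vector<std::shared_ptr<GraphNode>> args;
};

// Recursively turn the node tree into a nested GLSL call expression,
// hard-coding literal parameters directly into the generated source.
static std::string emit(const GraphNode& n)
{
    if (n.func.empty())
        return n.literal;
    std::string s = n.func + "(";
    for (size_t i = 0; i < n.args.size(); ++i)
        s += (i ? ", " : "") + emit(*n.args[i]);
    return s + ")";
}
```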

Written by arthur

June 29th, 2011 at 9:29 pm

Posted in lxengine