## Material System

Work continues on the LxEngine material system…

The above is a simple “toon” shader on the Suzanne model. The shader code is based on the simple example provided at Lighthouse3d, but bases the color on a 1D texture look-up into a slightly blurred color texture rather than on discrete if-else statements.
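The idea is easy to sketch outside of GLSL (a CPU-side illustration with arbitrary band values, not the actual shader logic): the if-else version hard-codes the bands, while the look-up version reads them from a table – which is exactly what the 1D texture provides on the GPU, with the added benefit that blurring the texture softens the band transitions for free.

```cpp
#include <algorithm>
#include <array>
#include <string>

// If-else banding, in the style of the Lighthouse3d example (thresholds are
// illustrative).
std::string toonBandIfElse (float intensity)
{
    if      (intensity > 0.95f) return "bright";
    else if (intensity > 0.5f)  return "mid";
    else if (intensity > 0.25f) return "dark";
    else                        return "darkest";
}

// Table look-up banding: the table plays the role of the 1D color texture,
// sampled nearest-neighbor by intensity.
std::string toonBandLookup (float intensity)
{
    static const std::array<std::string, 8> table =
        { "darkest", "darkest", "dark", "dark", "mid", "mid", "mid", "bright" };

    float  t     = std::clamp(intensity, 0.0f, 1.0f);
    size_t texel = std::min(size_t(t * table.size()), table.size() - 1);
    return table[texel];
}
```

Swapping the “texture” contents changes the whole look without touching the shader – which is also why the material can later be re-parameterized with an alternate texture.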

### The Video

Here’s a quick video of some of the material effects:

### The Code

At the highest level, the implementation of the new shader is very simple. The new shader is defined by creating a new directory, “ToonSimple”, in the materials sub-directory of the media directory. This directory contains a vertex shader, a fragment shader, and a JSON parameters description.

The material is then loaded in the C++ code via a call to

    pRasterizer->acquireMaterial("ToonSimple")

and attached to the Instance’s spMaterial member. LxEngine handles all of the shader loading and parameter activation.

On to the details…

The vertex shader code is quite simple and uses a fixed light direction:

    uniform mat4    unifProjMatrix;
    uniform mat4    unifViewMatrix;
    uniform mat3    unifNormalMatrix;

    in      vec3    vertNormal;

    varying out float fragIntensity;

    void main()
    {
        // Keep it simple and use a fixed light direction
        vec3 lightDir = vec3(.5, -.5, 1.0);

        // The fragIntensity is effectively just the intensity of the diffuse
        // value from the Phong reflection model.
        //
        fragIntensity = dot(normalize(lightDir), unifNormalMatrix * vertNormal);

        gl_Position = unifProjMatrix * unifViewMatrix * gl_Vertex;
    }

The uniforms – unifProjMatrix, unifViewMatrix, unifNormalMatrix – are all “standard” LxEngine names, so the engine will automatically set the correct matrix values when activating the shader. Likewise with the attribute vertNormal: it too will be set automatically by the existing engine code. (This will be explained momentarily.)

The fragment shader is quite simple:

    #version 150
    #extension GL_ARB_explicit_attrib_location : enable

    uniform sampler1D unifTexture0;

    in      float fragIntensity;

    layout(location = 0) out vec4 outColor;

    void main()
    {
        outColor = texture(unifTexture0, fragIntensity);
    }

Now the fragment shader does have an interesting detail: the uniform unifTexture0 is not a “standard” LxEngine uniform. (How could it be? The transformation matrices are common to many shaders, as are properties like the geometry’s normals, but is a texture map ever going to be “standard” enough that the engine would know what to set?)

This is a custom uniform, but it still does not require any C++ code for the engine to set its value properly. We’ll get to that momentarily.

### Automatically setting the shader variables

The automatic setting of uniforms and attributes is done via calls to getActiveUniform and getActiveAttrib after the GLSL program is compiled. The MaterialClass class wraps the GLSL program and provides iteration functions that exemplify the use of these OpenGL calls:

    void
    MaterialClass::iterateUniforms (std::function<void(const Uniform& uniform)> f)
    {
        int uniformCount;
        gl->getProgramiv(mProgram, GL_ACTIVE_UNIFORMS, &uniformCount);
        for (int i = 0; i < uniformCount; ++i)
        {
            Uniform uniform;
            char    uniformName[128];
            GLsizei uniformNameLength;

            gl->getActiveUniform(mProgram, GLuint(i), sizeof(uniformName), &uniformNameLength,
                                 &uniform.size, &uniform.type, uniformName);

            if (uniformNameLength >= sizeof(uniformName))
            {
                throw lx_error_exception("GLSL program contains a uniform with too long a name size!");
            }
            else
            {
                uniform.name     = uniformName;
                uniform.location = gl->getUniformLocation(mProgram, uniformName);
                f(uniform);
            }
        }
    }

The LxEngine internal rasterizer code, after compiling a GLSL shader for the first time, iterates over the uniforms and attributes to generate the set of values that need to be set whenever that material is made active. The set of “instructions” necessary to set those values is encapsulated in a std::vector<std::function<void()>> – which, in effect, allows a sort of dynamic code generation at the expense of a bit of overhead from the std::function calls. For the purposes of LxEngine, the flexibility and simplicity definitely win out over the efficiency loss.
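To make the pattern concrete, here is a self-contained sketch (the names below are invented for illustration and are not the LxEngine API): the generation step runs once per shader and produces one closure per recognized uniform; activation simply replays the closures.

```cpp
#include <functional>
#include <string>
#include <vector>

struct FakeUniform { std::string name; };

// Runs once, after shader compilation: decide what each uniform needs and
// capture that decision in a closure.  (Here the "GL calls" just log.)
std::vector<std::function<void()>>
generateInstructions (const std::vector<FakeUniform>& uniforms,
                      std::vector<std::string>& log)
{
    std::vector<std::function<void()>> instructions;
    for (auto& u : uniforms)
    {
        if (u.name == "unifProjMatrix")
            instructions.push_back([&log]() { log.push_back("set proj matrix"); });
        else if (u.name == "unifViewMatrix")
            instructions.push_back([&log]() { log.push_back("set view matrix"); });
        // ...unrecognized names would fall through to the parameter look-up...
    }
    return instructions;
}

// Runs every time the material is activated: no name matching, just replay.
void activate (const std::vector<std::function<void()>>& instructions)
{
    for (auto& f : instructions)
        f();
}
```

The per-activation cost is one indirect std::function call per uniform – the “bit of overhead” traded for not re-matching uniform names on every activation.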

For example, below is a code snippet from the shader attribute instruction generation function (or see the latest version of the material source code for more details):

    std::function<void()>
    Material::_generateInstruction (RasterizerGL* pRasterizer, const Attribute& attribute, lx0::lxvar& value)
    {
        ...

        if (attribute.name == "vertNormal")
        {
            return [=]() {
                auto& vboNormals = pRasterizer->mContext.spGeometry->mVboNormal;
                if (vboNormals)
                {
                    gl->bindBuffer(GL_ARRAY_BUFFER, vboNormals);
                    gl->vertexAttribPointer(location, 3, GL_FLOAT, GL_FALSE, 0, 0);
                    gl->enableVertexAttribArray(location);
                }
                else
                    gl->disableVertexAttribArray(location);

                check_glerror();
            };
        }

        ...
    }

### Setting a custom uniform

The non-standard unifTexture0 uniform is set somewhat differently. The material definition – in addition to the vertex and fragment shaders – also includes a simple JSON parameter description file. In this case, it contains only one parameter, the texture for unifTexture0:

    {
        parameters : {
            unifTexture0 : "..."
        }
    }

The _generateInstruction() method loops over all unrecognized uniform names and searches for a user-specified parameter value for each. In this case, it finds “unifTexture0” both as an unrecognized uniform and as a value in the parameter mapping.

Since the information about the uniform also includes the data type (GL_SAMPLER_1D), LxEngine can figure out that it should interpret that string value as an image filename, load that file and store it in the texture cache, and generate an instruction to set that texture when activating the material:

    else if (uniform.type == GL_SAMPLER_1D)
    {
        auto filename = value.as<std::string>();

        TexturePtr spTexture = pRasterizer->mTextureCache.findOrCreate(filename);
        GLuint textureId = spTexture->mId;

        // Activate the corresponding texture unit and set *that* to the GL id
        return [=]() {
            const auto unit = pRasterizer->mContext.textureUnit++;

            // Set the shader uniform to the *texture unit* containing the texture
            // (NOT the GL id of the texture)
            gl->uniform1i(loc, unit);

            gl->activeTexture(GL_TEXTURE0 + unit);
            gl->bindTexture(GL_TEXTURE_1D, textureId);

            // Set the parameters on the texture unit
            gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, mFilter);
            gl->texParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, mFilter);
            gl->enable(GL_TEXTURE_1D);
            check_glerror();
        };
    }

### Adding simple shaders should be simple

The point really is that adding a simple shader, like this toon shader, is simple to do. The new material system in LxEngine makes it trivial: common uniforms and attributes are set up automatically, and the mechanism for specifying custom uniforms is quite easy.

The objective is an engine designed to make experimentation and research simple.

The toon shader run with an alternate texture (i.e. same material “class” / shader program, different material “instance” / parameterization) displayed with a 2-pass, 9×9 Gaussian fullscreen blur active.

### What’s Next? LxEngine Tutorial 4

I’m currently working on cleaning up and writing a good description of “Tutorial 4” of LxEngine. I want to add a couple more effects first to make the tutorial feel a bit more substantial (perhaps shadow mapping?), but I would also like to get a finished tutorial out the door. As a preview, the fourth tutorial will include at least the following: writing an application via Javascript, geometry generated from scripts, multipass rendering, multithreading, time-lapse events, and…well, probably more if I don’t hurry up and finish this off!

Written by arthur

March 22nd, 2012 at 10:29 pm

Posted in lxengine

## Shader Builder Progress

Significant progress has been made on the LxEngine ShaderBuilder.  The builder now supports Phong shading and procedural patterns such as tile, spot, diamond, and wave.

Below is a quick, low-quality demo video of the work-in-progress LxEngine Tutorial 3, which loads a Blender model and allows the user to cycle through a set of shaders to apply (each material is defined via a concise JSON description in the XML file):

### Video

Note how the specular highlights on the individual tiles of the checker patterns are not the same across the red checker materials. This really is a nested procedural! Each tile in the checker not only gets a color, but has its own Phong specification. Also check out the bright highlights on the last Phong checker: that’s actually another level of nesting, where a border pattern adds much brighter specularity to the edges of each tile.
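The JSON descriptions themselves aren’t reproduced here, but purely as an illustration of the nesting idea (the node and parameter names below are invented, not the actual ShaderBuilder schema), a checker whose tiles carry their own Phong specifications might be described along these lines:

```
{
    "pattern" : "checker",
    "color0"  : { "pattern" : "phong", "specular" : [0.2, 0.2, 0.2] },
    "color1"  : { "pattern" : "phong", "specular" : [0.9, 0.9, 0.9] }
}
```

Any slot that accepts a plain color can instead accept another node, which is what makes the per-tile Phong (and the border-within-a-tile) possible.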

### Stills

Here is the Stanford bunny shaded with a checker pattern with a nested wave pattern:

Stanford Bunny

Here’s the Suzanne model from Blender, shaded with the normal-based shader:

Blender's Suzanne

Finally, here’s the classic Utah Teapot with a spot pattern:

Utah Teapot

### What’s Next?

I have a host of todo’s lined up, but…any reader suggestions on what next to add to LxEngine? I’m looking for something that – while still somewhat feasible for a single person to implement – would help the engine stand out as having potential to be a top-of-the-line engine someday.

• Continue the shader work and add a Tutorial 4 with even more advanced multi-pass, multi-layer rendering and animation?
• Further Bullet Physics integration to demo how that library can easily and effectively be used within LxEngine?
• A miniature MineCraft procedural world sample with an infinite world with a sky, rain, and snow since MineCraft is all the rage?
• A simple FPS to demo a complete game with LxEngine?
• Something completely different?

Written by arthur

July 1st, 2011 at 1:26 pm

## Per-Face Smooth/Flat Shading in GLSL

### Per-Object

Flat shading uses the same normal across the whole face of an object.  This is useful, for example, when rendering a cube: the normal should be even across each face with a hard edge between each of the six faces.  Smooth shading on the other hand is useful for a sphere: the normal is blended across each sample on the face, giving the appearance of a smooth curve even though the sphere is composed of a discrete tessellation.

In an OpenGL fixed-function pipeline, the glShadeModel() function can be used to control the shading on a per-object basis, though it cannot be changed within a glBegin()/glEnd() block.  In a GLSL shader-based pipeline, the “flat” keyword (and the deprecated “varying” keyword) can be used to control the interpolation of a particular attribute between shader stages, but toggling between interpolation types requires separate shaders (i.e. the interpolation must be the same across the entire object).  Furthermore, using the “flat” keyword requires setting up the provoking vertex correctly for the model – which means more processing on the loaded mesh before it can be rendered.
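For reference, the per-object GLSL route looks roughly like this (an illustrative fragment, not LxEngine source):

```glsl
// With "flat", the fragment shader receives the attribute unmodified from the
// provoking vertex of each primitive rather than an interpolated value.
flat in vec3 fragNormal;

// Host side (OpenGL 3.2+), selecting which vertex of each primitive provides
// the flat value:
//
//     glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
//
// This is why the mesh data must be arranged so the "right" vertex leads each face.
```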

### Per-Face Smooth

An interesting case is a cylinder: the caps should be shaded flat with a uniform normal, but the length of the cylinder should be smooth like a sphere.  How can this be rendered?  One option is to break the object into two sub-objects: one for the caps and one for the length of the cylinder.  However, I wanted to render the object via a single shader and a single object.

The input data…

LxEngine loads .blend models from Blender directly (no import/export – the .blend format is supported natively in LxEngine).  One feature in Blender is the ability to control smooth/flat shading on a per-face rather than per-model basis.  This is stored as a flag on the face data in the .blend file (in Blender SDNA terms: the 8-bit “flag” field in the “mface” array of a “Mesh”).   Therefore, Blender can be used to create flat-shaded caps on a cylinder and smooth shading along its length.  The input data is available.

Rendering…

One solution requires GLSL 1.50 or greater, but is quite simple:

• Create a 1D texture with a single float channel of width = number of faces for each object
• For each texel, set the value to 0.0 if the face should be flat and 1.0 if the face should be smooth shaded
• Query the texture in the geometry shader for each face via (gl_PrimitiveIDIn + .5) / textureSize(sampler, 0)
• If the value of the sample is < 1.0, then compute a flat normal for the face and pass that to the next stage for all the face’s vertices

(The above is the principle: it may be prohibitively expensive to literally create a unique 1D texture for all objects in the scene.)
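The steps above can be sketched as a geometry shader (illustrative GLSL 1.50, not the LxEngine source – the sampler and attribute names are invented):

```glsl
#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform sampler1D unifFaceFlags;   // one texel per face: 0.0 = flat, 1.0 = smooth

in  vec3 geomNormal[];             // smooth per-vertex normals from the vertex shader
out vec3 fragNormal;

void main()
{
    // Sample the per-face flag at the texel center for this primitive
    float flag = texture(unifFaceFlags,
        (gl_PrimitiveIDIn + .5) / textureSize(unifFaceFlags, 0)).r;

    // A flat normal computed from the triangle edges
    vec3 flatNormal = normalize(cross(
        gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz,
        gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz));

    for (int i = 0; i < 3; ++i)
    {
        fragNormal  = (flag < 1.0) ? flatNormal : geomNormal[i];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

(A real implementation would compute the flat normal in the same space as the smooth normals; the sketch assumes the positions are still in a lighting-friendly space.)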

### The Result

In the image below, the cylinder is a single mesh (i.e. vertex array object) but is partially flat shaded and partially smooth shaded.  The spheres are fully smooth shaded, the cubes fully flat shaded, and the cylinders are a mix. The shading model data comes directly from the .blend file – no extra work on the artist’s or programmer’s behalf.

Flat and smooth shading within a single mesh

A nice side-effect of this processing is that the shading reflects what occurs in Blender: no additional information needs to be tagged to the object – what the artist created in Blender should be reflected in the LxEngine renderer.

### The Code

(TBD…still working on preventing WordPress from butchering posted code snippets)

Written by arthur

June 15th, 2011 at 6:49 pm

A prototype-quality BlendReader class has been integrated into LxEngine.  The interface is small and simple.  Part of the core vision for LxEngine is excellent usability for the development team, which means removing steps from the development process that can be automated.  Direct support for the Blender file format is an exciting addition in this regard.  No more export step: simply save the updated file and run the application.

I call the code ‘prototype quality’ at this stage because I know it won’t work correctly on .blend files from 32-bit systems or from big-endian systems.  Both issues will be trivial to solve but have not yet been addressed.  There’s also likely a bit of room for optimization, but for the most part I doubt that matters for anything but massive scenes (in which case it’ll likely be better not to load a .blend file directly, but rather some leaner format).

If nothing else, the .blend file format is fairly interesting in itself.

### Usage

The BlendReader interface is trivial to use:

1. Create the BlendReader object
2. Call reader.open(std::string filename)
   1. This call opens the file, reads the blend file’s “DNA” structure, and builds indices so objects can be read out of the file easily
3. Call reader.getBlocksByType(std::string type)
   1. This returns a list of info about the blocks in the file of the given type – for example, “Mesh” or “Scene”
4. Read an individual block in as an object
   1. This takes the address of a block and reads it in as a typed object.  It’s not a C++ type, but rather a wrapper on the block that lets the user grab fields by name without any error-prone pointer arithmetic
5. Call obj.field<type>(name, index)
   1. This reads a particular named field out of the object and casts it to the given type.  Of course the caller needs to get the type right – but this is inevitable, as at some point the opaque chunk of binary data needs to be cast into native C++ types
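The field-by-name idea can be sketched in isolation (hypothetical types, not the actual BlendReader internals): a name-to-byte-offset table, built from the file’s DNA, replaces manual pointer arithmetic on the raw block.

```cpp
#include <cstring>
#include <map>
#include <string>
#include <vector>

// A stand-in for the typed-object wrapper described above.
struct TypedBlock
{
    std::vector<char>          bytes;     // the raw block data
    std::map<std::string, int> offsets;   // field name -> byte offset (from the DNA)

    // Copy sizeof(T) bytes out of the block at the field's offset.  The caller
    // supplies T, exactly as with obj.field<type>(name, index).
    template <typename T>
    T field (const std::string& name, int index = 0) const
    {
        T value;
        std::memcpy(&value, &bytes[offsets.at(name) + index * int(sizeof(T))], sizeof(T));
        return value;
    }
};
```

The caller still has to supply the right type, but the offsets come from the file itself rather than from hand-written struct layouts.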

### Example Code

Here’s the chunk of prototype code that loads the blend files into an LxEngine document (using the LxEngine Mesh structure):

    float normalizeShort (short s)
    {
        return float(s) / float(std::numeric_limits<short>::max());
    }

    Mesh*
    loadBlendMesh (BlendReader& reader, std::string filename)   // (signature reconstructed; name assumed)
    {
        if (reader.open(filename))
        {
            Mesh* pMesh = new Mesh;

            auto meshBlocks = reader.getBlocksByType("Mesh");

            if (meshBlocks.size() != 1)
            {
                lx_warn("More than one mesh found in .blend file.  Processing only the "
                        "first one that is found.");
            }

            auto spObj = ...;            // read the first "Mesh" block as an object
            auto numVerts = spObj->field<int>("totvert");
            auto numFaces = spObj->field<int>("totface");

            pMesh->mVertices.reserve(numVerts);
            pMesh->mFaces.reserve(numFaces);

            pMesh->mFlags.mVertexNormals = true;

            auto spVerts = ...;          // read the vertex array block as an object
            for (int i = 0; i < numVerts; ++i)
            {
                Mesh::Vertex v;
                v.position = spVerts->field<point3>("co", 0);

                // Normals are encoded as shorts
                v.normal.x = normalizeShort( spVerts->field<short>("no", 0) );
                v.normal.y = normalizeShort( spVerts->field<short>("no", 1) );
                v.normal.z = normalizeShort( spVerts->field<short>("no", 2) );

                pMesh->mVertices.push_back(v);
                spVerts->next();
            }
            spVerts.reset();

            auto spFaces = ...;          // read the face array block as an object
            for (int i = 0; i < numFaces; ++i)
            {
                Mesh::Quad q;            // (declaration missing from the posted snippet; type name assumed)
                q.index[0] = spFaces->field<int>("v1");
                q.index[1] = spFaces->field<int>("v2");
                q.index[2] = spFaces->field<int>("v3");
                q.index[3] = spFaces->field<int>("v4");

                pMesh->mFaces.push_back(q);
                spFaces->next();
            }
            spFaces.reset();

            return pMesh;
        }
        else
        {
            lx_error("Could not open file '%s'", filename.c_str());
            return nullptr;
        }
    }

### Full Source Code

The latest code (assuming future submissions haven’t moved it) is available here on github.


Written by arthur

December 21st, 2010 at 10:05 am

Posted in lxengine

From the world of unexciting screenshots, I present the following:

First run at loading a .blend file

Ok, the .blend loading application doesn’t do much at this point.

It’s simply a console app that pulls down the “SDNA” file structure index in the .blend file and identifies the number of “Mesh” objects in the file.   The code is all based on the excellent documentation available on Jeroen Bakker’s “The mystery of the blend” web page.
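As a small concrete taste of the format (a standalone sketch, not the sandbox code), the fixed 12-byte header that precedes the block list is easy to decode: the magic “BLENDER”, a pointer-size character (‘_’ = 4-byte, ‘-’ = 8-byte pointers), an endianness character (‘v’ = little, ‘V’ = big), and a three-digit version:

```cpp
#include <stdexcept>
#include <string>

struct BlendHeader
{
    int  pointerSize;    // 4 or 8
    bool littleEndian;
    int  version;        // e.g. 254 for Blender 2.54
};

BlendHeader parseBlendHeader (const std::string& bytes)
{
    if (bytes.size() < 12 || bytes.compare(0, 7, "BLENDER") != 0)
        throw std::runtime_error("not a .blend file");

    BlendHeader h;
    h.pointerSize  = (bytes[7] == '_') ? 4 : 8;
    h.littleEndian = (bytes[8] == 'v');
    h.version      = std::stoi(bytes.substr(9, 3));
    return h;
}
```

Those two middle characters are exactly what make .blend files from 32-bit or big-endian machines different on disk: the blocks that follow contain raw pointers and native-endian integers from the machine that saved the file.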

The SDNA data structure creates a self-descriptive file format: it contains a list of the actual named data structures used in the file, along with their types, sizes, and layouts.  This adds a layer of complexity to the import process (i.e. there is no hard-coded layout to simply copy data out of), but it also allows a more intelligent importer to be written – one that can dynamically locate data even if the overall format changes, so long as the subset of data of interest does not.

Given the vision for LxEngine as an extremely usable engine rather than a cutting-edge performance engine, direct .blend support is an excellent addition.  Also, given the self-descriptive nature of the .blend file format, keeping the loader up to date shouldn’t be too much of a maintenance headache (and it will likely never be a full loader, but rather will pull out only the relevant subset of data, which makes the job easier).

The code is low-quality, sandbox code at this point, but is available on github here, for those interested.

If nothing else, this early work based on Jeroen Bakker’s excellent documentation implies that direct .blend support is certainly feasible.

Written by arthur

December 17th, 2010 at 3:12 pm

## Terrain in Blender

A simple render from experimenting with multi-texturing in Blender 2.5:

Blender 2.5 Quick Terrain Experiment

The above was created by roughly following the tutorial posted on the Blender 3D Noob to Pro wikibook.

Testing out heightmaps in Blender

I experimented a bit with heightmaps in Blender 2.5 as well – there isn’t native support for image-based heightmaps, but it’s fairly easy to quickly preview one: (1) create a dense 3D grid, (2) add a material/texture with the heightmap, (3) set the texture’s “RGB to Intensity” flag, turn off the diffuse color influence, and enable the Displace influence.

Written by arthur

December 13th, 2010 at 3:13 am

## Making Maps

Part of the aim of LxEngine is to provide an adventure / real-time strategy sample game.  As part of the conceptual design for this sub-project, a world map needs to be defined.  Again, (at this point) this map is only needed for conceptual design – not in-game use.

So what’s the best tool for creating a map?

The answer I experimented with was Blender:

Using the Blender 2.5 sculpt tool and a high-res mesh, I can model the map – not just in 2D, but with elevations.  I then used vertex paint mode to roughly denote the major regions of the land mass.

All in all, this seemed to be an okay but not great means of producing a map.  Some of the problems: (1) it takes a very high-res model to be able to sculpt detail on a large-scale map, which means performance is not great; (2) vertex paint mode is not ideal for marking regions because – as far as I can tell – there’s no way to get vertex colors plus lighting to display in the viewport; and (3) there is no three – after I spend some more time on it, I may have some more insightful thoughts on whether it’s a good tool for conceptual design of world maps.

Note that the other alternative I’m considering is Inkscape, creating the map as an SVG graphic.  This may make sense, as a 3D representation may be too detailed and may reduce the freedom of change that is a necessary part of the conceptual design effort.

Written by arthur

November 28th, 2010 at 1:12 pm

## UV Unwrapping

I continued the exercise from the other day, this time aiming to create a “real” UV texture map rather than applying a seamless texture.  The exercise this time was to model a simple box mesh representing a tiny shed of sorts and to create a UV texture map for it.  The goal is a trivial box building not unlike one from the old game Bard’s Tale…

Screenshot from the old Apple IIgs game
The Bard’s Tale

### Mesh

The mesh itself is incredibly simple: start with a box, extrude to the top.  Join the vertices at either end of the top extrusion – and done.

### UV Unwrapping

Given the simple mesh, choosing the seams for the unwrapping was not difficult either.

### Texture Map

From there, I then needed to create the actual texture map.  This is where the work started.

First problem: Blender 2.5 Alpha 2 appears to have an Export UV Layout feature which exports the UV layout as an SVG file.  GIMP can open SVG files, so no problem…except the Export UV Layout command didn’t seem to actually save anything in Blender 2.5 Alpha 2.  A little searching on Google and the Blender Artists forums convinced me there was at least a strong enough probability that this was a bug, not a user error on my part, that I decided to look for a workaround.  The workaround: stretch the Blender UV layout window, press ALT+PrintScreen, paste from the clipboard into GIMP, then crop and resize to 512×512.  Not a perfect solution, but then again, this is an exercise, not production work.

The next problem was making a decent image.  I went to Google Images, searched for an old door, and came across an image to use.  (Note: I looked and could not find copyright, licensing, or even original author information for the image.  Technically, I believe this means I should not have used it…but I did.  In the future I should avoid this by either using my own photos as starting points or using images from someplace like Flickr, where images usually come with associated licensing info.)  The base image I used was from here, with a lower-res copy below:

Base stock photograph used for
the texture map

Then, using GIMP with a lot of resizing, rotating, cropping, smudging, cloning, and a little dodge and burn, I managed to create a texture map:

Custom texture map made from a highly sliced
and diced stock photograph

### Result

The result was what I was aiming for.  It is not overly impressive – and it could definitely be improved even in its simple state – but it is in fact a custom texture-mapped model of a simple shed, which is pretty close to what I had envisioned.

Final Model

Not perfect – but it was what I was aiming for.

Written by arthur

July 7th, 2010 at 3:39 pm

## Blender 2.5 and UV Unwrapping

I spent some time attempting to teach myself a bit more about 3D modeling, since I’m taking a break from graphics programming but have a hard time giving up learning about graphics entirely…

### Blender 2.5

I’m using Blender 2.5 Alpha 2.  The new UI in Blender is amazingly improved.  I don’t think I’ve ever seen such a vast improvement in a free software application before.  I can’t compliment the team enough: this is a tremendous step forward.  As a telling example, I have very limited Blender experience (see one of my earlier blog posts), but for the first time, in this program’s vast set of capabilities, I was often able to find what I was looking for myself without resorting to Google.  Finding tools in the UI without Google may sound like an “obvious” requirement for good software, but for an application as complex as Blender, it’s fair to say that finding the desired functionality is rarely intuitive for beginners.  Again, kudos to the Blender team.

(Warning: there are some stability problems still if you’re giving 2.5 a try.  This is an Alpha 2 release, after all.)

### Modeling Tutorial

I also happened across an excellent tutorial on KatsBits.com that takes the reader from start to finish on a static Blender model.  (Note: the tutorial uses a pre-2.5 release of Blender, so the UI is very different.  Again, though, with only minimal help from Google, I was able to find the 2.5 equivalents of everything being done.)

It begins with a cube, shows how to do basic cuts and extrudes, then moves on to setting up a UV map for the model and applying a texture.  The reason I enjoyed this tutorial so much was that (to a non-modeler beginner like me) it gave the best, concise explanation of UV Unwrapping.

I highly recommend reading the whole tutorial – the original author deserves that, and the images give essential context – but here is the core idea that triggered the insight for me:

> The principle involved here is the same as if you were to cut a cardboard box down one side, laying the resulting top, bottom and sides flat on the ground so all the parts of it were spread out. A mesh is treated much the same way where-ever possible, edges are placed around a mesh to facilitate a similar end result.

In other words, a seam is a cutting line, and the non-seams are where the Blender unwrapping algorithm will attempt to “unfold” the model.  For example, to unfold the top of a box (but keep it connected to the box as a whole), three of the four top edges should be marked as seams, and the top will unfold along the unmarked edge.  Any non-planar collection of faces not adjacent to a seam will be more or less flattened or squashed down onto a plane.

The second insight is that more seams (likely) mean a more linear, clean unfolding, but the trade-off is that the UVs become discontinuous across each seam.

I suspect this was likely obvious to anyone who has spent any time 3D modelling before, but for a beginner like me it was good to finally “get it” about UV Unwrapping.

### Results

The results?  Nothing fancy: just a chair similar to the one from the above tutorial.

Note: the wood texture was adapted in GIMP from a wood photograph available here (thanks, bittbox).  Using the GIMP Offset command, a bit of the Clone tool, and some color level adjustments, I managed to make an acceptable-quality seamless texture out of the photograph.  I should also note that there’s no real need for a seamless texture in the final model, given the UV unwrapping; making the texture seamless (and with better results than the built-in Make Seamless script) was a different, separate exercise.

Written by arthur

July 7th, 2010 at 12:00 am

## Status Update

While most updates thus far have come at the completion of a sub-feature of sorts, this update is more of a general status report (largely due to my lack of time of late to complete a noteworthy sub-feature).

### Project Setup

The lxengine code now compiles in Release builds.  This required a short Perl script to generate the Visual Studio vcproj files, since the code previously only built correctly in Debug.  The long and short of it is that I wanted to minimize the time spent on project setup, and Visual Studio seems more tailored to creating a new project than to adapting an existing one.  With this project, I intend to have many small sub-projects with identical build settings (linking to the same libraries, producing output files in a similar directory structure, referencing the same data files), as well as the flexibility to easily relocate project source and destination files (without updating dozens of project files).  It seemed easier to write a short script to auto-generate near-skeleton vcproj files and to use Visual Studio property sheets (vsprops) to manage all the common settings.

In any case, I now have a profile-able Release build of the lxengine code and hopefully will need minimal project setup to add new samples and tests.  If it turns out I do, CMake seems to be a growing favorite among project build tools these days; that may be the best eventual destination.

### Performance

With the Release build, I experimented with several optimizations that I should write up: namely, discovering several ways not to do transform caches in a ray tracer (the uncached version beat both of my naive transformation caches), and learning that if you plan to squeeze every bit of performance out of the Microsoft VC 9.0 compiler, you should be prepared to use the compiler-specific __forceinline keyword (the C++ inline keyword may not be enough to sway cl.exe’s internal algorithm into actually inlining the code).

This project, as all projects do, is reminding me of how features compound upon features.  In this case, the ray tracer is (in my mind) already “too slow” (despite performance not being a primary objective of this project) and soon I’ll need to get interested in spatial indexing structures, fully optimized intersection algorithms, multi-threading, and other performance-oriented functionality.  Or this might be the time to make the leap to progressive rendering using hybrid rasterization / ray tracing techniques…

### Modeling

I would like to add some experiments with procedural geometry to the project.  Procedural geometry at the object level for entire meshes (e.g. a tree), as well as geometry and maps for materials (e.g. wood bark), would be interesting.  I’ve looked into Perlin noise as well as an algorithm for procedural natural-rock geometry and textures, though I haven’t experimented with either yet.

For now, with the bit of time I’ve had, I’ve been a bit distracted by polygonal mesh modeling with Blender.  This isn’t an area I have any expertise in.  I did stumble upon the page about the newest Blender open movie project, Durian; it will be interesting to see the results.  As for myself, I’m still pecking at the keyboard, trying to learn the Blender interface to construct the building facade that I’ve begun:

I found what seems to be an excellent set of freely available Blender video tutorials by David Allen Ward (homepage).  I don’t think many tutorial writers appreciate the value of actually watching someone work.  A fundamental part of the process is an appreciation for how much tweaking goes into it, which actions are done over and over, and all the details that are not seen in the final product.  Anyone who has learned a task by literally looking over someone else’s shoulder as they work knows that there are all sorts of practical tips, tricks, and general information that get picked up this way and otherwise would likely never be shared.

Written by arthur

December 22nd, 2009 at 7:46 pm

Posted in lxengine
