## August

August 13th, 2012 at 7:27 am      no comments

Checking in again!

A new job is keeping me too busy to do much with the site. That’s the bad news.  The good news is the new job has me doing a lot of enjoyable graphics work.

I’m not sure when I’ll be able to revamp / port / return to the work on athile.net, but it is in hibernation, nothing worse. I enjoy the graphics work I’ve done here far too much to consider flat-out abandoning it.

Written by arthur

August 13th, 2012 at 7:27 am

Posted in company

## What’s going on?

June 27th, 2012 at 12:46 pm      no comments

About a month and a half since the last post –

I’m busy working. However, I’m exploring the possibility of moving “Athile Technologies” from a pseudo-company hobby project of mine to its own sink-or-swim real business. There are a lot of details to work out. This site may go away or it may stay; one way or another, though, the code is going to continue to exist and be developed!

Written by arthur

June 27th, 2012 at 12:46 pm

Posted in company

## Vacation (from the blog)

May 12th, 2012 at 10:35 am      1 comment

I’m realizing work is keeping me too busy to maintain the habit of posting here with fair frequency.  Work is likely going to be keeping me locked up for at least another month or two.  The blog is not dead! But it might be mid-June before the next post!

Written by arthur

May 12th, 2012 at 10:35 am

Posted in company

## Concerning Wheel Reinvention

May 6th, 2012 at 12:15 pm      no comments

Here’s a development pattern I’ve noticed.  This linear progression certainly doesn’t apply in all cases – it branches off in different directions in many of them – but at least sometimes the following pattern seems to occur…

### Wheel Reinvention and Understanding Roundness

1. Recognize There’s Already a Solution Out There
Find yourself solving a problem that falls into a general pattern – the kind of pattern that could be solved with a reusable library or external tool.  You note that to some degree you are most likely reinventing a wheel.  Or perhaps in reading about some other library or tool, you note that you have been trying to invent this yourself.  Either way, someone has already solved this class of problem and you’re potentially inefficiently redoing that work: should you be?
2. Research a Complete, Existing Solution
Read some blog posts, check out some tutorials, skim the API docs, and experiment with an existing library or tool that solves the problem you keep facing.  It’s a pretty good library.  People are using it.  It looks like it’s got decent documentation and is actively supported. This looks like a potentially good fit.  Your time is valuable and it sure seems a lot smarter to utilize this rather than reinvent.
3. Start Using It and Get Frustrated by the Solution’s Complexity
After spending some time with the existing solution, you grow frustrated that you can’t quite wrap your head around everything the API/tool is trying to do, that it seems to have too many options, that it tries to solve multiple problems, that it doesn’t quite fit the problem as you would like – that for one valid or not-so-valid reason it doesn’t seem to be “right” for what you need.
4. Design a Simpler Solution
Go back to your own code. Use the general concepts from the existing library to write / improve your own solution.  Your own solution is one that you do understand, that is simpler, and more neatly fits the problem as you’ve conceptualized it.  Yes, you’ve reinvented the wheel, but actually it’s likely now a better solution than it was at step 1 (at least in theory, assuming you have the time to complete and test the library) and is tailored nicely to your needs.
5. The Requirements for Your Solution Grow
Not long after, you find your simpler solution growing in requirements, with complexity gradually creeping in.  Long story short: the task is no longer a clean re-envisioning of that existing solution you read about; it’s turning into a burdensome, parallel reimplementation.  The price of reimplementation is beginning to outweigh the advantages of the simple solution you had a while ago.
6. Use the Existing Solution
Now you get it.  You’ve spent enough time with the problem class to see why the original authors of that existing library did it the way they did.  Maybe that existing solution still isn’t perfect in your mind, but at least you see why the features exist.  You now have a fairly strong understanding of the class of problem, even if you don’t like all the details of this specific implementation.  This is the key, though: you now know enough to understand the general problem space of the existing solution.  You use it instead, get over your frustration with its idiosyncrasies when you think of how much effort reimplementing everything yourself would be, and in general are sufficiently happy with it as a solution.
7. New Team Members Wonder Why Such a Complex Solution is Being Used
Now we come full circle. You get why this imperfect, seemingly over-complex tool is being used; they don’t.  You’ve been through Steps 1-6; they haven’t.  They wonder why you would recommend such a needlessly complex tool.  How do you explain to them what you learned in those steps?  And if that were easy to answer, couldn’t that same information have saved you from going through those early steps yourself and brought you here immediately?

Part of me thinks that the reinvention of the wheel in programming is inevitable: it’s part of the learning process.  However, at the same time, there’s a significant difference between pure reinvention and informed rediscovery (i.e. pulling solutions from thin air is really hard; applying solutions learned elsewhere to your own instance of the problem is much easier).

At the same time, I think there are approaches that acknowledge the need for some degree of reinvention without offering huge jumps in complexity between the new person’s knowledge base and the full-fledged, robust solution.

I wonder whether, for many classes of problem, a graduated set of APIs is the solution: i.e. (1) here’s the basic library that will introduce the concepts and get you started quickly, though you will hit limitations with any long-term use; (2) here’s the intermediate version of the API that follows the same basic conventions and is a direct superset where possible; and (3) here’s the advanced API that does exactly what you need and solves all the corner cases, but demands a strong conceptual understanding of the problem class.  A problem with this approach is that the author of a type (1) API, in the process of implementing it, builds a knowledge base that makes it easy to implement a type (2) API – so he or she does…and so on with (2) and (3).  (As an aside, I fear OpenGL has to some degree gone from a “graduated” API to an advanced-only API: this is great for experienced graphics programmers, but a glBegin(GL_QUADS) example sure is an easier starting point for beginners.)

I think it’s useful to acknowledge that there are different stages of user knowledge when designing an API.  I’m not sure what the right answer is to balancing the learning curve against the power of the solution provided, but it’s an interesting topic.

Written by arthur

May 6th, 2012 at 12:15 pm

Posted in company

## What’s New

April 27th, 2012 at 9:48 am      no comments

No new work on LxEngine in the last couple weeks, unfortunately. Aggravating not being able to finish off that last chunk of work on normal mapping!

I’ve been busy consulting on a web project. I do have to say that taking up a totally different kind of programming (say, small website programming versus the lower-level desktop graphics) really does work well to break the tendency towards tunnel-vision in solving problems. For example, I still think JQuery is a fantastic example of an API that violates many of the “rules” about a good API that I previously held; yet, the API is great in many regards, so that’s a clear sign that those rules – like all rules – have exceptions.  A change in perspective is almost always helpful.

Also, I have been reading Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide at Christophe Riccio’s recommendation. So far it’s been an excellent book, reminding me of the pure math behind all those math classes I use. I have to say the Google e-book version could have used a little more editing, though…there are definitely numerous typographical errors in there that could/should be fixed.

Written by arthur

April 27th, 2012 at 9:48 am

Posted in company


## Normal Maps: WIP

April 13th, 2012 at 8:19 pm      1 comment

For the record, the below image is not yet correct, but here’s a first-pass at normal mapping in LxEngine:

The tangent vectors are calculated by GLGeom. The algorithm I’m using is somewhat…custom. I haven’t seen an implementation similar to how I’m attempting it – and thus suspect that I’ve missed a key point or two. That’s okay, however – this is for fun and to learn something.

The basic approach is to define a function F(p,v) = dUV: a function that takes a point on the surface of the mesh and a direction vector and returns the change in UV at that point in that direction. If we constrain this function to only be valid at vertices of the mesh (which is fine, since we’re only going to use it at vertex points), then we can define F(p,v) as the weighted average of all the edges adjacent to the vertex at p – with the weights corresponding to the dot product between v and the direction of each edge.  With F(p,v) defined, we can then find the tangent and bitangent by finding the directions v where F is at a local maximum. This can be done with simple calculus: where the derivative of a function is 0, the function is at a local maximum or minimum. Given that F(p,v) is a fairly simple weighted average, the derivative is not complex, and solving for where it is 0 is not complex either. Thus we end up with the directions in which U increases most and V increases most…which, until I locate the flaw in my understanding of the problem, are theoretically the tangent and bitangent directions.
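As a rough sketch of how I read the scheme above (the types here are stand-ins for illustration, not the actual GLGeom implementation): since the weights are dot products, the U component of F is linear in v, so the maximizing direction can be read off directly from the weighted sum of edge directions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in types for illustration; the real code uses GLGeom's
// point/vector classes.
struct vec3 { float x, y, z; };
struct vec2 { float u, v; };

vec3 normalize3(vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return vec3{ a.x / len, a.y / len, a.z / len };
}

// One edge adjacent to the vertex: its normalized 3D direction and the
// change in UV along it.
struct AdjEdge { vec3 dir; vec2 duv; };

// Because F(p,v) is a weighted average with weights dot(v, edge dir),
// its U component is linear in v: F_u(v) = v . (sum of du_i * e_i).
// Setting the constrained derivative over unit directions v to zero
// puts the maximum along that sum, which gives the tangent directly;
// the bitangent falls out the same way from the V deltas.
void tangent_from_edges(const std::vector<AdjEdge>& edges, vec3& tangent, vec3& bitangent)
{
    vec3 tu{ 0, 0, 0 }, tv{ 0, 0, 0 };
    for (const AdjEdge& e : edges)
    {
        tu.x += e.duv.u * e.dir.x;  tu.y += e.duv.u * e.dir.y;  tu.z += e.duv.u * e.dir.z;
        tv.x += e.duv.v * e.dir.x;  tv.y += e.duv.v * e.dir.y;  tv.z += e.duv.v * e.dir.z;
    }
    tangent   = normalize3(tu);
    bitangent = normalize3(tv);
}
```

For a vertex whose U increases along +x and V along +y, this yields tangent (1,0,0) and bitangent (0,1,0), as expected.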

We’ll see. For now, I have compiling code and a screenshot. That’s enough for today. Testing and debugging will follow.

Written by arthur

April 13th, 2012 at 8:19 pm

Posted in lxengine

## GLGeom v0.0.6 Released!

April 12th, 2012 at 5:06 pm      no comments

GLGeom v0.0.6 has been released!

GLGeom is the header-only C++ template math library developed in conjunction with LxEngine, modeled after and extending the GLM mathematics library. It provides strongly typed point, vector, and color classes, polygonal mesh and primitives classes (spheres, cylinders, etc.), bounding objects, intersection calculations, and more.

Release 0.0.6 contains many improvements and changes. The most significant changes may be to the glgeom_extension_primitive_buffer module, a module designed to provide a slightly higher-level, CPU-side take on a vertex array object: it stores a mesh (quads, triangles, lines, points) as well as optional arrays for face properties and vertex properties like normals, colors, and UVs. It now also includes some basic adjacency information. All of these properties can be set manually or, when possible, generated automatically.
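As a rough sketch of the kind of CPU-side data described above (the field names here are hypothetical and illustrative only, not the library’s actual layout):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of a primitive_buffer-like structure; the real
// glgeom::primitive_buffer layout differs in its details.
struct float3 { float x, y, z; };
struct float2 { float u, v; };

struct primitive_buffer_sketch
{
    std::string                type;           // e.g. "triangles", "quads", "lines", "points"
    std::vector<std::uint16_t> indices;        // faces expressed as vertex indices

    // Per-vertex properties (optional in the real module)
    std::vector<float3>        positions;
    std::vector<float3>        vertexNormals;
    std::vector<float2>        uv;             // one channel shown; several are supported

    // Per-face properties
    std::vector<float3>        faceNormals;
};
```

The point of the design is that any of these arrays can be filled in manually or generated by helper functions like compute_face_normals().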

GLGeom v0.0.6

As it will be for some time, it’s still a very early release but feedback is greatly appreciated. Leave a comment on the blog!

### Example Code

// Use a utility method to create a basic shape
glgeom::primitive_buffer primitive = glgeom::create_sphere();

// Automatically compute face normals on the sphere
glgeom::compute_face_normals(primitive);

//
// Generate some UV coordinates
//
// Use a spherical mapper with a scale transform of 10.
//
glgeom::compute_uv_mapping(
    primitive, 0,
    [](const glgeom::point3f& position, const glgeom::vector3f& normal) -> glgeom::point2f {
        glgeom::point2f baseUV = glgeom::mapper_spherical(position);
        return glgeom::scale( baseUV, glgeom::vector2f(10, 10) );
    });

### And a look at the primitive buffer functions…

Obviously a little incomplete, but we’re still at 0.0.6!

//===========================================================================//
// GEOMETRY CREATION
//===========================================================================//

inline primitive_buffer     create_cube                 (void);
inline primitive_buffer     create_sphere               (void);
inline primitive_buffer     create_cylinder             (void);
inline primitive_buffer     create_cone                 (void);
inline primitive_buffer     create_torus                (void);

inline primitive_buffer     create_vertex_normals_mesh  (primitive_buffer& mesh);
inline primitive_buffer     create_face_normals_mesh    (primitive_buffer& mesh);

//===========================================================================//
// ITERATORS
//===========================================================================//

inline void                 iterate_indices             (primitive_buffer& primitive,
                                                         std::function<void (size_t faceIndex, glgeom::uint16* vertexIndices)> f);

//===========================================================================//
// BOUNDS
//===========================================================================//

inline void                 compute_bounds              (primitive_buffer& primitive, glgeom::abbox3f& bbox);
inline void                 compute_bounds              (primitive_buffer& primitive, glgeom::bsphere3f& bsphere);
inline void                 compute_bounds              (primitive_buffer& primitive, glgeom::abbox3f& bbox, glgeom::bsphere3f& bsphere);
inline void                 compute_bounding_box        (primitive_buffer& primitive);
inline void                 compute_bounding_sphere     (primitive_buffer& primitive);
inline void                 compute_bounds              (primitive_buffer& primitive);

//===========================================================================//
// NORMALS
//===========================================================================//

inline void                 compute_face_normals        (primitive_buffer& primitive);
inline void                 compute_vertex_normals      (primitive_buffer& primitive);
inline void                 compute_uv_mapping          (primitive_buffer& primitive,
                                                         size_t channel,
                                                         std::function<glgeom::point2f (const glgeom::point3f&, const glgeom::vector3f&)> f);


//===========================================================================//
// MISC
//===========================================================================//

inline void                 reverse_winding_order       (primitive_buffer& primitive);

Written by arthur

April 12th, 2012 at 5:06 pm

Posted in glgeom

April 12th, 2012 at 10:13 am      no comments

What I’ve been working on today…

### Created a stub for lxcore.lib

Currently, LxEngine is composed of the following:

• glgeom (header-only library of math functions)
• lxengine.lib (where the vast majority of the code is)
• lxengineapp.exe (a small front-end that drives the code in lxengine.lib)

What I’d like it to be is:

• glgeom (same as before)
• lxcore.lib (low-level, standalone functions and data types from lxengine.lib)
• lxframework.lib (the higher-level framework of Engine, Document, View, and Element classes)
• lxengine.exe (slightly larger front-end that drives the framework in a flexible, configurable form)

That’s a bit of an oversimplification, since there are also extension libraries like lxrasterizer.lib and plug-ins like soundAL.dll, which exist now and will continue to exist.

The general point, however, is that I want to make the framework “leaner.”  Right now it has too many specific features (mostly from early development, when things weren’t very modular); it should really be a rather bare but flexible MVC framework.  Those features belong in plug-ins and extensions. Even the stand-alone utility functions belong elsewhere (lxcore.lib). The end goal is a smaller framework: frameworks are useful right up until the point they become bloated and hard to understand (and thus defect-prone, with steep learning curves).

### GLGeom Error Handling

Long story short: all this time, I’ve been using “assert(0)” as the error handling mechanism in GLGeom to avoid tying in any dependencies.  That’s not a very sound approach…

So I’ve added a simple but flexible error handling mechanism instead: docs here.
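As a minimal sketch of the kind of replaceable, dependency-free handler I mean (the actual GLGeom mechanism may differ in its details; names here are illustrative):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Sketch of a dependency-free, replaceable error handler: the library
// reports errors through a single callback that clients can swap out.
namespace sketch
{
    using error_handler = std::function<void (const std::string&)>;

    inline error_handler& current_handler()
    {
        // Default handler quietly ignores errors; callers install their own.
        static error_handler handler = [](const std::string&) {};
        return handler;
    }

    inline void set_error_handler(error_handler h) { current_handler() = std::move(h); }

    inline void report_error(const std::string& msg) { current_handler()(msg); }
}
```

A client can then install a handler that logs, throws, or asserts, without the header-only library depending on any particular error-reporting framework.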

### GLGeom Compute Primitive Buffer Adjacency Info

Still a work in progress: I’m trying to port over some old code that is pretty fast at computing adjacency from a polygon soup.  This is also stressing GLGeom from a new angle, which is helping me fill in the blanks on some of the functionality in this currently-version-0.0.5 library.

Better get back to working on it…

Written by arthur

April 12th, 2012 at 10:13 am

## GLGeom UV generators

April 11th, 2012 at 11:42 am      2 comments

Per Dave’s request, I’ve recently added a new function to generate UV coordinates on a glgeom::primitive_buffer. The function name is glgeom::compute_uv_mapping().

The principle is simple: it’s a callback-based function that iterates over all the vertices and calls the callback to generate a UV coordinate. The existing glgeom_extension_mappers module already provides worker functions to use in the callback argument (e.g. mapper_cube, mapper_spherical).

### Images

Above is a spherical mapper with a 10x scale applied to the terrain geometry.

Above is a cube mapper with a 10x scale applied to the terrain geometry.

### Code

The syntax for generating a planar XY mapping is as follows:

glgeom::compute_uv_mapping(primitive, 0, [](const glgeom::point3f& p, const glgeom::vector3f& n) -> glgeom::point2f {
    return glgeom::scale( glgeom::mapper_planar_xy(p), glgeom::vector2f(10, 10));
});

Using namespace glgeom, it looks like this:

compute_uv_mapping(primitive, 0, [](const point3f& p, const vector3f& n) -> point2f {
    return scale( mapper_planar_xy(p), vector2f(10, 10));
});

For anyone living in the dark ages of pre-C++11, the above code reads as follows:

• Compute a UV mapping for “primitive”
• Store the computed UVs in channel 0
• Generate a lambda function as callback…
• …which takes in a 3d position and a normal and returns a 2d point (i.e. the UV coordinate)
• …which uses the GLGeom built-in “mapper_planar_xy” function to generate the UV
• …and lastly applies a scale factor of 10 to both the u and v coordinates

### API Design Commentary

I have to admit, I’m not thoroughly satisfied with the resulting code above. It’s a bit verbose and has a very high syntactic-boilerplate-to-content ratio. However, this is the best compromise I’ve come up with so far.

• The lambda notation is heavyweight: there’s almost as much code in the lambda signature as there is in the lambda body
• It’s possibly too general: in an ideal design, everything should have an immediately obvious place. Mapper transformations (scaling and rotation) are quite common, so it’d be nice to have a “built-in” argument for these rather than relying on the user to add them manually inside the lambda (possibly introducing a typo into the multiplication order)
• Is a UV coordinate a “point”? In theory, yes; in practice, UVs are often treated as generic tuples, making the glgeom::point2t type occasionally cumbersome

#### Why not allow function pointers to be passed in directly?

I’d like it better if syntax like this worked:

  glgeom::compute_uv_mapping(primitive, glgeom::mapper_cube);

But this doesn’t work well. I want to support mapper functions that require positions and normals (e.g. mapper_cube) as well as those that depend only on position (e.g. mapper_spherical). I also want to support lambda functions for inlined custom mappers. Overloading a compute_uv_mapping template method to automatically choose the right variation did not seem to work (VS2010, at least, doesn’t seem to handle overloading well with lambdas of varying type). The compiler doesn’t seem to automatically resolve to the right template overload in a way that provides the syntax I’d like.

With this approach, I ended up with compiler complaints about ambiguous type resolution.
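To sketch the issue (the type and function names here are stand-ins, not GLGeom’s real signatures): with overloads differing only in their std::function parameter, an unconstrained std::function converting constructor (as in VS2010’s standard library, as I understand it) makes both overloads viable for any lambda, hence the ambiguity. Newer standard libraries constrain that constructor, so a set like this resolves cleanly today:

```cpp
#include <cassert>
#include <functional>

// Stand-in types for illustration.
struct point3f  { float x, y, z; };
struct vector3f { float x, y, z; };
struct point2f  { float u, v; };

// The overload set in question: one variant for position-only mappers,
// one for mappers that also need the normal. On older compilers both
// std::function parameters accepted any lambda, making the call ambiguous.
point2f apply_mapper(const point3f& p, const vector3f&,
                     std::function<point2f (const point3f&)> f)
{
    return f(p);
}

point2f apply_mapper(const point3f& p, const vector3f& n,
                     std::function<point2f (const point3f&, const vector3f&)> f)
{
    return f(p, n);
}
```

With a conforming C++14-era std::function, each lambda is only convertible to the std::function whose signature it can actually satisfy, so the right overload is chosen; without that constraint, the compiler reports the ambiguity described above.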

#### Why not provide an explicit set of compute_uv_* methods?

The ugly boilerplate syntax would go away if a series of functions were provided:

  glgeom::compute_uv_cube_mapping(primitive, uv_transform);
  glgeom::compute_uv_spherical_mapping(primitive, uv_transform);
  glgeom::compute_uv_planar_xy_mapping(primitive, uv_transform);
  ...

What I like about this approach is that (a) the syntax doesn’t get in the way and (b) it leads to about as self-documenting code as possible.

What I don’t like is that (a) it requires a mirror function for every mapper defined in glgeom_extension_mappers and (b) it creates a dependency between the glgeom_extension_mappers module and the glgeom_extension_primitive_buffer module.

#### So…?

I much prefer the principle that a generic function to apply a mapper exists alongside, and independent of, the set of mappers that can be applied (i.e. something a bit more along the lines of functional programming; thank you, Javascript and JQuery, for teaching me how nice this approach can be!). With that principle in mind, I’ve ended up with the syntactically verbose lambda-based approach.

### What’s Next?

Hopefully, generating tangents and bi-tangents. I wrote a bit before about bump mapping and generating those values in the GLSL shader, but it would be nice to compute such data easily on the CPU side as well.

In any case, this will require me to add some functions to compute mesh adjacency information for a primitive buffer – a feature I’ve been meaning to add anyway.

Written by arthur

April 11th, 2012 at 11:42 am

Posted in glgeom

## Wireframe Overlay

April 10th, 2012 at 7:06 pm      no comments

Added a new effect to Tutorial 5: a single-pass surface and wireframe rendering effect. Note how the wireframe is anti-aliased, fades into the surface color, and displays without z-fighting.

The effect derives directly from the shader in the previously mentioned OpenGL 4.0 Shading Cookbook. The implementation there, according to the author, derives from the one presented in this nVidia whitepaper. I won’t go into the details of the effect (since the explanation is available in both the whitepaper and the book) and will only briefly comment on the implementation.

In short, it uses a geometry shader to compute the distance of each fragment from each edge of its triangle. The fragment shader then uses those distances to decide between the surface shading color and the wireframe edge color. A mix() call is used rather than a discrete choice so that the edges are antialiased. Because it’s a single-pass shader, there’s no chance of z-fighting.
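A CPU-side sketch of that per-fragment blend, for illustration (the actual effect lives in GLSL; the exp2(-2*d²) falloff is the curve from the nVidia whitepaper, assumed here since the book’s shader may use different constants):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct color3 { float r, g, b; };

// Given the fragment's distance to each of the triangle's three edges,
// blend the wire color over the surface color with a smooth falloff.
color3 blend_wireframe(color3 surface, color3 wire, float d0, float d1, float d2)
{
    float d = std::min(d0, std::min(d1, d2));   // distance to the nearest edge
    float i = std::exp2(-2.0f * d * d);         // 1.0 on the edge, falls toward 0 away from it
    // Equivalent of GLSL mix(surface, wire, i): a smooth blend instead
    // of a hard edge test, which is what antialiases the wireframe.
    return color3{ surface.r + (wire.r - surface.r) * i,
                   surface.g + (wire.g - surface.g) * i,
                   surface.b + (wire.b - surface.b) * i };
}
```

On the edge (d = 0) the result is pure wire color; a few pixels away the intensity has decayed to effectively zero and the surface color shows through.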

### Setting the “ViewportMatrix”

A quick note since the OpenGL 4.0 Shading Cookbook does not explain how to set up the “ViewportMatrix”. It’s pretty simple, but just to clarify, here’s the code to set up the viewport matrix:

GLint viewport[4];
gl->getIntegerv(GL_VIEWPORT, viewport);

float halfWidth = float(viewport[2]) / 2.0f;
float halfHeight = float(viewport[3]) / 2.0f;

glm::mat4 m(
    glm::vec4(halfWidth, 0.0f,       0.0f, 0.0f),
    glm::vec4(0.0f,      halfHeight, 0.0f, 0.0f),
    glm::vec4(0.0f,      0.0f,       1.0f, 0.0f),
    glm::vec4(halfWidth, halfHeight, 0.0f, 1.0f)
);

gl->uniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(m));

The principle is quite simple. Prior to the viewport transformation, OpenGL’s normalized device coordinates range from -1 to 1 in x, y, and z. The x and y values therefore get scaled by half the width/height and offset by half the width/height. This maps -1 to 0 (e.g. -halfWidth + halfWidth = 0) and 1 to the full width (halfWidth + halfWidth = width). The z values do not get scaled or offset, since window z retains the same range here.
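As a quick sanity check on the arithmetic, the per-axis mapping that matrix performs reduces to window = half * ndc + half:

```cpp
#include <cassert>

// The x column of the viewport matrix reduces to this per-axis affine
// map; y behaves identically with halfHeight.
float viewport_x(float ndcX, float width)
{
    float halfWidth = width / 2.0f;
    return halfWidth * ndcX + halfWidth;
}
```

So for an 800-pixel-wide viewport, NDC x = -1 lands on 0, x = 0 on the center (400), and x = +1 on the full width (800).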

### One More Image…

One last image of a slightly tweaked version of the shader that fades the wireframe intensity based on the diffuse intensity. Also, snow!

Written by arthur

April 10th, 2012 at 7:06 pm

Posted in lxengine