## HTML5 localStorage

The HTML5 localStorage feature is incredibly easy to use. It is essentially a string-to-string associative array. However, via JSON.stringify() and eval() it’s easy to store arbitrary data in the local storage cache.
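
As a minimal sketch of that pattern (the key and object below are made up purely for illustration; JSON.parse is shown here, though the cache code later in this post uses eval() to the same effect):

    // localStorage holds only strings, so arbitrary objects are round-tripped
    // through a string representation.
    var settings = { theme: "dark", visits: 3 };

    // serialize the object into a string slot
    localStorage["settings"] = JSON.stringify(settings);

    // later (even on a new page visit), rebuild the object from the string
    var restored = JSON.parse(localStorage["settings"]);
    console.log(restored.visits);   // 3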

This is a stub post that I’ll hopefully expand. For now, though, the translated LxLang source files are stored in a cache (i.e. the Javascript code is loaded directly on the second visit), and the ray tracer sample now caches the ray-traced images for subsequent visits (except, by design, the main demonstration image, which is always ray traced).

### Cache Source

Below is a quick look at the basic localStorage wrapping that I’m currently using.

First, the acquireCache(name) function takes a unique cache name to allow for multiple distinct caches on the same site. If local storage is not supported, the function returns undefined. If it is supported, it returns an object with three methods and a single public data member:

• The ‘data’ member is a Javascript Object that can be used to store arbitrary Javascript objects. JSON.stringify() is used to serialize this member when it is written to local storage.
• The ‘load’ method refreshes ‘data’ with the contents of local storage (this is called automatically when acquireCache is called).
• The ‘save’ method writes the ‘data’ member back to local storage.
• The ‘clear’ method deletes the local storage entry and resets the ‘data’ object to an empty object.

    function acquireCache (name)
    {
        var Cache = (function() {

            var ctor = function(name) {
                this.data = {};
                this._name = name;
            };

            var util = {
                // Evaluate a JSON string, returning 'def' if it is empty or invalid
                tryEval : function (json, def)
                {
                    if (json) {
                        try {
                            var value;
                            eval("value = " + json + ";");
                            if (value !== undefined)
                                return value;
                        }
                        catch (e) {
                            console.log("Exception evaluating '" + json + "'");
                        }
                    }
                    return def;
                }
            };

            var publicMethods = {
                load : function ()
                {
                    this.data = util.tryEval(localStorage[this._name], {});
                },
                save : function ()
                {
                    localStorage[this._name] = JSON.stringify(this.data);
                },
                clear : function()
                {
                    delete localStorage[this._name];
                    this.data = {};
                }
            };

            for (var method in publicMethods)
                ctor.prototype[method] = publicMethods[method];
            return ctor;
        })();

        // typeof check avoids a ReferenceError in browsers without localStorage
        if (typeof localStorage !== "undefined")
        {
            var cache = new Cache(name);
            cache.load();
            return cache;
        }
        else
            return undefined;
    }
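
Usage then comes down to a few lines. A hypothetical example (the cache name and keys below are made up for illustration):

    // Hypothetical usage sketch - the cache name and keys are illustrative.
    var cache = acquireCache("rayTracerCache");
    if (cache)
    {
        // 'data' already holds whatever was saved on a previous visit
        var images = cache.data.images || {};

        // store a new entry and persist the whole cache back to localStorage
        images["sample"] = "<serialized image data>";
        cache.data.images = images;
        cache.save();
    }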

Written by arthur

January 8th, 2012 at 3:08 pm

Posted in lxengine

## JQuery Seamless Image Preview


I wrote a brief JQuery plug-in which transforms a normal <IMG> element into a dynamic, tiled version of the image with a slider for controlling the size of the image tiles.
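
The plug-in itself has a bit more to it, but the core idea can be sketched in a few lines (the plug-in name, markup, and slider range below are my own placeholders, not necessarily what the actual code uses):

    // Rough sketch of the idea: replace an <img> with a div that tiles the
    // same image as a repeating CSS background, sized by a range slider.
    (function ($) {
        $.fn.seamlessPreview = function (options) {
            var settings = $.extend({ minSize: 16, maxSize: 256 }, options);

            return this.each(function () {
                var src = $(this).attr("src");

                var tiled = $("<div/>").css({
                    width: "100%",
                    height: "400px",
                    background: "url(" + src + ") repeat"
                });

                var slider = $("<input type='range'/>")
                    .attr({ min: settings.minSize, max: settings.maxSize, value: 128 })
                    .on("change input", function () {
                        var px = this.value + "px";
                        tiled.css("background-size", px + " " + px);
                    });

                $(this).replaceWith($("<div/>").append(tiled, slider));
            });
        };
    })(jQuery);

Applying it is then just a matter of calling something like $("img.seamless").seamlessPreview(); on page load.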

### Purpose

The purpose is to quickly preview seamless textures. I wrote this mostly to get a bit more experience with HTML5 and JQuery.

I also like the idea of improving this enough that a site like OpenGameArt.org could use it for images tagged with ‘seamless’ – but we’ll see about that: it needs more polish before I would feel right contacting anyone over there.

### Conclusion

Anyway, good to get something demonstrable and graphical back up on the blog!

Written by arthur

November 27th, 2011 at 2:17 pm

Posted in company

## Experimenting with Perlin Noise

### Update

Here’s another interesting noise pattern and the generating function, possibly useful as the basis for a stylized rock wall texture:

    function dim_sqr(s, t)
    {
        var f = Math.fract([s, t]);
        var d = Math.abs( [f[0] - .5, f[1] - .5] );
        return d[0] * d[1] * 4;
    }

    function f8(s, t)
    {
        s += Math.noise3d(s, t, .4);
        t += Math.noise3d(s, t, .4);
        var v = dim_sqr(s, t);
        return [v, v, v];
    }

Slightly more interesting rocks?

    function f9(s, t)
    {
        s += Math.noise3d(s, t, .4);
        t += Math.noise3d(s, t, .4);
        var v = Math.checker_dim(s, t);
        s += Math.noise3d(s, t, 22.4);
        t += Math.noise3d(s, t, 22.4);
        var u = Math.spot_dim(.4, [s, t]);
        var w = Math.max(u, v);

        return [w, w, w];
    }

### Original Post

I implemented a Perlin noise function quite some time ago (nothing fancy – more or less retyped from Perlin’s revised algorithm as described here). I recently exposed the noise3d function to the Javascript Math object in LxEngine and played with some procedural textures generated from Javascript fragments (as described in the last post).

I’m sure with a bit of research I could find known algorithms for generating some good patterns from variations on noise, but below are some textures generated simply by experimentation. Note that all the textures are generated over an (s, t) domain of (0,0) – (8,8). The textures below are all generated from continuous functions but, as is obvious from the images, they are not seamless over this domain.
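
These were rendered inside LxEngine, but for reference, an equivalent HTML5 canvas driver loop that samples one of the pattern functions below over that domain might look roughly like this (the resolution and function name are assumptions for illustration):

    // Hypothetical driver: evaluate a pattern function f(s, t) over the
    // (0,0)-(8,8) domain and write the result into a canvas.
    function renderPattern(f, canvas)
    {
        var ctx = canvas.getContext("2d");
        var w = canvas.width, h = canvas.height;
        var img = ctx.createImageData(w, h);

        for (var y = 0; y < h; ++y) {
            for (var x = 0; x < w; ++x) {
                var s = 8 * x / w;
                var t = 8 * y / h;
                var rgb = f(s, t);              // returns [r, g, b] in 0..1

                var i = 4 * (y * w + x);
                img.data[i + 0] = Math.floor(255 * rgb[0]);
                img.data[i + 1] = Math.floor(255 * rgb[1]);
                img.data[i + 2] = Math.floor(255 * rgb[2]);
                img.data[i + 3] = 255;
            }
        }
        ctx.putImageData(img, 0, 0);
    }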

### Functions

The images from left-to-right, then top-to-bottom correspond to the functions f1 through f7 below:

    function f1(s, t)
    {
        var v0 = Math.noise3d(s, t, .10);
        var v1 = Math.noise3d(2.1 * s, 2.1 * t, 88.22);
        var v2 = Math.noise3d(2 * s, 2 * t, .34);
        var v = Math.max(v0, v1, v2);
        var n = Math.min(v0, v1, v2);
        v = Math.mix(v, n, .5);
        return [v, v, v];
    }

    function f2(s, t)
    {
        var v0 = Math.noise3d(s, t, .10);
        var v1 = Math.noise3d(2.1 * s, 2.1 * t, 88.22);
        var v2 = Math.noise3d(2 * s, 2 * t, .34);
        var v = Math.max(v0, v1, v2);
        return [v, v, v];
    }

    function f3(s, t)
    {
        var v = Math.noise3d(s, t, .10);
        v = .5 - Math.pow(Math.abs(v - .5), .5);
        return [v, v, v];
    }

    function f4(s, t)
    {
        var v0 = Math.noise3d(s, t, .10);
        var v1 = Math.noise3d(2.1 * s, 2.1 * t, 88.22);
        var v2 = Math.noise3d(2 * s, 2 * t, .34);
        var v = Math.max(v0, v1, v2);
        var n = Math.min(v0, v1, v2);
        v = Math.mix(v, n, .5);
        v = .5 + Math.sign(v - .5) * Math.pow(Math.abs(v - .5), .5);
        return [v, v, v];
    }

    function f5(s, t)
    {
        var v0 = Math.noise3d(s, t, .10);
        var v1 = Math.noise3d(2.1 * s, 2.1 * t, 88.22);
        var v2 = Math.noise3d(2 * s, 2 * t, .34);
        var v = Math.max(v0, v1, v2);
        var n = Math.min(v0, v1, v2);
        v = Math.mix(v, n, .5);
        v *= .5 + Math.pow(Math.abs(Math.fract(Math.mix(Math.fract(4 * s), v0, v1)) - .5), 1);
        return [v, v, v];
    }

    function f6(s, t)
    {
        var r = 1.62 * Math.noise3d(s, t, .31);
        var s1 = s * Math.cos(r) + t * Math.sin(r);
        var t1 = s * Math.sin(r) - t * Math.cos(r);
        var v = Math.mix( (Math.sin(s1 * 6.28) + 1) / 2, (Math.cos(t1 * 6.28) + 1) / 2, .5);
        return [v, v, v];
    }

    function f7(s, t)
    {
        var v = Math.max( f6(s, t)[0], f6(s * 2, t * 2)[0] );
        return [v, v, v];
    }

Written by arthur

August 13th, 2011 at 12:12 pm

Posted in lxengine

## Procedural Patterns

What I’ve been working on…

…patterns (and the unit tests that test them).  The above is a snapshot of some of the unit-test image results for several procedurally generated patterns.

Written by arthur

July 7th, 2011 at 1:38 pm

Posted in lxengine


Significant progress in the LxEngine ShaderBuilder.  The builder now supports Phong shading and procedural patterns such as tile, spot, diamond, and wave.

Below is a quick, low-quality demo video of the work-in-progress LxEngine Tutorial 3, which loads a Blender model and allows the user to cycle through a set of shaders to apply (each material defined via a concise JSON description in the XML file):

### Video

Note how the specular highlights on the different, individual tiles of the checker patterns are not the same for the red checker materials. This really is a nested procedural! Each tile in the checker not only gets a color, but has its own Phong specification. Also check out the bright highlights on the last Phong checker: that’s actually another level of nesting where a border pattern adds much brighter specularity to the edges of the tile.
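
The exact schema isn’t shown here, but to give a flavor of the nesting just described, a hypothetical JSON material description in this spirit (not the actual LxEngine format) might read:

    // Hypothetical sketch only - not the actual LxEngine schema.
    // A checker whose tiles each carry their own Phong specification; one
    // tile nests a border pattern with brighter specularity at the edges.
    {
        "type"  : "checker",
        "tile0" : {
            "type"     : "phong",
            "diffuse"  : [0.8, 0.1, 0.1],
            "specular" : [0.3, 0.3, 0.3]
        },
        "tile1" : {
            "type"   : "border",
            "edge"   : { "type" : "phong", "specular" : [1.0, 1.0, 1.0] },
            "center" : { "type" : "phong", "diffuse"  : [0.1, 0.1, 0.1] }
        }
    }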

### Stills

Here is the Stanford bunny shaded with a checker pattern with a nested wave pattern:

Stanford Bunny

Blender's Suzanne

Finally, here’s the classic Utah Teapot with a spot pattern:

Utah Teapot

### What’s Next?

I have a host of todo’s lined up, but…any reader suggestions on what next to add to LxEngine? I’m looking for something that – while still somewhat feasible for a single person to implement – would help the engine stand out as having potential to be a top-of-the-line engine someday.

• Continue the shader work and add a Tutorial 4 with even more advanced multi-pass, multi-layer rendering and animation?
• Further Bullet Physics integration to demo how that library can easily and effectively be used within LxEngine?
• A miniature MineCraft-style procedural world sample (an infinite world with a sky, rain, and snow), since MineCraft is all the rage?
• A simple FPS to demo a complete game with LxEngine?
• Something completely different?

Written by arthur

July 1st, 2011 at 1:26 pm

## Textures and the Sky

Not much writing today, but instead an image:

Texture mapping and the start of a sky

### What’s New
Here’s what’s new in this image:

• Two RGB texture mapped spheres (one solid colored, the other phong shaded)
• PNG image support using LodePNG
• The beginning of work on a sky dome using a blue/white gradient with an RGBA cloud texture layer on top (alpha blending is new and not yet completely supported)
It is definitely a work in progress.

### CMake
Also, not visible in the image is that the SceneGraph demo and core engine now build with CMake, rather than the previous custom build script.  It’s really unfortunate that the CMake documentation is not as concise and clear as it could be, because otherwise it seems to be an excellent tool.

…and while I’m posting, the first Sintel trailer has been posted and it looks great.  Definitely worth downloading to see how well the Blender Open Movie Project is going.

Written by arthur

May 22nd, 2010 at 3:54 pm

The LxEngine code now has an improved GLSL shader pipeline for triangle rendering.  The engine has received a couple of architectural improvements as well, but the focus was primarily on learning the basics of GLSL.

• Support for multiple lights with basic per-pixel Blinn-Phong shading
• The geometry shader can optionally generate per-face normals or pass through per-vertex normals
• Support for a simple fixed-color point shader
• Support for multi-stream OpenGL vertex buffer objects (VBOs)
• Removed all direct OpenGL references from the main Renderer class; all GL calls implemented behind an IDriver interface
None of this is groundbreaking, so I don’t have much commentary on the above.  The next step is to continue work on the lighting properties and material properties.  [Note: it also looks like the ray-tracer has a defect with the light positioning, which explains the mismatch in the specular highlight locations and attenuation differences.]
Target result from the ray-tracer

Current OpenGL 3.2 rasterizer results

Written by arthur

April 4th, 2010 at 8:52 pm

Posted in lxengine


## More Basics

### Current Goals
The current direction for the LxEngine is to support simultaneous ray-tracing and rasterization from the same scene graph data structure.   The motivating factor is that I’m interested in both ray-tracing and rasterization, and also that this provides an interesting challenge to create a properly thread-safe (but also efficient) data structure.

Currently, the OpenGL 3.2 rasterizer is a separate project from the ray-tracer.  The plan is to build up the data structures into the rasterizer to support everything the ray-tracer could do (see prior blog posts) and then re-incorporate the ray-tracing code into the rasterizer project.

The goal eventually would be to have the two pictures below match as nearly as possible: one rendered in a window rasterized by OpenGL, the other rendered simultaneously using a software ray tracer…

Screenshot from the ray-tracer (circa November)

Latest screenshot from the OpenGL 3.2 rasterizer
### Progress
Needless to say the rasterizer has a bit of catching up to do.  On the other hand, while the final image is not exactly visually stunning, the internal architecture has been progressing solidly.
Since the last blog update, here are some of the changes to the rasterizer:
• Load the scene from a JSON file
• Support for planes
• Support for object colors
• Support for keyboard camera movement
• Internal JSON-like variant data object for creating pluggable scene graph elements
• Pluggable scene attributes (e.g., color, transform)
• Pluggable scene nodes (e.g. sphere, triangle mesh)
The next steps are likely to be improving the lighting and shading support (i.e. improving the current naive GLSL shader code), improving cube and sphere support (e.g. instancing support), and adding material mapping.  From there, the rasterizer will likely be close enough visually that integrating the ray-tracer into the same program would begin to make sense.
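
For illustration, a scene file in the spirit of the list above might look something like this (a sketch only; the element and attribute names are my guesses, not the actual format):

    // Sketch of a possible JSON scene description - names are illustrative,
    // not the actual LxEngine format.
    {
        "camera" : { "position" : [0, 2, -8], "target" : [0, 0, 0] },
        "nodes"  : [
            { "type" : "plane",  "normal" : [0, 1, 0], "d" : 0,   "color" : [0.7, 0.7, 0.7] },
            { "type" : "sphere", "center" : [0, 1, 0], "radius" : 1, "color" : [0.8, 0.2, 0.2] }
        ]
    }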

Written by arthur

March 28th, 2010 at 10:44 am

Posted in lxengine


## SuperPixel

Adaptive multisampling has been added. The adaptive multisampling code currently works by taking four samples per pixel, measuring the delta between the largest and smallest sampled color values, and, if that delta exceeds a fixed threshold, taking eight more samples.

The code has also been refactored so that the sampling mechanism is a pluggable interface. The adaptive multisampling is one such implementation. Other implementations are a standard one-sample-per-pixel sampler, a fixed four-sample-per-pixel grid sampler, and an N-samples-jittered-about-the-pixel sampler.

The adaptive implementation works by changing the internal sample class from a simple RGB float tuple to a SuperPixel class. In this context, “super pixel” refers to a pixel with more data than the standard single color plus depth information. For the particular implementation here, the additional data is straightforward. Each super pixel tracks the sum of the sampled floating-point RGB values, an integer count of the number of samples, and the maximum and minimum value of all samples thus far. As each sample is recorded, the values of the super pixel are updated accordingly.

The sampling interface is simply a loop where the sampler is asked for a sample location, the sample is taken, and then the sampler is asked if another sample is needed. Using this design, the adaptive sampler is quite straightforward. After the fourth sample, it checks the delta between the minimum and maximum samples. If the value is below the threshold, it tells the render loop to move on to the next pixel; otherwise it queues up eight more samples.

The code looks like this:

    SuperPixel spixel;
    spixel.setCenter( frustum.cellCenter(x, y, width, height) );

    while (!sampler->done(spixel))
    {
        vec3f target;
        sampler->generate(spixel, target);
        …
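
The engine code is C++, but the bookkeeping is simple enough to sketch in JavaScript (the class and function names here are illustrative, not the real ones):

    // Illustrative sketch of the SuperPixel bookkeeping and the adaptive
    // decision (JavaScript used for brevity; the real code is C++).
    function SuperPixel()
    {
        this.sum   = [0, 0, 0];   // running sum of sampled RGB values
        this.count = 0;           // number of samples taken so far
        this.min   = Infinity;    // smallest sampled channel value seen
        this.max   = -Infinity;   // largest sampled channel value seen
    }

    SuperPixel.prototype.addSample = function (rgb)
    {
        for (var i = 0; i < 3; ++i) {
            this.sum[i] += rgb[i];
            this.min = Math.min(this.min, rgb[i]);
            this.max = Math.max(this.max, rgb[i]);
        }
        this.count++;
        // the final pixel color is sum[i] / count once sampling stops
    };

    // Adaptive rule: always take four samples; decide at the fourth whether
    // the spread is small enough to stop, otherwise take eight more (12 total).
    function adaptiveDone(spixel, threshold)
    {
        if (spixel.count < 4)
            return false;
        if (spixel.count == 4)
            return (spixel.max - spixel.min) <= threshold;
        return spixel.count >= 12;
    }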

### First Pass Rasterization

The current multisampling approach takes a minimum of four samples per pixel to get some measure of the color variance at that pixel. It would be useful to instead take one sample per pixel but check the variance against neighboring pixels. Theoretically, there’s no difference between this and a regular grid sample done at 1/4th resolution. In practice, this requires some architectural changes to the code as it is currently written.

With the above in mind, it would also be interesting to explore a fast first-pass hardware rasterization of the image. The rasterization could track the depth, the surface normal, and the material identifier. That information would likely give a good indication of shading discontinuity without even having to run a shading algorithm. Tracking directly by color would likely work, but for accuracy the rasterization shading algorithm would need to match the ray-traced shading algorithm, which could turn into a time-consuming maintenance issue.

Of course, the same could be said of the geometric representation: the tessellated sphere representation must match the raytraced parametric representation to avoid inaccuracies.

The gap between a coarsely tessellated and a finely tessellated sphere is shown below in the light red arcs.  All those pixels would provide inaccurate first-pass information in the rasterization pass.  This is significant since one of the intentions of the first pass would be to correctly identify object boundaries for additional multisampling.

On the other hand, a first-pass hardware rasterization done with attention to accuracy could likely build in multiple advanced acceleration techniques to get ray-traced-quality results faster. For example, basic visibility tests and occlusion culling could rapidly create potentially visible sets for rectangular segments of the viewport. More obviously, it could also serve as an instantaneous draft-quality preview of the scene to be rendered.

Written by arthur

January 24th, 2010 at 9:55 pm

## Area Lights

Area lights have not been added to the code base ‘properly’, but were hacked into the code for the effect below.   The basic trick is simply to increase the number of pixel samples (32x in this case) and, when querying the light position during the shading process, to choose a random point on the light’s facing surface rather than its center point.
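
In JavaScript-style pseudocode, the light-position part of the hack amounts to roughly this (names are illustrative; the ray tracer itself is C++ and not structured this way):

    // Instead of returning the light's center, return a random point on a
    // disk around it (the disk's orientation toward the shaded point is
    // ignored in this simplified, non-uniform, physically inaccurate sketch).
    // Each of the 32 per-pixel samples queries this during shading, so the
    // averaged shadow term softens at the light's edges.
    function sampleLightPosition(light)
    {
        var r     = light.radius * Math.random();
        var theta = 2 * Math.PI * Math.random();

        return [
            light.center[0] + r * Math.cos(theta),
            light.center[1] + r * Math.sin(theta),
            light.center[2]
        ];
    }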

The implementation uses physically inaccurate non-uniform sampling, since I didn’t quite have the energy to dive into the probability distribution functions for the projected area of a sphere [1].  However, for a roughly 3-line code addition, the physically inaccurate approach was good enough to post an image:

I do intentionally refer to this functionality as area lights, not soft shadows.  In this case, I prefer the terminology because soft shadows are the effect but it is area lights that are the cause.

In my mind, the term soft shadows in computer graphics refers to effects usually done in real-time graphics where an algorithm aimed at true area lights would be too computationally expensive.  ‘Soft shadows’ usually implies some modulation of the shadow area itself rather than a change in the fundamental properties of the source light.  The soft shadow algorithm then uses a technique that is less tied to mathematical realism in terms of the optics and more tied to improving the perceived realism.

Update: I’m realizing that I could muddle the distinction between area lights and soft shadows further by modifying the ray tracer’s implementation.  The implementation used 32 light samples per pixel sample: each such light sample was used to calculate both the direct illumination value and the associated binary (0 or 1) shadow term.  However, I could have chosen to use 1 sample for the direct illumination and then modulated that by a variable shadow term (0/31 to 31/31) generated from 32 samples.  The effects of this alternate approach would likely be minor on the direct lighting while achieving similar softening of the shadows.  Since this approach wouldn’t affect the direct illumination but does affect the shadow term, according to my own definition, would this be an area light or soft shadow implementation?  I’m thinking the latter, but it’s hard to say.

[1] State of the Art in Monte Carlo Ray Tracing for Realistic Image Synthesis (2001) – SIGGRAPH ’01 notes on the topic, if you’d like to know how to do it properly

Written by arthur

November 29th, 2009 at 6:22 pm

Posted in lxengine
