## GLSL ray-tracing to debug point lights

While debugging, wouldn’t it sometimes be nice to see where the point lights in your scene are? And wouldn’t it be nice to do this via a shader modification rather than by adding new geometry to the scene? Sure, it would.

Here’s a quick, rough post on getting an effect like the blue/orange sphere around the point lights below…

Debugging spheres displayed around lights using GLSL code alone (no added geometry!)

### GLSL Code

It’s pretty easy to add such functionality by adding a ray-traced sphere intersection test to the fragment shader. The code below is a bit of a hack with hard-coded values, but since this is a debugging technique I’m okay with that. In the fragment shader’s inner loop over the lights, add something like this:

```glsl
for (int i = 0; i < unifLightCount; ++i)
{
    ...

    //
    // BEGIN point light debug!
    //
    vec3 U2 = unifLightPosition[i].xyz;   // eye-to-light vector (unnormalized)
    vec3 V2 = normalize(fragVertexEc);    // eye-to-fragment ray direction
    float dotUV2 = dot(U2, V2);
    if (dotUV2 > 0.0)
    {
        // Squared perpendicular distance from the light center to the ray
        float D2 = dot(U2, U2) - dotUV2 * dotUV2;
        if (D2 < 20.0)
        {
            c = mix(c, unifLightColor[i], 1.0 - D2 * D2 / 400.0);
            c = mix(vec3(1.0) - c, c, 1.0 - D2 / 20.0);
        }
    }
    //
    // END point light debug
    //
}
```

What’s happening in the above?

• U2 is the unnormalized vector from the eye (at the origin in eye coordinates) to the light
• V2 is the normalized vector from the eye to the fragment (i.e. the ray, if this were ray tracing)
• dotUV2 is |U2|·cos(a), where a is the angle between those vectors: the length of U2’s projection onto the ray (since U2 is not normalized, it is not simply cos(a))
• If dotUV2 is negative, then the angle is greater than 90 degrees, meaning an intersection with a sphere centered on the light is not possible (assuming we’re not inside the sphere)
• D2 is then the squared distance from the ray to the sphere/light center (via the Pythagorean theorem, that’s what it simplifies to)
• Next we check against an arbitrary, this-looks-good threshold (20 in this case, the squared radius of the debug sphere) for how large we want the debug sphere to be
• Lastly we mix in some color for the light, in this case a mix of the light color itself bordered by the usual color inverted (c = vec3(1,1,1) would be a much simpler alternative)
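Since the projection math above is easy to get subtly wrong, here is a small sketch of the same geometry outside the shader (plain Python, with names chosen to mirror the GLSL; not part of LxEngine itself):

```python
def ray_point_dist_sq(light_pos, ray_dir):
    """Squared perpendicular distance from a point (the light) to a ray
    leaving the origin (the eye, in eye coordinates).

    ray_dir must be normalized. Returns None when the point is behind
    the eye, mirroring the dotUV2 > 0 test in the shader."""
    dot_uv = sum(l * v for l, v in zip(light_pos, ray_dir))
    if dot_uv <= 0.0:
        return None
    # Pythagoras: |U|^2 = (projection onto ray)^2 + (perpendicular)^2
    return sum(l * l for l in light_pos) - dot_uv * dot_uv
```

A ray straight through the light gives zero; a light 3 units off-axis gives 9, i.e. the distance squared.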

This is a bit of sphere ray tracing in GLSL, and the same idea can be used in a lot of ways. For example, atmospheric light effects: how about adding in the light color, modulated by distance from the light and multiplied by an animated noise texture, to simulate smoke? Distance from a point can drive plenty of effects.

### Update: Glow Effect

From the debugging code described above (plus a couple of lines of code to load the Morrowind light models as well), it was not very difficult to add a glow effect around lights:

Before - no glow effect

After - atmospheric glow effect on lights

The code is not that different from the debugging trick described above. The main difference is to reject the effect if the fragment is in front of the sphere (i.e. the equivalent of a depth test):

```glsl
//
// Light glow effect
//
vec3 U2 = unifLightPosition[i].xyz;   // eye-to-light vector (unnormalized)
vec3 V2 = normalize(fragVertexEc);    // eye-to-fragment ray direction
float dotUV2 = dot(U2, V2);
if (dotUV2 > 0.0)
{
    // Squared perpendicular distance from the light center to the ray
    float D2 = dot(U2, U2) - dotUV2 * dotUV2;
    float R = 120.0;
    float T = D2 / (R * R);
    // Only glow if the fragment is at or beyond the sphere's near surface
    if (T < 1.0 && length(fragVertexEc) > length(U2) - R)
        c += unifLightColor[i] * 0.5 * pow(1.0 - T, 6.0);
}
```
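The glow term can be sanity-checked the same way. This sketch (plain Python, illustrative names, eye assumed at the origin in eye coordinates) reproduces both the falloff and the depth-style rejection:

```python
import math

def glow_weight(light_pos, frag_pos, radius):
    # Build the eye-to-fragment ray.
    ray_len = math.sqrt(sum(p * p for p in frag_pos))
    ray_dir = [p / ray_len for p in frag_pos]
    dot_uv = sum(l * v for l, v in zip(light_pos, ray_dir))
    if dot_uv <= 0.0:
        return 0.0
    # Squared perpendicular distance, normalized so t = 1 at the surface.
    d2 = sum(l * l for l in light_pos) - dot_uv * dot_uv
    t = d2 / (radius * radius)
    light_dist = math.sqrt(sum(l * l for l in light_pos))
    # Reject fragments in front of the sphere's near surface.
    if t >= 1.0 or ray_len <= light_dist - radius:
        return 0.0
    return 0.5 * (1.0 - t) ** 6
```

A fragment behind the light, on a ray straight through the center, gets the full 0.5 weight; a fragment well in front of the sphere gets nothing.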

Written by arthur

July 15th, 2011 at 10:24 am

Posted in lxengine

## Bethesda Softworks’ Morrowind

For various reasons, one of them being to test LxEngine with “real” data, I’ve been experimenting with loading and displaying the game data from Bethesda Softworks’ 2002 game, The Elder Scrolls III: Morrowind (buy it here on Steam). There’s a fair amount of information out there about the Morrowind file formats, as it is a highly moddable game.  I’ve been using NifTools to parse the actual models and custom code for the ESM/BSA parsing (neither is a very complicated format).

The primary purpose of the project has been to test out LxEngine with dated but production-quality data and data formats.  The experiment thus far has been serving its purpose.  It has raised questions like, “Hey, what should the engine do when the current cell has 17 lights and the current shader only supports 8 at a time?”  The LxEngine project has hardly been lacking in TODOs, but in any case, this is helping identify the necessities versus the niceties.

A secondary purpose of the project is to learn a bit more about how Morrowind works, so that, as a side effect of working on my own goals, I can potentially produce some useful contribution to the OpenMW project.  (This project certainly isn’t meant to compete with OpenMW: the goals here are to demo some basic rendering, physics, sound, etc. from Morrowind to test out LxEngine’s capabilities, whereas the goal of the OpenMW project is to produce a fully playable game with full fidelity to the original.)

As for the current progress, here are some screenshots:

Morrowind data game rendered (with obvious limitations!) with LxEngine

An actual, in-game screenshot from Bethesda Softworks' Morrowind of that same scene

### Update: Texture Mapping

Adding texture mapping involved a couple core changes:

• Adding UVs and texture samplers to the LxEngine GLSL shader builder. This is less complex than some of the existing features of the shader builder, but hadn’t yet been added.  The support is somewhat minimal and will require revisiting for multi-texturing.
• Adding DDS texture format support to the GL Rasterizer.  DDS stands for DirectDraw Surface, a Microsoft DirectX format whose S3TC compression has some strange patent issues, which seems to bode poorly; but video cards usually handle the format natively. There’s an EXT_texture_compression_s3tc OpenGL extension that allows DDS format data to be passed more or less directly to the card.  There’s a simple NVIDIA tutorial showing how to do this.
• Passing DDS streams from within a BSA (understood by the TES3 loader) to the LxEngine Rasterizer, which knows nothing about Morrowind format data.  This was the fun one, and it actually still requires a bit of work: abstracting the LxEngine Rasterizer from the texture data source in a flexible, general way that both (a) allows the Rasterizer to know nothing about BSA files while the BSA loader knows nothing about OpenGL, and (b) still streams the data directly from a disk-based std::istream to OpenGL without any superfluous copies.  The Rasterizer now allows textures to be created with a “type” and an “acquire callback”.  In this case, the type is a stream and the callback lives over in the TES3 loader: the only shared concept necessary is the std::istream.
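The type-plus-acquire-callback decoupling described above can be sketched in a few lines (plain Python with illustrative names, not the actual LxEngine API; the real code trades in std::istream rather than BytesIO):

```python
import io

class Rasterizer:
    # Knows nothing about BSA archives, only about stream handles.
    def __init__(self):
        self._textures = {}

    def create_texture(self, name, source_type, acquire):
        # The callback is stored, not invoked: acquisition is deferred
        # until the texture data is actually needed.
        self._textures[name] = (source_type, acquire)

    def upload(self, name):
        source_type, acquire = self._textures[name]
        assert source_type == "stream"
        stream = acquire()      # the loader hands back a binary stream
        return stream.read()    # ...which would be passed on to OpenGL

# Loader side: knows the archive format, knows nothing about OpenGL.
def make_bsa_acquire(archive, entry):
    return lambda: io.BytesIO(archive[entry])

archive = {"textures/rock.dds": b"DDS |\x00\x00\x00"}
rasterizer = Rasterizer()
rasterizer.create_texture("rock", "stream",
                          make_bsa_acquire(archive, "textures/rock.dds"))
```

Neither side imports the other: the only shared concept is the stream handed across the callback boundary.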

Ever wonder what the Morrowind UV mapping looked like?

And after a couple of bug fixes (like, ahem, remembering to open the DDS stream in std::ios::binary mode)…

Morrowind cell rendered with LxEngine

Next, I need to add multisampling support to the renderer: these screenshots would look so much better with it enabled!

### Update: Multisampling

Multisampling…

16x Multisampling

Written by arthur

July 13th, 2011 at 6:15 pm

Posted in lxengine

## Area Lights

Area lights have not been added to the code base ‘properly’, but were hacked into the code for the effect below.   The basic trick is just to increase the number of pixel samples (32x in this case) and, when querying the light position during the shading process, choose a random point on the light’s facing surface rather than its center point.
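The random-point trick can be sketched like this (hypothetical names; the sampling is uniform in spherical angles, not over surface area, which is exactly the kind of physically inaccurate shortcut being described):

```python
import math
import random

def sample_light_point(center, radius, rng=random):
    # Uniform in (theta, phi), which is NOT uniform over the sphere's
    # surface area: samples cluster near the poles. A quick hack that
    # is nonetheless enough to soften shadows.
    theta = rng.uniform(0.0, math.pi)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sin(theta)
    return (center[0] + radius * sin_t * math.cos(phi),
            center[1] + radius * sin_t * math.sin(phi),
            center[2] + radius * math.cos(theta))
```

Every sample lies exactly on the light’s surface, so each shadow ray targets a slightly different point of the light.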

The implementation uses physically inaccurate non-uniform sampling, since I didn’t quite have the energy to dive into the probability distribution functions for the projected area of a sphere1.  However, for a roughly 3-line code addition, the physically inaccurate approach was good enough to post an image:

I do intentionally refer to this functionality as area lights, not soft shadows.  In this case, I prefer that terminology because soft shadows are the effect, but area lights are the cause.

In my mind, the term soft shadows in computer graphics refers to effects usually done in real-time graphics where an algorithm aimed at true area lights would be too computationally expensive.  ‘Soft shadows’ usually implies some modulation of the shadow area itself rather than a change in the fundamental properties of the source light.  The soft shadow algorithm then uses a technique that is less tied to mathematical realism in terms of the optics and more tied to improving the perceived realism.

Update: I’m realizing that I could muddle the distinction between area lights and soft shadows further by modifying the ray tracer’s implementation.  The implementation used 32 light samples per pixel sample: each such light sample was used to calculate both the direct illumination value and the associated binary (0 or 1) shadow term.  However, I could have chosen to use 1 sample for the direct illumination and then modulated that by a variable shadow term (0/31 to 31/31) generated from 32 samples.  The effects of this alternate approach would likely be minor on the direct lighting while achieving similar softening of the shadows.  Since this approach wouldn’t affect the direct illumination but does affect the shadow term, according to my own definition, would this be an area light or soft shadow implementation?  I’m thinking the latter, but it’s hard to say.
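The distinction can be made concrete with a toy sketch (all names hypothetical) comparing the coupled per-sample version against the averaged-shadow-term alternative:

```python
def coupled(samples, direct, visible):
    # Each light sample contributes its own direct term, gated by its
    # own binary (0 or 1) shadow term -- the implementation described
    # in the post.
    return sum(direct(s) * visible(s) for s in samples) / len(samples)

def decoupled(samples, direct, visible):
    # One direct-illumination sample, modulated by an averaged
    # (0/N .. N/N) shadow factor: shadows soften identically, but the
    # direct term is no longer integrated over the light's surface.
    shadow = sum(visible(s) for s in samples) / len(samples)
    return direct(samples[0]) * shadow
```

With a constant direct term the two agree exactly; once the direct term varies across the light’s surface, only the coupled version picks that variation up.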

1 State of the Art in Monte Carlo Ray Tracing for Realistic Image Synthesis (2001) – SIGGRAPH ’01 notes on the topic, if you’d like to know how to do it properly

Written by arthur

November 29th, 2009 at 6:22 pm

Posted in lxengine
