Added a new effect to Tutorial 5: a single-pass surface and wireframe rendering effect. Note how the wireframe is anti-aliased, fades into the surface color, and displays without z-fighting.
The effect derives directly from the shader in the previously mentioned OpenGL 4.0 Shading Cookbook. The implementation there, according to the author, derives from the one presented in this nVidia whitepaper. I won’t go into the details of the effect (since the explanation is available both in the whitepaper and the book) and will only briefly comment on the implementation.
In short, it uses a geometry shader to compute the distance of each fragment from each edge of each triangle. The fragment shader then uses those distances to determine whether to use the surface shading color or the wireframe edge color. A mix() call is used rather than a discrete choice in order to antialias the edges. Because it’s a single-pass shader, there’s no chance of z-fighting.
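The blend step can be sketched in plain C++ (this mirrors the shader logic only loosely; the names, the line width, and the one-pixel transition band are illustrative, not the book’s or whitepaper’s actual code):

```cpp
#include <algorithm>
#include <cassert>

struct Color { float r, g, b; };

// Equivalent of GLSL mix(a, b, t): linear interpolation per channel.
Color mixColor(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// dMin is the minimum window-space distance (in pixels) from the fragment
// to the triangle's three edges, as computed by the geometry shader.
Color shadeFragment(float dMin, float lineWidth,
                    const Color& wireColor, const Color& surfaceColor)
{
    // Smooth transition over roughly one pixel past the line width,
    // instead of a hard if/else: this is what antialiases the wireframe.
    float t = std::clamp(dMin - lineWidth, 0.0f, 1.0f);
    t = t * t * (3.0f - 2.0f * t);   // smoothstep easing
    return mixColor(wireColor, surfaceColor, t);
}
```

Fragments on an edge get the wire color, fragments well inside the triangle get the surface color, and the narrow band in between is blended.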
Setting the “ViewportMatrix”
A quick note, since the OpenGL 4.0 Shading Cookbook does not explain how to set up the “ViewportMatrix” uniform. It’s pretty simple, but for clarity, here’s the code:
GLint viewport[4];
gl->getIntegerv(GL_VIEWPORT, viewport);

// GL_VIEWPORT returns { x, y, width, height }
float halfWidth  = float(viewport[2]) / 2.0f;
float halfHeight = float(viewport[3]) / 2.0f;

glm::mat4 m(
    glm::vec4(halfWidth, 0.0f,       0.0f, 0.0f),
    glm::vec4(0.0f,      halfHeight, 0.0f, 0.0f),
    glm::vec4(0.0f,      0.0f,       1.0f, 0.0f),
    glm::vec4(halfWidth, halfHeight, 0.0f, 1.0f)
);

gl->uniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(m));
The principle is quite simple. Prior to the viewport transformation, OpenGL normalized device coordinates range from -1 to 1 in x, y, and z. The x and y values therefore get scaled by half the width/height and offset by half the width/height. This maps -1 to 0 (e.g. -halfWidth + halfWidth = 0) and 1 to the full width (halfWidth + halfWidth = width). The z value passes through unchanged, since this matrix is only used to compute screen-space edge distances in x and y.
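As a quick sanity check on the mapping, the x component reduces to a one-liner (the 800-pixel width here is just an example value):

```cpp
#include <cassert>

// Maps an NDC x coordinate in [-1, 1] to a window x coordinate in [0, width]:
// scale by half the width, then offset by half the width.
float ndcToWindowX(float ndcX, float width)
{
    float halfWidth = width / 2.0f;
    return ndcX * halfWidth + halfWidth;
}
```

So -1 lands on the left edge, 0 on the center pixel column, and 1 on the right edge.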
One More Image…
One last image of a slightly tweaked version of the shader that fades the wireframe intensity based on the diffuse intensity. Also, snow!
For various reasons, one of them being to test LxEngine with “real” data, I’ve been experimenting with loading and displaying the game data from Bethesda Softworks’ 2002 game, The Elder Scrolls III: Morrowind (buy it here on Steam). There’s a fair amount of information out there about the Morrowind file formats, as it is a highly moddable game. I’ve been using NifTools to parse the actual models and custom code for the ESM/BSA parsing (neither is a very complicated format).
The primary purpose of the project has been to test out LxEngine with dated, but production-quality data and data formats. The experiment thus far has been serving its purpose. It has raised questions like, “Hey, what should the engine do when the current cell has 17 lights and the current shader only supports 8 at a time?” The LxEngine project has hardly been lacking in TODOs, but in any case, this is helping identify the necessities versus the niceties.
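One plausible answer to that 17-lights question is to pick the N lights nearest the object being shaded. A hypothetical sketch (neither the Light struct nor selectNearestLights exists in LxEngine; this only illustrates the policy):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Light { float x, y, z; };

// Sort the cell's lights by squared distance to the shaded point (px, py, pz)
// and keep only the maxLights closest ones for the shader's uniform slots.
std::vector<Light> selectNearestLights(std::vector<Light> lights,
                                       float px, float py, float pz,
                                       std::size_t maxLights)
{
    auto distSq = [&](const Light& l) {
        float dx = l.x - px, dy = l.y - py, dz = l.z - pz;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(lights.begin(), lights.end(),
              [&](const Light& a, const Light& b) {
                  return distSq(a) < distSq(b);
              });
    if (lights.size() > maxLights)
        lights.resize(maxLights);
    return lights;
}
```

More sophisticated policies (weighting by intensity or attenuation, or splitting the draw into multiple lighting passes) are also possible; this is just the simplest cut.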
A secondary purpose of the project is to learn a bit more about how Morrowind works, so that, as a side effect of working on my own goals, I can potentially produce some useful contribution to the OpenMW project. (This project certainly isn’t meant to compete with OpenMW – the goals here are to demo some basic rendering, physics, sound, etc. from Morrowind to test out LxEngine’s capabilities. The goal of the OpenMW project is to produce a fully playable game with full fidelity to the original.)
As for the current progress, here are some screenshots:
Update: Texture Mapping
Adding texture mapping involved a couple core changes:
- Adding UVs and texture samplers to the LxEngine GLSL shader builder. This is less complex than some of the existing features of the shader builder, but hadn’t yet been added. The support is somewhat minimal and will require revisiting for multi-texturing.
- Adding DDS texture format support to the GL Rasterizer. DDS stands for DirectDraw Surface, a Microsoft DirectX format; the S3TC compression it commonly uses has some strange patent issues, which seems to bode poorly, but video cards usually handle this format natively. There’s an EXT_texture_compression_s3tc OpenGL extension that allows DDS format data to be passed more or less directly to the card. There’s a simple nVidia tutorial showing how to do this.
- Passing DDS streams from within a BSA understood by the TES3 loader to the LxEngine Rasterizer which knows nothing about Morrowind format data. This was the fun one – which actually still requires a bit of work – abstracting the LxEngine rasterizer from the texture data source in a flexible, general way that both (a) allows the Rasterizer to know nothing about BSA files while the BSA loader knows nothing about OpenGL and (b) still streams the data directly from a disk-based std::istream to OpenGL without any superfluous copies. The Rasterizer now allows textures to be created with a “type” and “acquire callback”. In this case, the type is a stream and the callback is over in the TES3 loader: the only shared concept necessary is the std::istream.
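The “type + acquire callback” idea can be sketched roughly as follows. None of these class or method names come from the actual LxEngine code; the point is only that the rasterizer and the loader share nothing but std::istream:

```cpp
#include <cassert>
#include <functional>
#include <istream>
#include <sstream>
#include <string>

// Sketch of a rasterizer-side texture API. The rasterizer knows nothing
// about BSA files; it just invokes the callback to acquire a stream.
class Rasterizer
{
public:
    using AcquireStream = std::function<std::istream&()>;

    void createTexture(const std::string& type, AcquireStream acquire)
    {
        if (type == "stream")
        {
            // The loader side positions the stream at the DDS data, e.g.
            // inside a BSA archive. Real code would read the DDS header and
            // hand the compressed payload to glCompressedTexImage2D; this
            // sketch just records the four-byte magic ("DDS ").
            std::istream& in = acquire();
            char magic[4] = {};
            in.read(magic, 4);
            mLastMagic.assign(magic, 4);
        }
    }

    std::string mLastMagic;   // exposed for illustration only
};
```

On the other side, the TES3/BSA loader registers a lambda that seeks its std::istream to the right offset and returns it; no OpenGL headers ever appear in the loader, and no Morrowind types ever appear in the rasterizer.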
And after a couple of bug fixes (like, ahem, remembering to open the DDS stream in std::ios::binary mode)…
Next, I need to add multisampling support to the renderer: these screenshots would look so much better with it enabled!