Archive for the ‘seamless’ tag
Updating the prior post…
To the seamless procedural texture generator, I added a grass texture generator. I also made numerous enhancements to the original code, including automatically generating the UI elements for each plug-in and, of course, having the page support multiple plug-ins.
There’s still a lot of room for improvement, but this is a much better demo than the proof-of-concept spot pattern.
The grass needs color controls, the dirt needs to be better, the blades need to be sharper, the algorithm needs to be changed to draw incrementally without blocking the browser, etc. Nonetheless, it's a good step forward.
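For the incremental drawing, one possible shape (my own sketch, not the actual generator code; all names here are hypothetical) is to split the work into batches and yield to the browser between them with setTimeout:

```javascript
// Hypothetical sketch: draw `items` in batches of `batchSize`, yielding to the
// browser between batches so a long generation doesn't freeze the page.
function drawIncrementally(items, drawOne, batchSize, onDone) {
  var i = 0;
  function step() {
    var end = Math.min(i + batchSize, items.length);
    for (; i < end; i++) {
      drawOne(items[i]); // e.g. one grass blade or one spot per item
    }
    if (i < items.length) {
      setTimeout(step, 0); // give the UI thread a chance to run
    } else if (onDone) {
      onDone();
    }
  }
  step();
}
```

In a modern browser, requestAnimationFrame would be a natural alternative to setTimeout(step, 0) for scheduling the next batch.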
Continuing with the HTML5 Canvas experiments…
I put together a draft tool for procedurally generating seamless images. It’s pretty limited at this point, but it does serve as a proof-of-concept. See later in the post for where I’d like to go with this.
The above image is seamless.
The crux of the algorithm is this:
- Create a canvas
- Add a helper function that repeats each canvas draw call 9 times: once in place, and once offset toward each of the 8 directly neighboring “tiles” – thus handling draw calls that overlap the tile borders
- Add simple code to allow the generated canvas image to be saved as a PNG
- Reuse the seamless image viewer I wrote the other day to preview the image in tiled form
- This little tidbit on how easy it is to generate a PNG from a CANVAS element
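The wrap-around helper at the heart of the steps above can be sketched like this (a minimal sketch; the function and parameter names are my own, not from the original code). The draw callback receives an (offsetX, offsetY) pair to add to every coordinate of the actual canvas draw call, so a shape that crosses one border reappears at the opposite edge:

```javascript
// Repeat a draw call 9 times: once for the tile itself and once for each of
// its 8 neighbors, so shapes crossing a border wrap to the opposite side.
function drawWrapped(tileWidth, tileHeight, drawFn) {
  for (var dx = -1; dx <= 1; dx++) {
    for (var dy = -1; dy <= 1; dy++) {
      // dx = dy = 0 is the tile itself; the other 8 are its neighbors.
      drawFn(dx * tileWidth, dy * tileHeight);
    }
  }
}

// Browser-only usage sketch with a canvas 2D context:
//   drawWrapped(512, 512, function (ox, oy) {
//     ctx.beginPath();
//     ctx.arc(x + ox, y + oy, radius, 0, 2 * Math.PI);
//     ctx.fill();
//   });
```

For the PNG step, canvas.toDataURL("image/png") returns a data URI that can be assigned to an image's src or a link's href for saving.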
The collection of spot-based tiles is of only limited value – especially given that the example page exposes only a subset of the possible parameters. Again, it’s a proof-of-concept for a more useful implementation.
Dirt, Rock, & Grass. I’d like to use the basic framework to procedurally generate some basic terrain textures, mainly because these have greater utility – i.e. there’s a much larger audience that would want to use freely available seamless terrain textures.
Make it pluggable. I’d like to separate the framework from the particular tile being generated, i.e. make dirt, rock, grass, spot-pattern, etc. all different plug-ins that automatically generate a UI yet share common code.
Better automatic UI generation. For a tool like this, the UI for controlling the parameters should be a lot simpler to auto-generate without the common boilerplate code…
General polish. There are some basic UI issues with the current demo. For example, 512×512 is a decent size for the saved image, but it would be a lot nicer to have the base image and the tiled preview both fit on one screen (at a reasonably normal resolution).
I wrote a small jQuery plug-in which transforms a normal <IMG> element into a dynamic, tiled version of the image, with a slider for controlling the size of the image tiles.
The purpose is to quickly preview seamless textures. I wrote this mostly to get a bit more experience with HTML5 and jQuery.
I also like the idea of improving this enough that a site like OpenGameArt.org could use it for images tagged with ‘seamless’ – but we’ll see about that: it needs more polish before I’d feel right contacting anyone over there.
Note: I’ve called this “version 0.2.0” since this is really draft-quality work: I haven’t written the docs (though the normal jQuery plug-in convention of
$("#myimage").seamlessImage() will get you started pretty quickly), I haven’t done any unit testing (i.e. different layout scenarios, different browsers), there are some basic features that should probably be added, etc.
The absolutely minimal documentation is located here.
Use the slider below the image to use the plug-in!
- Uses the HTML5 “range” control to implement the slider
- The jQuery $.data method is used to store the internal state of the plug-in
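A stripped-down sketch of how such a plug-in can be structured (the names and details below are my own, not the actual seamlessImage source). The small helper is pure; the jQuery registration is browser-only and guarded so the file can also load outside a browser:

```javascript
// Build the CSS background-size value for a square tile of `size` pixels.
function tileBackgroundSize(size) {
  return size + "px " + size + "px";
}

if (typeof jQuery !== "undefined") {
  (function ($) {
    $.fn.seamlessImage = function () {
      return this.each(function () {
        var $img = $(this);
        var state = { tileSize: 64 };
        // Store per-element plug-in state with $.data, as noted above.
        $img.data("seamlessImage", state);
        // Replace the <img> with a div tiled via CSS background-repeat.
        var $tiled = $("<div>").css({
          width: $img.width() + "px",
          height: $img.height() + "px",
          background: "url(" + $img.attr("src") + ") repeat",
          backgroundSize: tileBackgroundSize(state.tileSize)
        });
        // HTML5 "range" control for the tile size.
        var $slider = $('<input type="range" min="16" max="256">')
          .val(state.tileSize)
          .on("input", function () {
            state.tileSize = +this.value;
            $tiled.css("backgroundSize", tileBackgroundSize(state.tileSize));
          });
        $img.replaceWith($tiled);
        $tiled.after($slider);
      });
    };
  })(jQuery);
}
```

With background-repeat doing the tiling, resizing the tiles is just a one-property CSS update on each slider input event.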
Anyway, good to get something demonstrable and graphical back up on the blog!
I spent some time teaching myself a bit more about 3D modeling: I’m taking a break from graphics programming, but I have a hard time giving up learning about graphics entirely…
I’m using Blender 2.5 Alpha 2. The new UI in Blender is amazingly improved. I don’t think I’ve ever seen such a vast improvement in a free software application before. I can’t compliment the team enough: this is a tremendous step forward. As a telling example: I have very limited Blender experience (see one of my earlier blog posts), yet – for the first time – I was often able to find what I was looking for in this program’s vast set of capabilities without resorting to Google. Finding tools in the UI without Google may sound like an “obvious” requirement for good/decent/acceptable software, but for an application as complex as Blender, it’s fair to say that finding the desired functionality is rarely intuitive to beginners. Again, kudos to the Blender team.
(Warning: there are some stability problems still if you’re giving 2.5 a try. This is an Alpha 2 release, after all.)
I also happened across an excellent tutorial on KatsBits.com that takes the reader from start to finish on a static Blender model. (Note: the tutorial uses a pre-2.5 release of Blender, so the UI is very different. Again, though, with only minimal help from Google, I was able to find the 2.5 equivalents of everything being done.)
It begins with a cube, shows how to do basic cuts and extrudes, then moves on to setting up a UV map for the model and applying a texture. The reason I enjoyed this tutorial so much was that (to a non-modeler beginner like me) it gave the best, concise explanation of UV Unwrapping.
I highly recommend reading the whole tutorial – both because the original author deserves it and because the images give essential context – but here’s the core idea that triggered the insight for me:
The principle involved here is the same as if you were to cut a cardboard box down one side, laying the resulting top, bottom and sides flat on the ground so all the parts of it were spread out. A mesh is treated much the same way where-ever possible, edges are placed around a mesh to facilitate a similar end result.
In other words, a seam is a cutting line, and the non-seam edges are where the Blender unwrapping algorithm will attempt to “unfold” the model. For example, to unfold the top of a box (while keeping it connected to the box as a whole), three of the four top edges should be marked as seams, and the top will unfold along the unmarked edge. Any non-planar collection of faces not adjacent to a seam will be more or less flattened or squashed down onto a plane.
The second insight is that more seams likely means a cleaner, more linear unfolding, but the trade-off is that the UVs become discontinuous across each seam.
I suspect this was likely obvious to anyone who has spent any time 3D modelling before, but for a beginner like me it was good to finally “get it” about UV Unwrapping.
The results? Nothing fancy: just a chair similar to the one from the above tutorial.
Note: the wood texture was adapted in GIMP from a wood photograph available here (thanks, bittbox). Using the GIMP Offset command, a bit of the Clone tool, and some color level adjustments, I managed to make an acceptable-quality seamless texture out of the photograph. I should also note that there’s no real need for a seamless texture in the final model, given the UV unwrapping; making the texture seamless (and with more desirable results than the built-in Make Seamless script) was a different, separate exercise.