## August

without comments

Checking in again!

A new job is keeping me too busy to do much with the site. That’s the bad news. The good news is that the new job has me doing a lot of enjoyable graphics work.

I’m not sure when I’ll be able to revamp / port / return to the work on athile.net, but it is in hibernation, not anything worse. I enjoy the graphics work I’ve done here far too much to consider flat-out abandoning it.

Written by arthur

August 13th, 2012 at 7:27 am

Posted in company

## What’s going on?

without comments

About a month and a half since the last post –

I’m busy working. However, I’m exploring the possibility of moving “Athile Technologies” from being a pseudo-company hobby project of mine to its own sink-or-swim real business. There are a lot of details to work out. This site may go away or it may stay; the code is going to continue to exist and be developed one way or another, though!

Written by arthur

June 27th, 2012 at 12:46 pm

Posted in company

## Vacation (from the blog)

with one comment

I’m realizing work is keeping me too busy to maintain the habit of posting here with fair frequency. Work is likely going to keep me locked up for at least another month or two. The blog is not dead! But it might be mid-June before the next post!

Written by arthur

May 12th, 2012 at 10:35 am

Posted in company

## Concerning Wheel Reinvention

without comments

Here’s a development pattern I’ve noticed. This linear progression certainly doesn’t apply in all cases – it branches off in different ways in many cases – but at least sometimes the following pattern seems to occur…

### Wheel Reinvention and Understanding Roundness

1. Recognize There’s Already a Solution Out There
Find yourself solving a problem that falls into a general pattern – the kind of pattern that could be solved with a reusable library or external tool.  You note that to some degree you are most likely reinventing a wheel.  Or perhaps in reading about some other library or tool, you note that you have been trying to invent this yourself.  Either way, someone has already solved this class of problem and you’re potentially inefficiently redoing that work: should you be?
2. Research a Complete, Existing Solution
Read some blog posts, check out some tutorials, skim the API docs, and experiment with an existing library or tool that solves the problem you keep facing.  It’s a pretty good library.  People are using it.  It looks like it’s got decent documentation and is actively supported. This looks like a potentially good fit.  Your time is valuable and it sure seems a lot smarter to utilize this rather than reinvent.
3. Start Using It and Get Frustrated by the Solution’s Complexity
After spending some time with the existing solution, you grow frustrated that you can’t quite wrap your head around everything the API/tool is trying to do, that it seems to have too many options, that it tries to solve multiple problems, that it doesn’t quite fit the problem as you would like – that for one valid or not-so-valid reason it doesn’t seem to be “right” for what you need.
4. Design a Simpler Solution
Go back to your own code. Use the general concepts from the existing library to write or improve your own solution. Your own solution is one that you do understand, that is simpler, and that more neatly fits the problem as you’ve conceptualized it. Yes, you’ve reinvented the wheel, but it’s likely now a better solution than it was at step 1 (at least in theory, assuming you have the time to complete and test the library) and is tailored nicely to your needs.
5. The Requirements for Your Solution Grow
Not long after, you find your simpler solution growing in requirements, with complexity gradually creeping in. Long story short: the task is no longer a clean re-visioning of that existing solution you read about; it’s turning into a burdensome, parallel reimplementation. The price of reimplementation is beginning to outweigh the advantages of that simple solution you had a while ago.
6. Use the Existing Solution
Now you get it. You’ve spent enough time with the problem class to see why the original authors of that existing library did it the way they did. Maybe that existing solution still isn’t perfect in your mind, but at least you see why the features exist. You now have a fairly strong understanding of the class of problem, even if you don’t like all the details of this specific implementation of a solution. This is key though: you now know enough to understand the general problem space of the existing solution. You use it instead, get over your frustration with its idiosyncrasies when you think of how much effort reimplementing everything yourself would be, and in general are sufficiently happy with it as a solution.
7. New Team Members Wonder Why Such a Complex Solution Is Being Used
Now we come full circle. You get why this imperfect, seemingly over-complex tool is being used; they don’t. You’ve been through Steps 1-6; they haven’t. They wonder why you would recommend such a needlessly complex tool. How do you explain to them what you learned in those steps? If this were easy to answer, couldn’t that same information have saved you from going through those early steps yourself, letting you arrive immediately where you are now?

Part of me thinks that the reinvention of the wheel in programming is inevitable: it’s part of the learning process. However, at the same time, there’s a significant difference between pure reinvention and informed rediscovery (i.e. pulling solutions from thin air is really hard; applying solutions learned elsewhere to your own instance of the problem is much easier).

At the same time, I think there are approaches that acknowledge the need for some degree of reinvention without offering huge jumps in complexity between the new person’s knowledge base and the full-fledged, robust solution.

I wonder if, for many classes of problem, a graduated set of APIs is the solution: i.e. (1) here’s the basic library that will introduce the concepts and get you started quickly – but you will hit limitations with any long-term use; (2) here’s the intermediate version of the API that follows the same basic conventions and is a direct superset when possible; and (3) here’s the advanced API that does exactly what you need and solves all corner cases, but demands a strong conceptual understanding of the problem class. A problem with this approach is that the author of a type (1) API, in the process of implementing it, builds a knowledge base that makes it easy to implement a type (2) API – so he or she does…and so on with (2) and (3). (As an aside, I fear OpenGL has to some degree gone from a “graduated” API to an advanced-only API: this is great for experienced graphics programmers, but a glBegin(GL_QUADS) example sure is an easier starting point for beginners.)
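To make the graduated idea concrete, here’s a hypothetical sketch of a three-level vector-math API. The function names and design are invented purely for illustration, not taken from any real library:

```javascript
// (1) Basic: introduces the concept and gets you started quickly,
// but allocates a new array per call and only handles 3-vectors.
function vecAdd(a, b) {
    return [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
}

// (2) Intermediate: same array convention, a direct superset -- any dimension.
function vecAddN(a, b) {
    return a.map(function (v, i) { return v + b[i]; });
}

// (3) Advanced: solves the corner cases (no per-call allocation, strided
// data in typed arrays) but demands the user understand offsets and strides.
function vecAddInto(out, a, b, offset, stride, count) {
    for (var i = 0; i < count; ++i) {
        var k = offset + i * stride;
        out[k] = a[k] + b[k];
    }
    return out;
}
```

A beginner can stay at level (1) indefinitely, while code written against level (1)’s conventions still reads naturally at levels (2) and (3).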

I think it’s useful to acknowledge that there are different stages of user knowledge when designing an API. I’m not sure what the right answer is for balancing the learning curve against the power of the solution provided, but it’s an interesting topic.

Written by arthur

May 6th, 2012 at 12:15 pm

Posted in company

## What’s New

without comments

No new work on LxEngine in the last couple weeks, unfortunately. Aggravating not being able to finish off that last chunk of work on normal mapping!

I’ve been busy consulting on a web project. I do have to say that taking up a totally different kind of programming (say, small website programming versus lower-level desktop graphics) really does work well to break the tendency toward tunnel vision in solving problems. For example, I still think jQuery is a fantastic example of an API that violates many of the “rules” about a good API that I previously held; yet the API is great in many regards, so that’s a clear sign that those rules – like all rules – have exceptions. A change in perspective is almost always helpful.

Also, I have been reading Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide at Christophe Riccio’s recommendation. So far it’s been an excellent book, reminding me of the pure math behind all those math classes I use. I have to say the Google e-book version could have used a little more editing, though…there are definitely numerous typographical errors in there that could and should be fixed.

Written by arthur

April 27th, 2012 at 9:48 am

Posted in company


## State of the Engine

without comments

For various reasons, I’ve spent some time away from LxEngine coding over the last several weeks. I’ve decided to intentionally use this time away from the code to attempt to distinguish the forest from the trees and ramble about where LxEngine should be headed…

### Script + Engine Driven

I believe I’d like to turn lxengine.lib into lxengine.exe. Instead of a library to build applications on, it should be an executable that is driven by a configuration file, scripts, and native plug-ins. A new app should be writable without any native C++ code. This mindset fits better with the original goal of rapid development as a priority. I’m envisioning a manifest file supplying a list of shared and custom plug-ins (which are in turn a series of configuration files, script files, DLLs, and resource files) and a “main” script file to control execution. This seems like a better architecture for my goals.
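As a sketch of what such a manifest might look like – the keys, paths, and plug-in names here are all hypothetical, not an actual LxEngine format:

```json
{
    "name"      : "my-sample-app",
    "main"      : "scripts/main.js",
    "plugins"   : [
        "shared/physics",
        "shared/sound",
        "custom/gameplay"
    ],
    "resources" : [ "media/" ]
}
```

The idea being that lxengine.exe would read this file, load the listed plug-ins, and then hand control to the main script.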

### Smaller Core, More Plug-ins

Physics, sound, and many other utilities should be separate, dynamically loaded DLLs. Lots of small DLLs are less efficient at runtime, but again – the goal of the project is easy, intuitive development rather than bleeding-edge performance. I need to constantly remind myself of this. I need to break the non-core services out into a set of plug-ins per the architecture described above.

It’s likely that Javascript and JSON support will be built into the core, as these will be the primary platform upon which further extensions can be built.

### More but Smaller Samples

With a “no native code required” engine setup, I’d like to throw together a lot more minimalist examples. This fits better for testing purposes – not to mention it fits better with my erratic coding schedule.

### Concurrent Architecture

I’d also like the LxEngine architecture to support a few high-level constructs more “natively” than directly coding in C++ or Javascript would. I want LxEngine to be different enough from a collection of standard libraries that the advantages (or disadvantages) of it are immediately apparent. I want it to heavily assume a concurrent, event-driven model.

The first-class concepts I’d like to include are:

- States, both in the form of sequences (context-dependent behaviors) and stacks (inherited and overridden behaviors).
- Concurrency, i.e. in the form of good threading support.
- Task support, in the form of elegant coding mechanisms for large sets of small functional units.
- Coroutines, i.e. interruptible and resumable functions for spreading behaviors over a non-sequential time period (useful for animation and AI scripts).
- Events, i.e. the engine should be almost entirely event-driven rather than sequential to better fit the concurrency model.
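Of these, the coroutine concept is perhaps the easiest to illustrate. Here’s a hypothetical sketch using a Javascript generator – illustrative only, not an LxEngine API: a fade-out behavior whose local state persists across frames without any explicit state machine.

```javascript
// Hypothetical sketch: a fade-out behavior written as a coroutine. An engine
// would resume it once per frame; the loop counter survives between frames.
function* fadeOut(node, steps) {
    for (var i = steps; i > 0; --i) {
        node.alpha = i / steps;  // 1.0 down to 1/steps
        yield;                   // suspend until the next frame
    }
    node.alpha = 0;
}

// Simulate the engine's per-frame loop driving the coroutine.
var node = { alpha: 1.0 };
var co = fadeOut(node, 10);
var frames = 0;
while (!co.next().done) frames++;
```

The engine’s frame loop takes the place of the while loop here, calling next() once per frame until the behavior reports it is done.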

### Resource Manager

I’d follow the same “manifest file” approach with individual resources. A model, texture, script, etc. _cannot_ be loaded unless it provides a manifest file with licensing and other use / authorship metadata. I’d like to have part of the engine itself be a service to enumerate the proper credits and licensing information for any set of resources. Sure, this would be trivial to circumvent, but that’s not the goal; the goal is to make it easy for those developers who want to use open resources to automatically properly credit them.

### Still a Development Platform

I still want LxEngine to be both an engine itself and an easy-to-use development environment/platform. Even if Bullet (for example) is no longer linked into the core engine, I want to retain LxEngine’s current scripts for pulling down the source release and building Bullet locally as part of the environment. I still want to support the use case where the third-party “auto-build” and configuration can be used even if the LxEngine core code itself is not. Build and configuration is too much of a hassle on Windows. It shouldn’t be.

### NaCl and the Web

Not sure exactly where I need to head in this direction. After some fruitful time working with Javascript in a purely web-based environment, I see a lot of advantages to immediately web-deployable work…but at the same time, I have not forgotten that C++ has some serious advantages. I don’t have a conclusion on where the native code base and the web code base should merge, or whether the answer is Google’s NaCl – but it’s an inevitable, lingering question.

Written by arthur

February 4th, 2012 at 10:43 am

Posted in company

## Git Stash

without comments

This is likely obvious to experienced git users, but the git “stash” command was new to me. It seems very useful. I’ll give one example of its use and let the rest of the internet and the git documentation handle the rest:

### Stashing Local Changes to Simplify a Merge

Let’s start with two computers with the same project on them, both synced to the latest commit on the project.

I make changes on computer A, I commit, and I push those changes to a central repository like github.

I then make changes on computer B…when I realize I haven’t pulled down the latest changes to the file I’m working on. I attempt a “git pull”, but git aborts because of the changes in the file.

```
Updating fee7adb..22cc2f1
error: Your local changes to the following files would be overwritten by merge:
	lx0/dev/web/site/tools/rasterizer/index.html
Please, commit your changes or stash them before you can merge.
Aborting
```

### Stash: Save, Pull, and Apply

Here’s the trivial way to handle the above:

1. `git stash save 'wip changes'` (saves the current changes and resets the working tree to HEAD)
2. `git pull` (the pull succeeds this time, since the working tree is clean)
3. `git stash apply` (takes the stashed changes and auto-merges them back into the branch)

I suppose the auto-merge won’t work in all cases, but in many cases the above workflow is simple and works.
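For the curious, the whole scenario can be reproduced end-to-end in throwaway repositories; the file contents and repository names below are invented purely for the demo:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare origin.git

# Computer A: create the project and push the initial commit.
git clone -q origin.git a
cd a
git config user.email a@example.com && git config user.name "Computer A"
printf 'one\ntwo\nthree\nfour\nfive\nsix\nseven\n' > index.html
git add . && git commit -qm "initial"
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"

# Computer B: clone while both machines are in sync.
cd .. && git clone -q origin.git b

# Computer A commits and pushes a change to the last line...
cd a
printf 'one\ntwo\nthree\nfour\nfive\nsix\nseven-A\n' > index.html
git commit -qam "change from A" && git push -q origin "$branch"

# ...while Computer B has uncommitted local edits to the first line.
cd ../b
git config user.email b@example.com && git config user.name "Computer B"
printf 'one-B\ntwo\nthree\nfour\nfive\nsix\nseven\n' > index.html

# A plain pull aborts rather than overwrite the local changes.
if git pull -q origin "$branch" 2>/dev/null; then
    echo "unexpected: pull should have aborted"
else
    echo "pull aborted as expected"
fi

# Stash, pull, re-apply: both changes end up in the working tree.
git stash save -q 'wip changes'
git pull -q origin "$branch"
git stash apply -q
```

The final working tree in `b` contains both the change pulled from A (last line) and the re-applied local edit (first line).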

Written by arthur

January 17th, 2012 at 8:08 am

Posted in company


## Providing a Link to Generated Javascript

without comments

I’m working on LxLang, a GLSL-like language that can be translated to Javascript, C++, and GLSL. The motivation is to avoid maintaining the same basic, low-level graphics functions in three different languages. The translator itself is written in Javascript, so the LxLang-to-Javascript translation can occur live in a web page. However, I also wanted to provide a link to the translated source as a sort of “compiled” version, since I highly suspect anyone other than myself would be more willing to use the JS code than code written in some language they’ve never heard of.

Enter the Data URI, a way to provide a link to that generated source without ever having to save it to a file. I’ve used this scheme before with the extremely nice toDataURL method of the HTML5 CANVAS element as a way to generate downloadable PNGs from dynamically generated CANVAS graphics. So I should be able to do the same with generated Javascript…

### Data URI for Text

The solution I’m using is quite simple. Call Javascript’s escape on the source text, add a “data:text/plain,” prefix, and set that as the URL. Done.

```javascript
var dataUri = 'data:text/plain,' + escape(source);
$("#download-link").attr("href", dataUri);
```

The one downside that I haven’t been able to work past yet is that Chrome disables the “Save as…” option when you visit the resulting page. I don’t understand this. To download the file, you need to right-click and save the link rather than visit the page and save from there. Strange.

Written by arthur

December 31st, 2011 at 2:34 pm

Posted in company

## Accessing HTML5 CANVAS Pixels

without comments

Let’s fill an HTML5 CANVAS row-by-row with a checker pattern, setting the pixels directly rather than using fillRect():

```javascript
$(document).ready(function () {
    // Get the canvas element with id="canvas" using jQuery
    $("#canvas").each(function () {
        var ctx = this.getContext('2d');

        // Write the canvas one row at a time
        for (var y = 0; y < this.height; ++y) {
            var rowData = ctx.createImageData(this.width, 1);
            var pixels = rowData.data;
            for (var x = 0; x < this.width; ++x) {
                var c;
                if ((x + y) % 2 == 0)
                    c = [127, 127, 127];
                else
                    c = [255, 0, 0];

                pixels[x * 4 + 0] = c[0];  // red
                pixels[x * 4 + 1] = c[1];  // green
                pixels[x * 4 + 2] = c[2];  // blue
                pixels[x * 4 + 3] = 255;   // alpha (opaque)
            }
            ctx.putImageData(rowData, 0, y);
        }
    });
});
```

Written by arthur

December 18th, 2011 at 7:55 pm

Posted in company

## The Web Browser as a Build System

with one comment

This post can largely be qualified as daydreaming, or maybe just hoping aloud. In any case, here’s a look at some browser changes that I think could revolutionize the web.

The best part about this is that there’s absolutely nothing new being proposed below. It’s about taking existing ideas and fitting them together in a way that would work efficiently.

### A Standard Browser Bytecode/Virtual Machine

Create an open ISO standard bytecode. Create a sandboxed virtual machine to run the code.

Now the browser can run script via Javascript or via any language that can be compiled into that bytecode. Share the runtime environment so that chunks of Javascript and bytecode can be interwoven.

Wait, isn’t that what Java was? Key difference: integrate the DOM with the virtual machine. Honestly, that’s the key difference. You don’t end up with a disparate Java applet sitting in a hosted rectangle, essentially disconnected from the rest of the page, with a totally different looking UI. No, you get the simplicity of HTML5 for the UI because the virtual machine is seamlessly part of the page. Don’t reinvent what makes HTML so popular: use it instead.

Wait, emscripten already does this. Yes! In many ways it does, and it’s a fantastic idea. The big differences are that I’d love to see the LLVM interpreter as a standard part of the browser – plus, I imagine it’d be more efficient to embed LLVM’s interpreter in the browser’s native code than to interpret the bytecode via a virtual machine running through the Javascript interpreter. Let’s get native support for integer types running in the VM.

Wait, what about Google’s NaCl? To me, it seems like a lot of excellent technology not put in its optimal place. Where is the advantage of running native binaries directly rather than adopting a standard bytecode, since those native binaries need to be compiled through a custom toolchain anyway? Also, NaCl code can’t directly access the DOM and, in my mind, that’s critical.

### Base it on LLVM bytecode

Let’s make it based on a subset of LLVM’s bytecode. Update the version via committee as the various parts stabilize and are agreed to be “good”. Let the virtual machine evolve to handle the needs of future web programming, rather than bloating the Javascript language into a do-all language.

Perhaps Microsoft’s .NET CIL is sufficient. I don’t know if it is “open” enough or not, though from what I’ve read it does seem pretty solid from a technical perspective.

The question really is: are there enough quality, free tools for generating the bytecode to hit a critical mass such that more quality tools start to be developed around it?

### Using C++ on the Web Would Be Possible

This is the part I like, since I think in many ways, C++ is the best general purpose, production-code language.

Clang compiles C++ code into LLVM bytecode. If the browser can safely run sandboxed LLVM code, it can now run C++. Use clang on the developer machine to compile C++ code into bytecode, host the bytecode on the web, run the bytecode. (Ensuring simple Clang installers for all major platforms would certainly be a helpful side project here.)

Some run-time linking will be necessary on the part of the VM to hook that bytecode into the runtime environment (i.e. the DOM), but theoretically nothing overly burdensome has to happen.

(As a slight aside, don’t allow “unsafe” or “trusted” scripts to run outside the sandbox and access the machine resources directly. Just don’t. Ever. If hardware resources need to be exposed, let them be exposed as they are now by advancements in the web standards as WebGL, the HTML5 File API, etc. are being exposed. Use currently existing plug-in mechanisms outside the standard scripting environment if you really need to reach beyond the run-time environment. These will be available to scripts, as the DOM is, but not implemented as scripts. Keep the script environment itself clean and always sandboxed.)

### Using C++ as a Web Scripting Language Would Be Possible

Now, Clang is self-hosting, so compile Clang (or a subset of it) to LLVM bytecode. The web page environment can now have the moral equivalent of a “compileCpp(source)” function.

Use a trivial bit of Javascript to transform script tags of type="text/C++11" into LLVM bytecode – and voilà, C++11 can now be used as an interpreted scripting language.

Languages like CoffeeScript or even LESS already do a similar transformation from a new language to a browser standard; however, rather than targeting a language designed for direct programming, target a generalized bytecode intended from the start to be a target of other language transformations. (Things being used for what they were designed for tend to be more efficient.)

The potential uses for C++ on the web seem enormous to me: prototyping native code in a dynamic environment, a sandboxed development environment for new C++ modules, live unit tests of code bases, rapid previews of code modules and segments, live testing during code reviews, finally a semi-standard cross-platform UI solution for C++, shared code between web and desktop versions, a strict typing language to add robustness to critical production web code, etc.

The list of uses almost seem self-evident to me. (Perhaps I like C++ too much.)

### Browser Caching Enhancements

I’m not an expert on browsing caching capabilities, but the above assumes either uploading the bytecode or compiling the C++ code on the web page load.

Let’s ensure the browser has sufficient logic to write a compiled version of the C++ code out to a cache (on a per-section basis, i.e. like .o/.obj object files). Visit the page the first time and wait while it compiles; visit a second time and it loads the compiled code from the cache (and probably only does some minimal linking). I mean literally the first and second visits to the server: cache this on the server, not per-user.

Of course, some sort of “trusted user” mechanism would need to be in place if the compilation is occurring on the client side and the caching is being done on a shared server side. (Definitely a solvable problem – even if it means something like invoking a PHP script in a password-protected directory to write out the cached file; it would be more fluid, though, to have a standardized browser mechanism for this.)

Next, let’s make sure those caches work with CDNs (content delivery networks). There’s no reason to host the standard C++ library on my site and have that compile separately for every page that uses it. Instead, the script src tag references a library version hosted on a CDN where there will be a cached, compiled copy already available.

Also, once the idea of CDNs comes into play, the problem of “dependencies” becomes much simpler. Add a URL to the library on the CDN and never worry about compiling or installing any dependency. The URL would likely implicitly or explicitly contain the version number, compiler used, compile flags, etc. CDNs could even host the compilers they used to absolutely ensure no ABI (application binary interface) issues between their hosted code and locally hosted code.

### A Web-based Distributed Compiler/Make

Now simply think of the above in broader terms: cache on a module-by-module basis, add in dependencies between segments of code, mark everything with version numbers, cache on CDNs…and it’s not much of a stretch to think of how a full Make system can be hosted in the browser (i.e. via a script/bytecode, not directly in the browser source) and how CDNs start making the whole internet start to act like a large distributed compiler.

Because this is all distributed, it’s not really bloating what the web browser does. A web page might have only a couple dozen lines of custom code. The rest is pre-compiled code sitting out there on CDNs in the same way stock photography or massive databases are sitting out there hosted on high-end servers elsewhere.

With CDNs and caching, “compile-time” should become a non-issue for the end-user; download time is the only bottleneck – and heck, that’s already true for native apps today.

### A Bit of Client-Side Caching

Let’s also reserve a chunk of disk space for client-side caching of LLVM-to-native code. The browser can work with the VM’s JIT to cache optimized native code for the bottleneck sections of the bytecode. This should speed up the most frequently used apps to near natively-compiled speeds (see current C# versus C++ performance benchmarks if you want an estimate of “near”).

For safety, simplicity, and security, let’s keep any cached native code local: don’t host anything that runs outside the safety of a sandbox on a CDN. (Sure, NaCl could run the native code in a sandbox as well, but let’s keep things simple.)

### Other General Languages and Domain-Specific Languages

Now repeat this process for languages other than C++. Anything that can be compiled to LLVM bytecode can safely run on the web. Any language whose compiler can itself be compiled into LLVM bytecode can be run on the web.

Any language that has a compiler hosted somewhere on the web can be used (no worrying about whether the user has the right version of Python installed or not!). The barrier to entry for domain-specific languages drops notably.

```html
<script compiler='http://cdn.somesite123.org/hosted-compilers/python-2.7.0' src='mycode.py' />

<script compiler='http://cdn.somesite123.org/hosted-compilers/joelanguage-0.2.3' src='dslcode.txt' />
```

### A Quick Aside on Code Reuse

I’m convinced that code reuse is dependent primarily on how easy it is to reuse the code.

Having an explosion of different languages on the web, and of different language versions, will not undermine code reuse. Rather, if including large, production code modules were as simple as adding support for jQuery via a CDN – then heck, a lot more code reuse would happen. Forget the build dependencies, install locations, etc. that can add serious overhead to using a third-party library (especially on Windows): all of that disappears. Sure, the specific locations for a particular compiler, library, etc. might not be standardized, but because the same instance is shared across the web, that single instance itself becomes a de facto standard: it’s going to be there for anyone connected to the internet.

Standards make things easier to use: a standard virtual machine does not create a standard for authoring code; rather, it creates a standard for using existing code. I conjecture that increasing the number of languages available on the web in this manner would actually drastically increase code reuse on the web.

### These are Not New Ideas

Obviously, nothing above is new. I’ve mentioned Java, .NET, NaCl, etc. and I’m sure others can point out much earlier projects exemplifying these ideas.

I think one of the major strengths of Unix was something Windows still has failed to realize: include a compiler – that is always there – with the operating system and far more people will build tools and applications for it. (To be clear: I’m using “operating system” here to mean the full default operating environment, not just the kernel.)

The browser is becoming in many ways like an OS. What I’m envisioning is creating a standard “tool-chain” that’s always there in the browser. Look how successful simply having a standard interpreter in the form of Javascript has become. Now imagine if you took the good ideas behind Java, .NET, CDNs, Make systems, etc. and provided that instead of just Javascript.

Again, nothing here is new. However, the idea of connecting all these ideas into a simple, usable, coherent, and standard system seems so tantalizingly within reach at this point in the evolution of the web. It’s agitating. It’s exciting.

The web browser as a build system: I can hardly imagine how the web and computing world would change.

### Postscript: Interested?

Think this is a great idea and have the discretionary funds to hire me to work on this?

I’m only half joking.

Written by arthur

December 13th, 2011 at 9:50 am