## Enumerating Active Display Resolutions

This post is a quick C++ tidbit on Win32 for enumerating the displays on a system, along with the current resolution and pixel offset of each display. I’m using this to automatically lay out multiple windows within an application across different monitors (when possible).

The code is straightforward and comes directly from the MSDN documentation; however, this may provide a direct example for someone looking to accomplish something similar. (Note for understanding the code below: the “lxvar” data type is a variant data-type very similar to a Javascript var – i.e. an object capable of storing ints, floats, strings, arrays, and maps in a nested fashion.)

### The Code

lxvar
_lx_monitor_info ()
{
    //
    // Hide the boilerplate of the WIN32 callback in a hidden class
    //
    class _MonitorInfo
    {
    public:
        _MonitorInfo()
        {
            index = 0;
            ::EnumDisplayMonitors(NULL, NULL, enumMonitorsCallback, (LPARAM)this);
        }

        int     index;
        lxvar   data;

    protected:
        static
        BOOL CALLBACK
        enumMonitorsCallback (HMONITOR hMonitor, HDC hdcMonitor, LPRECT lprcMonitor, LPARAM dwData)
        {
            _MonitorInfo& monitorInfo = *reinterpret_cast<_MonitorInfo*>(dwData);

            auto& info = monitorInfo.data[monitorInfo.index];
            info["offset"][0] = lprcMonitor->left;
            info["offset"][1] = lprcMonitor->top;
            info["size"][0] = lprcMonitor->right - lprcMonitor->left;
            info["size"][1] = lprcMonitor->bottom - lprcMonitor->top;

            monitorInfo.index++;
            return TRUE;
        }
    };

    _MonitorInfo info;
    return info.data;
}
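
As a usage sketch, here is roughly how the result can be consumed to place windows on separate monitors. The size() and as<int>() accessors on lxvar are illustrative assumptions for this sketch, not necessarily the actual lxvar API:

void
_lx_layout_windows (std::vector<HWND>& windows)
{
    lxvar monitors = _lx_monitor_info();

    for (size_t i = 0; i < windows.size(); ++i)
    {
        // Place each window on its own monitor; fall back to the first
        // monitor once the window count exceeds the monitor count.
        // (size() and as<int>() are hypothetical lxvar accessors.)
        int   m    = (int(i) < monitors.size()) ? int(i) : 0;
        lxvar info = monitors[m];

        ::MoveWindow(windows[i],
                     info["offset"][0].as<int>(),
                     info["offset"][1].as<int>(),
                     info["size"][0].as<int>(),
                     info["size"][1].as<int>(),
                     TRUE);
    }
}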

Written by arthur

January 23rd, 2012 at 5:05 pm

Posted in lxengine

## The Web Browser as a Build System


This post can largely be qualified as daydreaming, or maybe just hoping aloud. In any case, here’s a look at some browser changes that I think could revolutionize the web.

The best part about this is that there’s absolutely nothing new being proposed below. It’s about taking existing ideas and fitting them together in a way that would work efficiently.

### A Standard Browser Bytecode/Virtual Machine

Create an open ISO standard bytecode. Create a sandboxed virtual machine to run the code.

Now the browser can run script via Javascript or via any language that can be compiled into that bytecode. Share the runtime environment so that chunks of Javascript and bytecode can be interwoven.

Wait, isn’t that what Java was? Key difference: integrate the DOM with the virtual machine. Honestly, that’s the key difference. You don’t end up with a disparate Java applet sitting in a hosted rectangle, essentially disconnected from the rest of the page, with a totally different looking UI. No, you get the simplicity of HTML5 for the UI because the virtual machine is seamlessly part of the page. Don’t reinvent what makes HTML so popular: use it instead.

Wait, emscripten already does this. Yes! In many ways it does, and it’s a fantastic idea. The big differences are that I’d love to see the LLVM interpreter as a standard part of the browser – plus, I imagine it’d be more efficient to embed LLVM’s interpreter into the browser native code than interpret the bytecode via a virtual machine running through the Javascript interpreter. Let’s get native support for integer types running in the VM.

Wait, what about Google’s NaCl? To me, it seems like a lot of excellent technology that’s not put in its optimal place. Where is the advantage in running native binaries directly rather than adopting a standard bytecode, since those native binaries need to be compiled through a custom toolchain anyway? Also, NaCl code can’t directly access the DOM and, in my mind, that’s critical.

### Base it on LLVM bytecode

Let’s make it based on a subset of LLVM’s bytecode. Update the version via committee as the various parts stabilize and are agreed to be “good”. Let the virtual machine evolve to handle the needs of future web programming, rather than bloating the Javascript language into a do-all language.

Perhaps Microsoft’s .NET CIL is sufficient. I don’t know if it is “open” enough or not, though from what I’ve read it does seem pretty solid from a technical perspective.

The question really is: are there enough quality, free tools for generating the bytecode to hit a critical mass such that more quality tools start to be developed around it?

### Using C++ on the Web Would Be Possible

This is the part I like, since I think in many ways, C++ is the best general purpose, production-code language.

Clang compiles C++ code into LLVM bytecode. If the browser can safely run sandboxed LLVM code, it can now run C++. Use clang on the developer machine to compile C++ code into bytecode, host the bytecode on the web, run the bytecode. (Ensuring simple Clang installers for all major platforms would certainly be a helpful side project here.)

Some run-time linking will be necessary on the part of the VM to hook that bytecode into the runtime environment (i.e. the DOM), but theoretically nothing overly burdensome has to happen.

(As a slight aside, don’t allow “unsafe” or “trusted” scripts to run outside the sandbox and access the machine resources directly. Just don’t. Ever. If hardware resources need to be exposed, let them be exposed the way they are now, through advancements in the web standards such as WebGL, the HTML5 File API, etc. Use currently existing plug-in mechanisms outside the standard scripting environment if you really need to reach beyond the run-time environment. These will be available to scripts, as the DOM is, but not implemented as scripts. Keep the script environment itself clean and always sandboxed.)

### Using C++ as a Web Scripting Language Would Be Possible

Now, Clang is self-hosting, so compile Clang (or a subset of it) to LLVM bytecode. The web page environment can now have the moral equivalent of a “compileCpp(source)” function.

Use a trivial bit of Javascript to transform script tags of type="text/C++11" into LLVM bytecode – and voila, C++11 can now be used as an interpreted scripting language.

Languages like CoffeeScript or even LESS already do a similar transformation from a new language to a browser standard; however, rather than targeting a language designed for direct programming, target a generalized bytecode intended from the start to be a target of other language transformations. (Things being used for what they were designed for tend to be more efficient.)

The potential uses for C++ on the web seem enormous to me: prototyping native code in a dynamic environment, a sandboxed development environment for new C++ modules, live unit tests of code bases, rapid previews of code modules and segments, live testing during code reviews, finally a semi-standard cross-platform UI solution for C++, shared code between web and desktop versions, a strict typing language to add robustness to critical production web code, etc.

The list of uses almost seems self-evident to me. (Perhaps I like C++ too much.)

### Browser Caching Enhancements

I’m not an expert on browser caching capabilities, but the above assumes either uploading the bytecode or compiling the C++ code when the web page loads.

Let’s ensure the browser has sufficient logic to write out a compiled version of the C++ code to a cache (on a per-section basis, i.e. like .o/.obj object files). Visit the page the first time and wait while it compiles; visit a second time and it loads the compiled code from the cache (and probably only does some minimal linking). I mean literally the first visit to the server by anyone and the second visit by anyone: cache this on the server, not per-user.

Of course, some sort of “trusted user” mechanism would need to be in place if the compilation is occurring on the client side and the caching is being done on a shared server. (Definitely a solvable problem – even if it means something like invoking a PHP script in a password-protected directory to write out the cached file; it would be more fluid to have a standardized browser mechanism for this.)

Next, let’s make sure those caches work with CDNs (content delivery networks). There’s no reason to host the standard C++ library on my site and have that compile separately for every page that uses it. Instead, the script src tag references a library version hosted on a CDN where there will be a cached, compiled copy already available.

Also, once the idea of CDNs comes into place, the problem of “dependencies” becomes much simpler. Add a URL to the library on the CDN: never worry about compiling or installing any dependency. The URL would likely implicitly or explicitly contain the version number, compiler used, compile flags, etc. CDNs could even host the compilers they used to absolutely ensure no ABI (application binary interface) issues between their hosted code and locally hosted code.

### A Web-based Distributed Compiler/Make

Now simply think of the above in broader terms: cache on a module-by-module basis, add in dependencies between segments of code, mark everything with version numbers, cache on CDNs…and it’s not much of a stretch to think of how a full Make system can be hosted in the browser (i.e. via a script/bytecode, not directly in the browser source) and how CDNs start making the whole internet act like a large distributed compiler.

Because this is all distributed, it’s not really bloating what the web browser does. A web page might have only a couple dozen lines of custom code. The rest is pre-compiled code sitting out there on CDNs in the same way stock photography or massive databases are sitting out there hosted on high-end servers elsewhere.

With CDNs and caching, “compile-time” should become a non-issue for the end-user; download time is the only bottleneck – and heck, that’s already true for native apps today.

### A Bit of Client-Side Caching

Let’s also reserve a chunk of disk space for client-side caching of LLVM-to-native code. The browser can work with the VM’s JIT to cache optimized native code for the bottleneck sections of the bytecode. This should speed up the most frequently used apps to near natively-compiled code speeds (see current C# versus C++ performance benchmarks if you want an estimate of “near”).

For safety, simplicity, and security, let’s keep any cached native code local: don’t host anything that runs outside the safety of a sandbox on a CDN. (Sure, NaCl could run the native code in a sandbox as well, but let’s keep things simple.)

### Other General Languages and Domain-Specific Languages

Now repeat this process for languages other than C++. Anything that can be compiled to LLVM bytecode can safely run on the web. Any language whose compiler can itself be compiled into LLVM bytecode can be run on the web.

Any language that has a compiler hosted somewhere on the web can be used (no worrying about whether the user has the right version of Python installed!). The barrier to entry for domain-specific languages drops notably.

<script compiler='http://cdn.somesite123.org/hosted-compilers/python-2.7.0' src='mycode.py' />

<script compiler='http://cdn.somesite123.org/hosted-compilers/joelanguage-0.2.3' src='dslcode.txt' />

### A Quick Aside on Code Reuse

I’m convinced that code reuse is dependent primarily on how easy it is to reuse the code.

Having an explosion of different languages on the web and of different language versions will not undermine code reuse. Rather, if including large, production code modules were as simple as adding support for JQuery via a CDN – then heck, a lot more code reuse would happen. Forget the build dependencies, install locations, etc. that can add serious overhead to using a third-party library (especially on Windows). All this disappears. Sure, the specific locations for a particular compiler, library, etc. might not be standardized, but because the same instance is shared across the web, that sharing itself makes the single instance a standard: it’s going to be there for anyone connected to the internet.

Standards make things easier to use: a standard virtual machine does not create a standard for authoring code but rather creates a standard for using existing code. I conjecture that increasing the number of languages available on the web in this manner would actually drastically increase code reuse on the web.

### These are Not New Ideas

Obviously, nothing above is new. I’ve mentioned Java, .NET, NaCl, etc. and I’m sure others can point out much earlier projects exemplifying these ideas.

I think one of the major strengths of Unix was something Windows still has failed to realize: include a compiler – that is always there – with the operating system and far more people will build tools and applications for it. (To be clear: I’m using “operating system” here to mean the full default operating environment, not just the kernel.)

The browser is becoming in many ways like an OS. What I’m envisioning is creating a standard “tool-chain” that’s always there in the browser. Look how successful simply having a standard interpreter in the form of Javascript has become. Now imagine if you took the good ideas behind Java, .NET, CDNs, Make systems, etc. and provided that instead of just Javascript.

Again, nothing here is new. However, the idea of connecting all these ideas into a simple, usable, coherent, and standard system seems so tantalizingly within reach at this point in the evolution of the web. It’s agitating. It’s exciting.

The web browser as a build system: I can hardly imagine how the web and computing world would change.

### Postscript: Interested?

Think this is a great idea and have the discretionary funds to hire me to work on this?

I’m only half joking.

Written by arthur

December 13th, 2011 at 9:50 am

## Dependent Base Classes

Originally, this post was to be about the wonderful new C++ swizzle operators I’ve added to GLGeom, but since a gcc compilation issue with the swizzle operations is delaying that, I thought I’d post about the error instead.

Below is a chunk of code that does NOT compile on gcc 4.6.1 but did work fine on Visual Studio 2010 (cl.exe 16.00.30319.01).

### The Code

I’ll show the code first in case you want to try to track down the error on your own…

template <typename Type, typename Class, int N, int E0, int E1, int E2, int E3>
struct swizzle_base
{
    typedef swizzle_base<Type,Class,N,E0,E1,E2,E3> base;

    swizzle_base& operator= (const Class& that)
    {
        static const int offset_dst[4] = { E0, E1, E2, E3 };

        Type t[N];
        for (int i = 0; i < N; ++i)
            t[i] = that[i];
        for (int i = 0; i < N; ++i)
            e[offset_dst[i]] = t[i];

        return *this;
    }

    swizzle_base& operator= (const Type& t)
    {
        static const int offset_dst[4] = { E0, E1, E2, E3 };

        for (int i = 0; i < N; ++i)
            e[offset_dst[i]] = t;

        return *this;
    }

    Type    e[N];
};

//===========================================================================//

template <typename T, typename P, int E0, int E1>
struct swizzle2 : public swizzle_base<T,P,2,E0,E1,0,0>
{
    using base::operator=;
    operator P () { return P(e[E0], e[E1]); }
};

See the error? There’s no way I would have spotted it on my own.

Spoilers ahead if you’re trying to work out the error on your own…

The two complaints are as follows:

• error: ‘base’ has not been declared
• error: ‘e’ was not declared in this scope

It seems that both “base” and “e” are effectively not known, even though they are declared in the parent class. The parent class, however, is a template, so that introduces a huge set of potential reasons why it’s not so simple for the compiler to figure out.

### The Fix

Two minutes on the FreeNode #c++ IRC channel got me a fix to the issue:

• Type out the base class name fully rather than using the base typedef
• Explicitly use “this->e” rather than “e” to inform GCC this is a member variable

template <typename T, typename P, int E0, int E1>
struct swizzle2 : public swizzle_base<T,P,2,E0,E1,0,0>
{
    using swizzle_base<T,P,2,E0,E1,0,0>::operator=;
    operator P () { return P(this->e[E0], this->e[E1]); }
};

### But Why?

Here’s the sad anticlimax: I’m not exactly sure.

Internet searches for “dependent base classes” turn up some interesting threads like this one on stackoverflow, but I still could not find a concrete answer for why a “conforming compiler” needs the more verbose and explicit information. To flip the question around: what’s an example of where VS 2010 will generate incorrect code or parse something incorrectly because it is able to determine that “base” and “e” are in the dependent base class?

It seems to me – and I’m not a compiler writer – that the compiler could figure this out without causing any issue elsewhere (I’m likely wrong – as I said: not a compiler writer…or, for that matter, a C++ language spec stickler).

Let’s say the compiler keeps the templates “fully unevaluated” until the time of instantiation. If the compiler can parse that swizzle2 is a template class deriving from swizzle_base before it parses any of the members of swizzle2, then it can fully evaluate swizzle_base with real template parameters and realize the concrete instance has a member “e” and a typedef “base”. It then moves on to the derived class swizzle2’s internals and it knows about “e” and “base”: thus, no problem. I feel like this is somewhat self-evident: if you know and substitute in the concrete parameters to swizzle2 then the above code is valid C++.

Therefore, I have to assume the compiler is not allowed to simply “skip over” the class internals before evaluating the parent class template. I’ll assume the compiler stores it in some partially evaluated form. It knows swizzle2 has a base class but – what with partial specialization, etc. – it probably can assume next to nothing about the parent class. Therefore, the derived class needs to explicitly, or at least somewhat explicitly, let the compiler know in the derived class about anything it will be using from the parent class. It’s not willing to simply assume an unknown token is something coming from the parent class.

As an example, suppose swizzle_base did have a partial specialization and that did not define any member called “e”. But there was a global variable named “e”. That means in one instantiation e refers to a member and in another it refers to a global. Confusing certainly, but I can’t think of why that’d necessarily be invalid. Now when you throw the “this->” in front of the “e” that case is no longer possible. It has to be coming from the object itself and not a global.
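
A contrived sketch of that scenario (hypothetical names, unrelated to the swizzle code) makes the rule concrete:

int e = 42;                        // a global named "e"

template <typename T>
struct base_t { int e; };          // the primary template has a member "e"

template <>
struct base_t<float> { };          // a specialization with no member "e"

template <typename T>
struct derived_t : base_t<T>
{
    // Unqualified "e" is a non-dependent name: it is looked up when the
    // template is defined and binds to the global, regardless of T.
    int from_global () { return e; }

    // "this->e" is a dependent name: lookup is deferred to instantiation,
    // so it binds to the base member (and would fail to compile for
    // derived_t<float>, whose base has no such member).
    int from_member () { return this->e; }
};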

That sort of makes sense to me.

But the question still irks me: would a compiler that waits until the point of instantiation to evaluate all of a template be a C++11-conformant compiler? And if efficiency is a concern, then why not make two internal pools in the compiler: templates that can be partially evaluated in advance and those that “seem to need more information” that get evaluated at the point of instantiation?

I truly like C++ as a language, but man, little syntactical messes like this really do their best to make sure I never refer to it as an elegant language.

By the way, anyone have any more informed knowledge on why GCC can’t compile the original code and remain a “conformant” C++ compiler?

Written by arthur

September 9th, 2011 at 6:06 pm

Posted in lxengine

## Improving the Project Structure

As mentioned previously, one objective of the GLGeom project was to practice a more organized and complete project structure.  The structuring has been working well enough that I’ve started reorganizing the LxEngine code to do the same.

Note that the GLGeom project structure is largely based on how the GLM project is organized, so thanks to the developers of the GLM project.

### Namespacing

A fair bit of reorganization is happening, but I’ll point out one here: namespacing.

The GLGeom code and now the LxEngine code use a namespacing structure as follows:

namespace lx0
{
    namespace MODULE_TYPE
    {
        namespace MODULE_ns
        {
            namespace detail
            {
            }
        }
    }
    using namespace lx0::MODULE_TYPE::MODULE_ns;
}

Here is the rationale:

• Everything is put in the lx0 namespace to avoid polluting the global namespace (good common practice)
• Every module is organized into one of several “module types”
• This is borrowed from GLM, which in turn borrows it from the OpenGL extensions convention
• For example, GLGeom defines a “core” module type, which contains all the stable code: if you’re using GLGeom at all, you’re likely using these classes and functions. These all get pulled in if you include “glgeom/glgeom.hpp”. An “extension” module type is for any module that is stable but probably not used as commonly, or that doesn’t fit quite as symmetrically into the design: these are not included by default when glgeom.hpp is included and must be explicitly included via an #include “glgeom/ext/module_name.hpp” statement. Lastly, GLGeom defines a “prototype” module type for all modules that are works-in-progress or experimental in nature (and whose API is likely to change frequently). These too require explicit inclusion.
• LxEngine defines core, engine, subsystem, util, and prototype module types.  The core and engine module types are pulled into the main lx0 namespace by default, with the lower-level functionality existing in core.
• The module itself is given its own namespace. This allows “everything” to still sit in the lx0 namespace while in certain contexts a particular module (and only that module) can be pulled into the global namespace. This may be useful when writing a particular chunk of code using that module extensively. For example, the serialization code in a project might have a “using namespace lx0::subsystem::javascript_ns” (see the sketch after this list). The “_ns” suffix is used to avoid name collisions.
• A detail namespace is used to hide implementation details that need to be shared across a particular module but are not intended for use outside that module (i.e. another good common practice).
• Lastly, on inclusion, the symbols in the module namespace itself are pulled into the outer namespace. This puts everything public in that single namespace so that the client code isn’t a mess of long namespacing to access objects. If there were a collision, the full namespacing does still exist to remove all ambiguity.
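
As a concrete sketch, here is roughly what the scheme looks like for the javascript subsystem module mentioned above (the class name is purely illustrative):

namespace lx0
{
    namespace subsystem
    {
        namespace javascript_ns
        {
            namespace detail
            {
                // helpers shared within the module but not exported
            }

            class JavascriptDoc { /* ... */ };
        }
    }

    // Pull the module's public symbols up into lx0 for convenient access
    using namespace lx0::subsystem::javascript_ns;
}

// Client code can then use the short name...
//     lx0::JavascriptDoc doc;
// ...or the fully qualified name if a collision ever needs to be resolved:
//     lx0::subsystem::javascript_ns::JavascriptDoc doc;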

### C++0x on GCC

I’m unfortunately a Windows developer, but wanted to test GLGeom on Linux.  Only a couple of tweaks were necessary to get the unit tests building with a gcc 4.7 svn build on Ubuntu 10.10:

• Add the -std=c++0x flag to the compiler definitions in the CMakeLists.txt
• Include a few headers explicitly (that apparently are implicitly included by VS 2010 via inter-dependencies)
• Flip around an “a.distance < b.distance” comparison to “b.distance > a.distance”: why? I hate to say it, but it might be a compiler bug. That chunk of code was in a template using a lambda function, so maybe that’s a case that confused gcc? Why else would one work and not the other?

…however, my lack of Linux expertise then kicked in: glgeom_unittest built properly, but wouldn’t run. The make install of gcc 4.7 properly made “gcc” point to the new compiler, but the built executable could not find the right version of libstdc++.so. Perhaps I need to read a bit more about how shared libraries work on Ubuntu and Linux in general.

Update: I installed Arch Linux on VirtualBox, which comes with gcc 4.6. Compilation succeeded. However, the GLGeom unit tests failed. Why? At least one problem is GLGeom’s use of classes with constructors in unions (a proposed feature of C++0x, specifically called an “unrestricted union”), which I suspect is cascading into all the more general failures. I need to spend some more time with gcc to determine why there’s a failure. For example, glgeom::point3f contains a union of a struct of three named floats and glm::vec3 (that works correctly); however, a glgeom::ray3f then contains a member point3f and when the point argument to the ray3f constructor is passed as an initializer to the point3f member, the values don’t get written and the glm::vec3 constructor seems to zero out the point instead. Back to the debugger.

Update 2: The fix was to add an explicit copy constructor to the classes using unrestricted unions.  This seems to make sense, though I’d like to feel more confident in understanding specifically whether Visual Studio or gcc’s C++0x implementation is more correct in this regard (i.e. VS not requiring the explicit copy constructor versus gcc requiring it).  I’m tending to think gcc is right: the default point copy constructor should be doing a field-by-field shallow copy, but the unrestricted union proposal says default constructors are implicitly deleted for unions – therefore, the field represented by that union should not be copied by point’s default copy constructor…right?
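
For reference, a boiled-down sketch of the fix, with simplified stand-ins for glm::vec3 and glgeom::point3f:

struct vec3_like                       // stand-in for glm::vec3: has
{                                      // user-provided constructors
    vec3_like() : x(0), y(0), z(0) {}
    vec3_like(const vec3_like& v) : x(v.x), y(v.y), z(v.z) {}
    float x, y, z;
};

struct point3f_like
{
    // Because the anonymous union has a member with non-trivial special
    // members, the union's (and therefore this class's) default and copy
    // constructors are implicitly deleted - so both are written explicitly.
    point3f_like() : vec() {}
    point3f_like(const point3f_like& that) : vec(that.vec) {}

    union
    {
        struct { float x, y, z; } xyz;  // three named floats
        vec3_like vec;                  // vector type sharing the storage
    };
};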

In any case, GLGeom now builds correctly with gcc 4.6 and all unit tests pass!

Written by arthur

May 16th, 2011 at 12:26 pm

Posted in lxengine

## Defining Custom Control Structures with C++ Macros

Disclaimer: I’m not promoting the below as a Good Idea™ in any manner…just showing what can be done in C++, if you so choose. Being “clever” like this in code is usually a bad thing that will confuse other coders more than it will help them! But it’s always nice to know about the alternatives that you didn’t choose when you make a decision.

### Problem

I want the viewer window in my ray tracer sample to refresh the image at most 50 times per second (i.e. at most once every 20 milliseconds). The viewer window’s update() method is called on every iteration of the main Engine loop. The time between calls to update() is not fixed and may be either greater or less than 20 milliseconds.

In general this means that I want to have a block of code that executes at most once every N milliseconds, no matter how often that block of code is hit.

### Simple Solution

Simple, clean, nothing-fancy understandable solution:

// Called once per iteration of the main Engine loop
void Viewer::update()
{
    static unsigned int last = 0;
    unsigned int now = Engine::current_time_ms();
    if (now > last + 20)
    {
        this->redraw_image();
        last = Engine::current_time_ms();
    }
}

Pretty simple. No real explanation needed, right?

(Ok, fine – this actually isn’t an ideal solution since a “static” variable is being used. In reality, if the ray tracer was ever going to support multiple viewers, the “last” variable should be a member of the Viewer class – not a “static”.)

So the question is, how can we encapsulate the pattern employed here to be reusable?

### Being “Clever”…

(Note: as mentioned in the disclaimer at the top, being clever is often a bad idea!)

The clever method entails a class, a macro, and a lambda function:

void Viewer::update ()
{
    timed_gate_block (20, {
        this->redraw_image();
    });
}

With the accompanying implementation being:

// std::function<> lives in <functional>
#include <functional>

class timed_gate_block_imp
{
public:
    timed_gate_block_imp (unsigned int delta)
        : mDelta    (delta)
        , mTrigger  (0)
    {
    }

    void operator() (std::function<void()> f)
    {
        auto now = Engine::current_time_ms();
        if (now > mTrigger)
        {
            f();

            // Re-sample the clock so the next trigger is measured from the
            // finish of the block, matching the simple solution above
            mTrigger = Engine::current_time_ms() + mDelta;
        }
    }

protected:
    unsigned int mTrigger;
    unsigned int mDelta;
};

// Two levels of macro concatenation are needed so that __LINE__ expands to
// the actual line number (pasting "## __LINE__" directly would produce the
// literal identifier _timed_block_inst__LINE__)
#define timed_gate_concat2(a,b) a ## b
#define timed_gate_concat(a,b)  timed_gate_concat2(a,b)

#define timed_gate_block(d,e) \
    static timed_gate_block_imp timed_gate_concat(_timed_block_inst, __LINE__) (d); \
    timed_gate_concat(_timed_block_inst, __LINE__) ([&]() e )

### Why Bother With That Lambda std::function<>?

One alternative would be to encapsulate the timed block pattern like this:

void Viewer::update ()
{
    static timed_gate gate(20);
    if (gate()) {
        this->redraw_image();
    }
}

And the implementation:

class timed_gate
{
public:
    timed_gate (unsigned int delta)
        : mDelta    (delta)
        , mTrigger  (0)
    {
    }

    bool operator() ()
    {
        auto now = Engine::current_time_ms();
        if (now > mTrigger)
        {
            mTrigger = now + mDelta;
            return true;
        }
        else
            return false;
    }

protected:
    unsigned int mTrigger;
    unsigned int mDelta;
};

Two notable differences here:

First, a macro is not used. This is good because it is clearer: nothing “unusual” is happening behind the programmer’s back. This is bad, though, because it forces the gate to be given a name, which then has to be immediately repeated; it’s more verbose, which can be a pain for a simple control structure (in this case it’s probably warranted, though, given how often a chunk of code like this would be used).

Second, the chunk of code is not passed as a std::function<> but rather the gate simply returns a boolean. This is good, right? No need to be passing around function wrappers (which aren’t entirely lightweight). But, there’s a subtle difference: because the gate does all its work before returning, mTrigger is updated before the code chunk actually runs. Therefore, in the macro version, the trigger will be hit no sooner than 20 milliseconds after the finish of the last trigger – but in this case the trigger will be hit no sooner than 20 milliseconds after the start of the last trigger.

### Conclusion

No deep conclusion to this one. I wanted to share the code as food for thought.

In reality, the need for a timed block of code like this is rare enough (at least in my use) that I think it warrants recoding it every time rather than wrapping it in a potentially opaque user control structure. If C++ natively supported introducing user control structures (i.e. if other developers wouldn’t be shocked to see a custom one), then maybe I’d consider it. As is, it’s simpler to write it out every time.

Written by arthur

May 5th, 2011 at 6:29 pm

Posted in company

## Unnecessary Complexity in the Software Development Tool-Chain

C++ is still predominantly the best, or at least most used, language for performance- or graphics-intensive applications aimed at the consumer. I consider myself a C++ programmer and very much admire the practical and flexible language design. Yet I am beginning to wonder if the software development community could hugely benefit from a new primary language. But if I think highly of the language, why do I suggest a new one might be necessary?

It’s the tool chain and the process more than the language itself.

First of all, no matter how good a language is, it’s useless if the tools are not available to use that language to accomplish your goals. (I think this statement can safely be made without explicit justification.)

Now consider if your goal is easy, rapid, globally collaborative development.  Are the best tools really there to suit this problem?  Yes, there are tools, but do they fit that goal as well as they could?

What if your goal is to eliminate as much build engineering overhead as possible before software engineering on a particular problem can begin?

Old isn’t necessarily bad, but the current tool chain for C++ development is effectively very similar to that of the early days of Unix development. That’s fine, but does that tool chain really fit the ideal process of a global development community working to continually contribute to open source software? Consider how those tools would do under a serious usability study of global collaborative development.

To save typing, I’ll assume the reader is at least a bit familiar with web development (e.g. Javascript) and C++ development. Rather than write out a formal argument for my point, I’ll simply say this: consider how much overhead it takes a novice to throw together a JQuery-based webpage versus the overhead in building a large open source project. Or, as another example, head to the discussion forums on any major third-party C++ SDK and count the percentage of posts about missing headers, linker errors, dependencies, library versions, or other configuration questions that have absolutely nothing to do with the purpose of the SDK itself – and then compare that to a Javascript-based library. I know this is not a fair comparison, but bear with me for a moment.

Now, just for fun, imagine that building that large open source project were as easy as viewing a webpage. I’m not talking about development on that large project; let’s simply focus on the process of building the project. Seriously consider it. Overlook the size / initial download problem for a moment and just compare the steps involved in those two tasks. A webpage usually “just works” if you have a modern browser and the web page was written by a decent developer. It’s a zero-step process: merely providing the name of the webpage is viewing it. On the other hand, a large open source project build – well, the steps to do that could be almost anything…it very rarely “just works” and instead often requires learning new skills with every project.

Maybe I’m wrong about it, but imagine the influx of casual contributions to open source projects if large application builds were effectively zero-overhead to get up and running from source…but I’m getting ahead of myself. For now, focus simply on the notion of building (rather than developing) a large open source project as being a zero-step process.

-

The next question: is there any technical reason why it couldn’t just work in the majority of cases (or at least to the same percentage as a well developed web site works)?

I don’t think there is.

-

I’m not going to pretend that there haven’t been lots of attempts to solve the broadly defined “write once, run anywhere” problem. Java, of course. .NET jumps to mind as a bit closer to the “build anywhere” notion discussed here given its multi-language support. I think .NET is conceptually fairly right on, but in practice it hasn’t happened. Full virtual machines and runtime environments aside, tools like CMake exist to alleviate the build problem – but how effective are they? CMake itself is yet another “overhead” item that needs to be learned which likely has nothing to do with the actual problem the developer is trying to solve. It may be better, but it’s still another build engineering hurdle requiring knowledge from the user.

Or to invert the issue completely, consider how HTML5 and technologies like WebGL are in a way approaching this fundamental “building C++ applications is hard” problem: they aim to make it easier to bring application graphics development to the simplicity of browser-based technology development rather than bring the simplicity of browser-based technology development to application graphics development. (I do realize the last statement has quite a few embedded assumptions, but I generally think it’s a true statement – or at least true enough to convey the crux of the point.) Better yet, isn’t the development of Chrome OS implying the same general trend of pushing traditional low-level development closer to the convenience of browser-based apps rather than vice-versa?

Ok, so one more way of looking at this…

How difficult would it be to allow browsers to support compiled languages? Technically, not very. Chrome already compiles Javascript into native machine code – and caches the Javascript files associated with pages. What technical difference is there really if that source file is C++? Yes, C++ pointers, etc. would throw some wrenches into security and verifiability, but otherwise it’s just a different compiler component implementation. Why not extend that and embed the make system into the ‘browser’? And why not embed the actual source control system implementations (at least the ‘get’ aspect) into the browser? The user would end up with a ‘browser’ that effectively views project files, can pull the relevant source to local caches, compile it, and run the application. There’s still the first-run (i.e. priming the cache) problem – but that’s solvable too (how about pre-compiled caches of popular configurations hosted on the application’s website?). Why not hide the entire build tool chain in an application ‘browser’ of sorts?

Java essentially attempted (and still attempts) to solve much of this.  Java certainly didn’t eliminate the problem, but that fact does not undo the theory behind the idea.   As noted previously, in some manners, Chrome OS effectively is hiding the application “build” in the browser.   But what about approaching the problem by making the existing build processes more like a browser rather than making more apps work from a browser?

-

Ok, so what of this problem?  Why is this more than just a rant that build engineering in C++ should be easier and just work like a browser?

I suspect that a very real source of the problem is that many experienced developers dismiss the “tool chain” issue simply because they are already so used to it. It’s not a lack of technical knowledge out there.  It’s a lack of motivation.

Sure – learning CMake may teach you about multi-platform programming and all other sorts of useful information – but to a novice programmer who wants to experiment with 3D graphics, does it really make sense that he needs at least some degree of expertise in build tools first?  No, it doesn’t.  Software should be about tackling the problem you are interested in primarily, and then secondarily be about understanding the periphery of the problem so you can improve your initial solution (i.e. in this case, learning more about multi-platform programming via CMake after the novice programmer has his experiment working on his own machine).

Encapsulating the compiler tool-chain in a browser doesn’t eliminate or ‘solve’ build engineering forever; instead, it would need to evolve via a standard, much as HTML has. What would be needed is a standard for project builds to deliver content flexibly, but reliably. The issue isn’t so much with C++; it’s with the build tool-chain that’s become associated with the delivery of C++ content.

-

I generally try to avoid complaining about a problem without proposing a solution: the first step toward the solution might be for serious C++ developers to no longer be content with build engineering skills being a prerequisite to software engineering.

My gut feeling remains stuck on what seems obvious: on modern, powerful workstations, there’s no technical reason that the build process for large applications needs to be so complex.  And if there’s no technical reason, what is really preventing the solution?

Written by arthur

December 8th, 2010 at 12:44 pm

Posted in tools

## Overloading by Return Value

In C++, a named function or method can be overloaded with different input arguments. However, it cannot be overloaded solely by having a different return value.

Yet occasionally, this seems like it might be useful functionality.  This post looks at a way to use C++ to emulate overloading a function by return value.

### A Disclaimer

I would refer to this bit of code as a bit “clever.” I tend to dislike clever code. Clever code, almost by definition, is not obvious. In my experience, the ability to write code that is efficient but still simple and clear is the definition of a high quality programmer.

That said, the post will proceed with a clever way to overload a method by return value.
### Example Scenario

For the example scenario we’ll use a core class from the LxEngine: variant. The variant class is effectively a wrapper on JSON-like data. A variant object can hold an int, a float, a string, an array, a map, or be undefined.

The objective is to be able to write code like this that just works:

variant myvec3variant = json_parse("[ 1, 2, 3 ]");
variant myvec4variant = json_parse("[ 1, 2, 3, 1 ]");

vec3f myvec3 = convert(myvec3variant);
vec4f myvec4 = convert(myvec4variant);

In other words, I want the compiler to automatically recognize the two calls to convert are requesting two different kinds of conversions.  I want the compiler to see the l-value of the first call is a vec3f and the l-value of the second call is a vec4f, and as a result call different conversion routines accordingly.

Per the disclaimer already mentioned, let’s ignore for the moment whether this is a good idea or not and simply look at the C++ code necessary to make the above code work.

### Use an Intermediate Type

The solution to get this to work is actually quite simple.

1. Define the function to overload with different return values such that it returns an intermediate class object.
2. This class will be a wrapper on the input argument that has a templatized implicit conversion operator.
3. The conversion operator will then call an overloaded stand-alone function to do the conversion.

class variant_implicit_cast
{
public:
    inline variant_implicit_cast (variant& v) : m_variant(v) {}

    template <typename T>
    inline operator T ()
    {
        T t;
        detail::convert_from_variant(t, m_variant);
        return t;
    }
protected:
    variant& m_variant;
};

inline variant_implicit_cast
convert (variant& v)
{
    return variant_implicit_cast(v);
}

Specific conversions can be defined without changing the core definition in the variant header file. The following example conversion could be declared in an entirely different header: there’s no dependency between this base convert function and this specific implementation.

namespace detail {
    void convert_from_variant (vec3f& s, variant& v);
}
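
For illustration, a matching definition might look something like the following (the integer indexing and float conversion on variant are assumed, illustrative bits of API, not necessarily the real one):

namespace detail {
    void convert_from_variant (vec3f& s, variant& v)
    {
        // Assumes variant supports operator[] and a conversion to float
        s = vec3f( float(v[0]), float(v[1]), float(v[2]) );
    }
}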

### How It Works

This works because the compiler will see that the assignment statement is an attempt to implicitly convert from the intermediate class to the l-value class type. This in turn invokes the templatized conversion operator. The templatized conversion operator will then use its template type to invoke the correct overload of the conversion worker function.

These methods are all short, inlined methods. In a release build they should (ideally) not incur any overhead.

### The Result?

A single named function that appears to be overload-able by return type.

The single name is a real boon since the client programmer simply knows that calling convert is always supposed to work.  It keeps the code concise and straightforward.

Furthermore, it avoids the need to modify the core class (the one being converted from) at all. Defining implicit or explicit conversion operators/methods on the core class would work correctly, but then it would create an unnecessary dependency between two potentially unrelated classes. In this model, the conversion worker routines (in the detail namespace) can be defined externally and independently of the core types.

Written by arthur

May 25th, 2010 at 5:42 pm