Where C++ Templates Fall Down

There are many articles out there comparing C++ templates and Java generics (Google around and you’ll quickly find loads of them). Almost universally they view the erasure of Java generics as a fail-point. Even the ones written by “Java Gurus” seem to favour C++ templates as being more flexible. I recently stumbled upon a simple case that I’ve been doing in Java for ages, and while trying to implement the same concept in C++ I discovered that it’s impossible.

The case is a generic Visitor model:

public interface Visitor<R, P> {
    R visitFirstNodeType(FirstNodeType node, P parameter);
    R visitSecondNodeType(SecondNodeType node, P parameter);
}

public abstract class AbstractNodeType {
    public abstract <R, P> R accept(Visitor<R, P> visitor, P parameter);
}

Looks simple enough, but try to implement it in C++ and you discover that you cannot use templates on a virtual method. At first this doesn’t make sense (even to a friend of mine who lives in C++). The simple reason is the way templates are handled: they are code generation shortcuts. The compiler has to generate a concrete copy of the method for every combination of template parameters it’s used with, and it would need to know that complete set of instantiations when it lays out the vtable; but by declaring the method virtual you are telling the compiler to defer the decision of which method to run until runtime. In Java the generics are “erased” (although they are kept in the signature), resolving to a single method implementation, so there is no problem deciding which method to run.

C++ can create the same structure without templates, but then you lose the type-safety contract that you defined in Java. It’s not really a fail-point of C++ (rather just a product of the implementation), but it is something that seems to be overlooked on the Java side of the fence. Generic erasure buys you fairly cool compile-time contracts.
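
To see that contract in action, here is a minimal sketch of how the pattern completes on the Java side. FirstNodeType and SecondNodeType follow the interface above; PrintVisitor is purely illustrative:

public class FirstNodeType extends AbstractNodeType {
    @Override
    public <R, P> R accept(Visitor<R, P> visitor, P parameter) {
        // accept() erases to a single method, so virtual dispatch stays unambiguous
        return visitor.visitFirstNodeType(this, parameter);
    }
}

public class SecondNodeType extends AbstractNodeType {
    @Override
    public <R, P> R accept(Visitor<R, P> visitor, P parameter) {
        return visitor.visitSecondNodeType(this, parameter);
    }
}

// An illustrative visitor: the compiler checks that every visit method
// agrees on the result type (String) and the parameter type (String).
public class PrintVisitor implements Visitor<String, String> {
    public String visitFirstNodeType(FirstNodeType node, String prefix) {
        return prefix + "first";
    }

    public String visitSecondNodeType(SecondNodeType node, String prefix) {
        return prefix + "second";
    }
}

A call like node.accept(new PrintVisitor(), "node: ") is then fully type-checked at compile time, even though the generic information is erased at runtime.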

EoD SQL Version 2.1-Alpha

EoD SQL doesn’t see very much development these days (although it sees plenty of downloads and usage). Mainly that’s because it’s very stable and does what it’s supposed to, so there’s very little need to change things.

That said, there are a few issues that needed some attention, so there’s the start of a new release (2.1). The new features and changes are as follows:

  • The engine now caches some of the more expensive construction data with SoftReferences, dramatically reducing the cost of creating a Query implementation
  • @Call methods may now return updatable DataSet objects by using the new “readOnly” annotation attribute
  • @Select methods now have a useful “fetchSize” hint that will be passed down to the JDBC driver (both new attributes are sketched after this list)
  • Much better error messages for invalid / unknown types and query parse errors
  • Improvements and updates to all of the JavaDocs
  • Some bug fixes and corrections
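
As a rough illustration of those two attributes, here is a hedged sketch of a query interface. The UserQuery interface, the User class, and the SQL are invented for this example, and the exact annotation element names should be checked against the 2.1 JavaDocs:

import net.lemnik.eodsql.BaseQuery;
import net.lemnik.eodsql.Call;
import net.lemnik.eodsql.DataSet;
import net.lemnik.eodsql.Select;

// Hypothetical data object; EoD SQL maps result columns onto its public fields.
public class User {
    public String name;
    public String email;
}

// Hypothetical query interface showing the new 2.1 attributes.
public interface UserQuery extends BaseQuery {
    // "fetchSize" is a hint handed down to the JDBC driver
    @Select(value = "SELECT * FROM users", fetchSize = 100)
    DataSet<User> selectUsers();

    // "readOnly = false" asks for an updatable DataSet from a @Call method
    @Call(value = "call find_all_users()", readOnly = false)
    DataSet<User> findAllUsers();
}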

Like I said: there’s not that much there, but the changes that do exist make life that little bit easier. If you have any ideas for features you’d like to see in this upcoming release, leave a comment or a feature request and we’ll have a chat about it (always on the lookout for good ideas)!

Why TraceMonkey is maybe the wrong way

With Firefox 3.5 finally out the door and in use, I thought it was time to write something about it that’s been bugging me for a while. TraceMonkey (the new FF JavaScript engine) employs a semi-HotSpot approach to JavaScript compilation. Although the approach is quite different under the hood, the basic principle is the same: interpret the code until you know what to optimize. This makes abundant sense to me: why go through the expense of natively compiling a method that only ever gets run once? However, there is a hidden expense to doing things this way: your compilation unit is often much smaller. Smaller compilation units mean more transitions back and forth between compiled and interpreted code. It also means you can’t optimize as heavily (although that heavier optimization has its own massive expense).

Where am I heading with this? I think the TraceMonkey approach adds huge value to a language like Java, where an application consists of hundreds or thousands of classes (both those in the application and in all of its dependent APIs); in that case your loading time would be abysmal if you preloaded all the possibly-used code and natively compiled it on start-up. However, in the JavaScript world we are limited by the connection, and JavaScript applications (even really huge ones like those written with GWT) are relatively small. In this case it makes more sense to me to either compile everything up front, or run everything interpreted and run a massive compilation in the background (hot-swapping the implementations when the compile finishes).

It’s just a thought. Maybe FF should adopt V8 and see what happens?