Runtime polymorphism typically means using function pointers to provide an implementation for functionality that is not known at compile time. It is also often used to simplify the structure of a program, by exposing specific functionality through common interfaces.
There are a number of purposes for runtime polymorphism:
- Providing access to a specific resource chosen at runtime using a generic interface (e.g. text mode or graphical mode user interface depending on whether graphical mode is available).
- Simplifying the relationships between algorithm implementations and data structures (e.g. collision detection over a collection of Shape*, where each element is actually an instance of a concrete derived type).
- Formatting data for a particular output device, without knowledge of the specific details (e.g. printer driver).
The cost of runtime polymorphism can be significant. It can be reduced, but the best way to reduce it is to avoid runtime polymorphism altogether. However, this is not always an option. It is often the right tool when approaching dynamic functionality, as it can simplify code structure, reduce coupling and improve code localization.
The cost of dynamic dispatch
Dynamic dispatch can cost quite a bit, depending on your perspective.
I've prepared a sample application which can be compiled on Mac OS X using Xcode. The purpose of this tool is to benchmark a very contrived small function, and produce a baseline time.
The function body we are calling is identical in all cases.
Here is a table of the results. Non-dynamic C function dispatch was used as the baseline. The bigger the percentage, the slower it is.
|Method|-O0 %|-O1 %|-O2 %|-O3 %|-Os %|
|---|---|---|---|---|---|
|C Dispatch (Baseline)|100%|100%|100%|100%|100%|
|C Pointer Dispatch|95.56%|112.5%|104.6%|450%|442.1%|
|Base C++ Dispatch|125.7%|132%|116.9%|449.9%|590.6%|
|Derived C++ Dispatch|125.6%|132.1%|116.5%|544.2%|511.9%|
From this, we can draw a number of conclusions.
A standard function in C is a prime candidate for inlining. In this case, at -O3 and -Os this occurs, and the benefits are amazing: the function takes almost 1/5th the time to execute.
For larger functions this won't be the case, but in terms of function dispatch, the key point I want to make is that this is the only kind of function call that can be inlined with such a huge performance improvement. It is also the case that once a function is inlined, many other optimizations become possible, such as reducing the amount of data that is copied during the function call.
Here is the code before it is inlined, and here is the code after it is inlined:
I think the funny thing about this is that the compiler did inline the function, but failed to reduce the loop to a simple addition - i.e. rather than adding 1 several thousand times, why not just add several thousand once? I've seen GCC do this optimization before, so I'm not sure why it isn't doing it here.
C Pointer Dispatch
This method cannot be inlined, since the pointer must be dereferenced in order to call the function; the call target is not known at compile time.
There is an anomaly at -O0 which I cannot explain. Even though the variance in the statistics is about +/- 10%, this result was consistently about 5% less than the baseline.
Here is the code generated for the pointer dispatch:
The interesting thing about this method of dispatch is that it is the most basic kind of dynamic dispatch, and it can't be optimized very much: the improvement between -O0 and -O3 is minimal. It also has a clear relationship to C++ dynamic dispatch in terms of performance cost and the compiler's ability to optimize.
C++ Virtual Dispatch
This is the most common kind of dynamic dispatch available to programmers who use C++. It is essentially the same as a function pointer, but it can occasionally become more complex depending on the class hierarchy. In this case, however, it is incredibly simple. There was no noticeable difference between a pointer to the base class and a pointer to the derived class until we use -Os. I cannot explain this behavior without looking at the generated assembly; suffice to say that little optimization can be done by the compiler.
The performance of the virtual dispatch cannot be improved much, and the compiler is unable to inline the code. I find this at least a little bit fascinating, because in the entire program, there is only one implementation of the virtual function of the same name. In this case, should it not be possible for the compiler to assume that no other implementation exists?
Unfortunately, due to the way GCC compiles code, we cannot be sure until link time whether there are multiple implementations of the same base interface, and at this point, no further optimizations can be done. However, some compilers such as LLVM do provide this facility, and it may be possible to provide the context necessary for this kind of analysis.
The fascinating thing about virtual functions is that once a function is marked as virtual, it will then generally require a dynamic dispatch. Therefore, it is important to only make a function virtual when needed, and to try to keep the number of virtual functions to a minimum if performance is a concern.
Here is the code for virtual dispatch:
Objective-C Dispatch
Objective-C is fascinating. I won't go over the tiny details here; suffice to say that it requires (as far as I know) some sort of hash table search. Objective-C dispatch is built on top of standard C dispatch and functions, and so is in the worst case 12 times slower than standard C dispatch at -O3. This shouldn't come as much of a surprise, since the standard C dispatch at -O3 was inlined.
With Objective-C, it is neither possible nor desirable to inline function calls. The point of this design is to provide a highly dynamic language, allowing a multitude of possibilities in its APIs and in what dynamic dispatch can do. In this case, basic Objective-C dynamic dispatch is about 2.5 times slower than dynamic C++ dispatch, but provides a number of interesting features which are generally not available when using C++ dynamic dispatch:
- Classes can add and remove methods at runtime.
- Classes can handle the case when a specific method does not exist for a particular object, and forward this invocation to somewhere else.
- Invocations can be saved for delivery across threads, processes or network transports.
- Classes can be dynamically introspected, and functions called dynamically.
- Classes don't need to adhere to a specific type (but they can guarantee specific interfaces are available, which in Objective-C is called a protocol).
Generally, Objective-C provides an incredible amount of flexibility at runtime, but you do pay for it performance-wise, and with Objective-C you can't back out of it - i.e. there is no non-virtual Objective-C dispatch available. However, as Objective-C is built on top of C, you can fall back to standard C functions as needed for implementation details.
Here is the code for Objective-C dispatch:
There are many options available to programmers for designing robust and efficient programs. It is important to learn where to use dynamic dispatch, especially in performance-critical code paths. There is a lot of good advice out there in books and on the internet, and it is worthwhile taking the time to benchmark your program. Although the per-call cost in these benchmarks may appear minimal, if you are dealing with thousands of objects, these small costs can really add up.
The topic of dynamic dispatch and inter-procedural optimisation is a big one and ever increasing in scope. One interesting area of research is the Java HotSpot compiler, which, from what I've read, can provide optimistic optimisation of dynamic dispatch. Even though this option is available in terms of improving speed, it is fairly complex to implement, and is not going to give completely consistent results.