When .NET was originally designed, it was decided that methods would be non-virtual by default. There were several reasons for this, one being that non-virtual methods are often significantly faster than virtual ones. Beyond the cost of the v-table lookup itself, virtual methods normally cannot be inlined. Since idiomatic .NET code tends to favor many small methods, a non-inlined method can end up spending more time on function call overhead than on the method body itself. You can see some of the effects of this inlining in our article, On Abstractions and For-Each Performance in C#.
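To make that overhead concrete, here is a minimal sketch (the Point class and its members are hypothetical, chosen only for illustration):

```csharp
public class Point
{
    public int X;

    // Non-virtual: the JIT is free to inline this trivial accessor
    // down to a single field read at the call site.
    public int GetX() => X;

    // Virtual: dispatched through the v-table, so it normally
    // cannot be inlined and pays full call overhead every time.
    public virtual int GetXVirtual() => X;
}
```

In a release build, a call to the non-virtual accessor typically collapses into the field read, while the virtual version pays for the dispatch on every invocation.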
Over the last few years, idiomatic C# has been changing. Where large interfaces used to be unusual, it is now quite common to see “shadow interfaces” that exactly match entire classes. This began with WCF, which encouraged the practice even though it wasn’t strictly required. With the rising importance of DI frameworks, it is not unusual to see shadow interfaces for every non-DTO class in a project.
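As an illustration of the pattern (all names here are hypothetical), a shadow interface simply mirrors the public surface of one class:

```csharp
public class Customer
{
    public int Id { get; set; }
}

// The "shadow": an interface that exactly matches CustomerService,
// typically existing only so a DI container or mock can stand in for it.
public interface ICustomerService
{
    Customer GetCustomer(int id);
    void SaveCustomer(Customer customer);
}

public class CustomerService : ICustomerService
{
    public Customer GetCustomer(int id) => new Customer { Id = id };
    public void SaveCustomer(Customer customer) { /* persist elsewhere */ }
}
```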
There are ways to “devirtualize” methods, essentially treating them as non-virtual in specific situations. The Java HotSpot virtual machine is well known for this ability. In Java, instance methods are virtual by default, so the need to address this performance issue arose much earlier in its history.
Back in March, .NET Core quietly started taking on the challenge of devirtualization. The Simple Devirtualization feature addresses three basic scenarios, each illustrated in the sketch following the list:
- Calling virtual methods on a sealed class.
- Calling virtual methods that have been marked as sealed.
- Calling virtual methods when the type is definitely known (e.g. just after construction).
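Here is a rough sketch of all three cases in one place (the class names are ours; whether a given call is actually devirtualized depends on the JIT):

```csharp
public class Base
{
    public virtual string Name() => "Base";
}

public sealed class SealedClass : Base
{
    public override string Name() => "SealedClass";
}

public class Derived : Base
{
    // A sealed override: no subclass can override Name any further.
    public sealed override string Name() => "Derived";
}

public static class Demo
{
    static SealedClass GetSealed() => new SealedClass();
    static Derived GetDerived() => new Derived();

    public static void Run()
    {
        SealedClass a = GetSealed();
        a.Name();   // (1) the class is sealed, so the target is known

        Derived b = GetDerived();
        b.Name();   // (2) the method is sealed, so the target is known

        Base c = new Base();
        c.Name();   // (3) the exact type is known right after construction
    }
}
```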
Interface devirtualization also has some basic support, but there are restrictions. For example:
“Disallow interface devirt if the method is final but the class is not exact or final, since derived classes can still override final methods when implementing interfaces.”
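To see why that restriction is needed, consider this sketch (hypothetical types): in C#, a derived class can re-implement an interface and remap its methods, even when the base class’s implementation is non-virtual (emitted as final in IL).

```csharp
public interface IGreeter
{
    string Greet();
}

public class Greeter : IGreeter
{
    // Non-virtual in C#, emitted as "virtual final" in IL, so no
    // subclass can override it directly...
    public string Greet() => "Greeter";
}

public class FancyGreeter : Greeter, IGreeter
{
    // ...but a subclass can re-implement the interface, remapping
    // IGreeter.Greet to a different method.
    public new string Greet() => "FancyGreeter";
}

public static class Demo
{
    public static void Run()
    {
        IGreeter g = new FancyGreeter();
        // If the JIT only knows "some Greeter", it cannot assume
        // Greeter.Greet is the target; here the call lands on
        // FancyGreeter.Greet instead.
        System.Console.WriteLine(g.Greet());   // prints "FancyGreeter"
    }
}
```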
It should be noted that simply marking classes as “sealed” isn’t necessarily enough to benefit from interface devirtualization. If you are using a DI framework to hide which concrete class is used at runtime, the JIT compiler probably won’t be able to determine what type is in play.
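Reusing the hypothetical ICustomerService from the earlier sketch, the problem looks like this: the JIT compiles against the interface, not whatever class the container happens to register.

```csharp
public class OrderProcessor
{
    private readonly ICustomerService _customers;

    // The container picks the concrete type at runtime; the JIT only
    // sees the interface here, so the call below remains a true
    // interface dispatch even if CustomerService is sealed.
    public OrderProcessor(ICustomerService customers)
        => _customers = customers;

    public Customer Load(int id) => _customers.GetCustomer(id);
}
```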
The reason this isn’t an issue in Java is that its devirtualization technology works completely differently. It speculatively devirtualizes interface calls based on runtime metrics, re-JITting the most frequently called methods. Special guard clauses are included in case the concrete type changes and the devirtualization needs to be unwound.
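Roughly speaking, a guarded devirtualized call can be pictured like the following C# rendering (this is a conceptual illustration, not actual HotSpot or RyuJIT output; the shape types are hypothetical):

```csharp
public interface IShape
{
    double Area();
}

public sealed class Circle : IShape
{
    public double Radius;
    public double Area() => System.Math.PI * Radius * Radius;
}

public static class Geometry
{
    public static double GuardedArea(IShape shape)
    {
        // Guard: a cheap type check protects a direct, inlinable call.
        if (shape is Circle c)
            return c.Area();

        // Guard failed: fall back to the normal interface dispatch.
        // (HotSpot would instead deoptimize and re-JIT the method.)
        return shape.Area();
    }
}
```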
Looking ahead
The above features are available in .NET Core 2.0, but there is a lot more to be done. Here are some highlights from the devirtualization roadmap:
Structs are notoriously bad when it comes to interface calls: the calls are not only virtual, they also require boxing the value. So several items focus on eliminating both the virtual call and the boxing where possible. A major component of this is First Class Structs, an advanced JIT concept outside the scope of this report.
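The following sketch (hypothetical types) shows the problem and the usual workaround: calling through the interface boxes the struct, while a constrained generic lets the JIT specialize the code per value type, avoiding both the box and the virtual call.

```csharp
public interface IMeasurable
{
    double Measure();
}

public struct Inches : IMeasurable
{
    public double Value;
    public double Measure() => Value * 25.4;   // convert to millimeters
}

public static class Demo
{
    public static double ViaInterface(Inches length)
    {
        // The cast boxes the struct, and the call is a virtual
        // interface dispatch on the box.
        IMeasurable m = length;
        return m.Measure();
    }

    public static double ViaConstrainedGeneric<T>(T length) where T : IMeasurable
    {
        // Constrained call: the JIT generates specialized code for each
        // value type, with no box and a direct (inlinable) call.
        return length.Measure();
    }
}
```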
Better type tracking in the JIT itself. Apparently there are many cases where the JIT knows the concrete type in one place but fails to pass that information along, forcing it to fall back to more generic machine code.
Speculative devirtualization is also being considered. Based on the summary, this won’t work like it does in Java. Rather, the JIT will make a determination based on the list of known overrides at compile time. (Presumably this will be most useful when an interface is implemented by only a single class, which happens a lot in the aforementioned DI scenarios.)
A special case of this is the guarded devirtualization of EqualityComparer<T>.Default. Since the vast majority of IEqualityComparer<T> calls go to the default implementation (which delegates to IEquatable<T> or Object.Equals as appropriate), the team feels that making this case faster is worth the effort, provided it can be done without slowing down cases where a non-default IEqualityComparer<T> is being used.
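The pattern they are targeting is extremely common; a typical example (our own sketch) looks like this:

```csharp
using System.Collections.Generic;

public static class SearchHelpers
{
    public static int IndexOf<T>(T[] items, T value)
    {
        // Today each Equals call here is an interface dispatch on
        // EqualityComparer<T>.Default; guarded devirtualization would
        // turn the common case into a direct, inlinable call.
        var comparer = EqualityComparer<T>.Default;
        for (int i = 0; i < items.Length; i++)
        {
            if (comparer.Equals(items[i], value))
                return i;
        }
        return -1;
    }
}
```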