
JEP 230: A New Microbenchmark Suite for JDK 12

The OpenJDK Microbenchmark Suite (JEP 230), based on the Java Microbenchmark Harness (JMH), is a new feature in JDK 12. The goals of JEP 230 are to provide stable and tuned benchmarks, to include an initial set of approximately 100 benchmarks (imported from the jmh-jdk-microbenchmarks project, a suite of JMH benchmarks), and to make it easier to create new benchmarks and search for existing ones.

The Microbenchmark Suite is not a separate JDK utility such as java, javac, jdeps, and jconsole. Instead, its source code is co-located with the JDK source code. As stated in the JEP 230 proposal:

The building of the microbenchmark suite will be integrated with the normal JDK build system. It will be a separate target that is not executed during normal JDK builds in order to keep the build time low for developers and others not interested in building the microbenchmark suite.

Microbenchmarking, the art of measuring the performance of small sections of Java code, may produce inaccurate and/or misleading results if not implemented correctly. There are many things to consider when writing a correct microbenchmark. In Chapter 5 of Optimizing Java, authors Ben Evans, James Gough, and Chris Newland discuss the challenges of writing microbenchmarks:

We cannot truly divorce the executing Java code from the JIT compiler, memory management, and other subsystems provided by the Java runtime. Neither can we ignore the effects of operating system, hardware, or runtime conditions (e.g., load) that are current when our tests are run.

Brian Goetz, Java language architect at Oracle, discussing the anatomy of flawed benchmarks, stated:

The scary thing about microbenchmarks is that they always produce a number, even if that number is meaningless. They measure something, we're just not sure what.

Very often, they only measure the performance of the specific microbenchmark, and nothing more. But it is very easy to convince yourself that your benchmark measures the performance of a specific construct, and erroneously conclude something about the performance of that construct.

Aleksey Shipilëv, principal software engineer at Red Hat, responding to a performance question on Twitter stated:

Any nanobenchmark test that does not feature disassembly/codegen analysis is not to be trusted. Period.
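These pitfalls are exactly what JMH exists to handle: warmup before measurement, and preventing the JIT from eliminating work whose result is never used. To make the problem concrete, here is a deliberately naive, hand-rolled timing loop; all class and method names are illustrative and not part of JMH or any JDK API, and this is a sketch of what can go wrong, not a recommended harness.

```java
// A naive micro-measurement loop, shown only to illustrate the pitfalls
// that a real harness such as JMH is designed to handle. Without a warmup
// pass, the JIT may not yet have compiled the hot method; without consuming
// the result, the compiler may eliminate the work as dead code.
public class NaiveBenchmark {

    // The "code under test": sum of the first n integers.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Runs the workload repeatedly and returns the last result so the JIT
    // cannot prove the loop is dead code (a crude stand-in for JMH's Blackhole).
    static long measure(int iterations, int n) {
        long result = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            result = workload(n);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("avg %d ns/op%n", elapsed / iterations);
        return result;
    }

    public static void main(String[] args) {
        // Warmup: give the JIT a chance to compile workload() before measuring.
        measure(10_000, 1_000);
        // Measurement: even these numbers carry all the caveats quoted above,
        // since OS load, hardware, and JVM subsystems still affect the result.
        long checksum = measure(10_000, 1_000);
        System.out.println("checksum=" + checksum);
    }
}
```

Comparing the warmup and measurement passes typically shows how misleading an unwarmed run is; JMH automates these iterations (and much more, such as forking and statistical reporting) through annotations like @Benchmark and @Warmup.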

Claes Redestad, principal member of technical staff at Oracle, spoke to InfoQ about the Microbenchmark Suite.

InfoQ: What was the inspiration to create the Microbenchmark Suite?

Claes Redestad: Microbenchmarking has been a part of the OpenJDK development process for many years, and the Microbenchmark Suite is really only one step on a long road of integrating the use of microbenchmarks more closely into the OpenJDK development process.

Most (all?) of the microbenchmarks that were part of the initial push have been around for a while, and the microbenchmark harness these depend on, JMH, has been around for quite a few years now. The only thing really new is integrating it into the OpenJDK build system and the main OpenJDK repository.

At the time this JEP was conceived, the OpenJDK project was split across several repositories and forests, making it quite cumbersome to write tests (and microbenchmarks) out-of-tree. The original JEP proposal sought to add a new repository for these micros, but that effort came to a halt, in part due to disagreements about whether it was worth the trouble.

Since then, the OpenJDK has consolidated towards a single repository structure, many test suites that were developed separately have been consolidated into the main repository, etc. Many of the obstacles the project faced five years ago simply vanished.

The decision to finally move ahead with JEP 230 was ultimately motivated by the success of other efforts to integrate and co-locate functional tests into the main OpenJDK repository. As with co-located test suites, it doesn't mean that all benchmarking we do is now based on these co-located micros (far from it, in fact). But having it all integrated and available in a single repo is really nice when you're testing that new API that is only available in the branch you're working on.

InfoQ: Why isn't the Microbenchmark Suite its own utility (such as java, javac, jdeps, jconsole, etc.)?

Redestad: As JEP 230 only really provides the means to build and run microbenchmarks as an integral part of developing the OpenJDK itself, the suite doesn't really translate naturally into a tool suitable for inclusion into a JDK deliverable; sort of like how we don't bundle all other tests with our JDK binary downloads.

InfoQ: What is the best way for developers to get started with the Microbenchmark Suite, such as where to find the source code?

Redestad: My guess is most Java developers might be looking to add microbenchmarks to their own projects rather than contribute to the OpenJDK. So while the microbenchmark suite might be a source of inspiration for that, I would recommend reading up on the JMH first. It provides quite a few examples, and it's pretty easy to set up a project and start poking at things. Aleksey Shipilëv has done a really good job maintaining this project over the years, and there's a good corpus of resources out there.

If you're looking to build, test and maybe even contribute to the OpenJDK itself, start with https://openjdk.java.net/, check out the sources from http://hg.openjdk.java.net/jdk/jdk, and read the testing docs at http://hg.openjdk.java.net/jdk/jdk/raw-file/96d290a7e94f/doc/testing.html.

InfoQ: What else would you like to share with our readers about the Microbenchmark Suite?

Redestad: One way to contribute to the OpenJDK is to actually build and run these microbenchmarks in your own CI and report any regressions found. There's a wide range of hardware and system configurations out there, and we might not be running every one of the available benchmarks on every check-in on hardware just like yours, so there's a chance that you might find a problem we'd not be able to detect.

InfoQ: What's on the horizon for the Microbenchmark Suite?

Redestad: Right now I'm looking for feedback while encouraging more OpenJDK developers to pick it up, use it and maybe even improve on it. I'm delighted that we've already seen new microbenchmarks added, as well as some really nice external contributions to the feature set itself, like adding support for building native libraries (https://bugs.openjdk.java.net/browse/JDK-8219393).

I hope we can improve on the details enough that adding and running microbenchmarks while developing new features becomes as straightforward and natural as adding new functional tests.

InfoQ: What are your current responsibilities, that is, what do you do on a day-to-day basis?

Redestad: My main responsibility is helping the many OpenJDK developers move performance in the right direction. Contributing to JEP 230 is one such thing. Triaging regressions detected in our nightly testing is another.

On a daily basis, I try to contribute fixes and improvements wherever I can. I've had a lot of fun in the past few years reducing startup and footprint overheads of the OpenJDK, including reworking and improving the internal lambda runtime to bootstrap in just a small fraction of the time it took in JDK 8.

Other new features in JDK 12 to complement the Microbenchmark Suite include: Shenandoah, a new experimental garbage collector (JEP 189); enhanced switch expressions (JEP 325); and a new JVM constants API (JEP 334).
