1. Overview
The JVM is one of the oldest yet most powerful virtual machines ever built.
In this article, we'll take a quick look at what it means to warm up a JVM and how to do it.
2. JVM Architecture Basics
Whenever a new JVM process starts, all required classes are loaded into memory by an instance of the ClassLoader. This process takes place in three steps:
- Bootstrap Class Loading: The "Bootstrap Class Loader" loads essential Java classes, such as java.lang.Object, into memory. These loaded classes reside in JRE\lib\rt.jar.
- Extension Class Loading: The ExtClassLoader is responsible for loading all JAR files located at the java.ext.dirs path. In non-Maven or non-Gradle based applications, where a developer adds JARs manually, all those classes are loaded during this phase.
- Application Class Loading: The AppClassLoader loads all classes located in the application class path.
This initialization process is based on a lazy loading scheme.
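We can observe this hierarchy directly by printing the class loaders involved. The following is a minimal sketch (the class name ClassLoaderDemo is our own):

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Our own class is loaded by the application class loader
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("Application: " + appLoader);

        // Its parent is the extension (Java 8) or platform (Java 9+) class loader
        System.out.println("Parent: " + appLoader.getParent());

        // Core classes come from the bootstrap loader, which is represented as null
        System.out.println("java.lang.String: " + String.class.getClassLoader());
    }
}
```

Note that the bootstrap class loader is implemented natively, which is why the JVM represents it as null.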
3. What Is Warming up the JVM
Once class-loading is complete, all important classes (used at the time of process start) are pushed into the JVM cache (native code) – which makes them accessible faster during runtime. Other classes are loaded on a per-request basis.
The first request made to a Java web application is often substantially slower than the average response time during the lifetime of the process. This warm-up period can usually be attributed to lazy class loading and just-in-time compilation.
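We can observe this effect even in a tiny program: the very first invocation of a code path pays for class loading and interpreted execution, while subsequent invocations run warm. A rough sketch (timings will vary by machine; FirstCallDemo and timeCall are our own names):

```java
public class FirstCallDemo {

    // Times a single invocation of the given task, in nanoseconds
    static long timeCall(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Runnable work = () -> Integer.toString(42).hashCode();

        // First call: includes lambda linkage and class-loading overhead
        long first = timeCall(work);

        // Warmed-up calls: classes are loaded and the JIT may have compiled the path
        long warmed = 0;
        for (int i = 0; i < 10_000; i++) {
            warmed = timeCall(work);
        }
        System.out.println("first = " + first + " ns, warmed = " + warmed + " ns");
    }
}
```

On a typical run, the first invocation is noticeably slower than the warmed-up ones, mirroring the slow first request of a freshly started web application.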
Keeping this in mind, for low-latency applications, we need to cache all classes beforehand – so that they’re available instantly when accessed at runtime.
This process of tuning the JVM is known as warming up.
4. Tiered Compilation
Thanks to the sound architecture of the JVM, frequently used methods are compiled to native code and cached during the application life-cycle.
We can make use of this property to force-load critical methods into the cache when an application starts. To that end, we need to enable tiered compilation via a VM argument (the compile threshold can be tuned alongside it):
-XX:+TieredCompilation -XX:CompileThreshold=<n>
Normally, the VM uses the interpreter to collect profiling information on methods that are fed into the compiler. In the tiered scheme, in addition to the interpreter, the client compiler is used to generate compiled versions of methods that collect profiling information about themselves.
Since compiled code is substantially faster than interpreted code, the program executes with better performance during the profiling phase.
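On HotSpot JVMs, we can verify at runtime whether tiered compilation is enabled by querying the diagnostic MXBean. This is a sketch (the class name TieredCompilationCheck is our own; the HotSpotDiagnosticMXBean interface lives in the jdk.management module shipped with standard JDKs):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class TieredCompilationCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean diagnostics =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // Prints the current value and origin of the TieredCompilation VM option
        System.out.println(diagnostics.getVMOption("TieredCompilation"));
    }
}
```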
Applications running on JBoss and JDK version 7 with this VM argument enabled tend to crash after some time due to a documented bug. The issue has been fixed in JDK version 8.
Another point to note here is that in order to force-load, we have to make sure that all (or most) classes that are going to be executed are accessed. It's similar to determining code coverage during unit testing: the more code is covered, the better the performance will be.
The next section demonstrates how this can be implemented.
5. Manual Implementation
We may implement an alternative technique to warm up the JVM. In this case, a simple manual warm-up could include repeating the creation of different classes thousands of times as soon as the application starts.
Firstly, we need to create a dummy class with a normal method:
public class Dummy {
    public void m() {
    }
}
Next, we need to create a class with a static method that will be executed at least 100,000 times as soon as the application starts. With each execution, it creates a new instance of the Dummy class we created earlier:
public class ManualClassLoader {
    protected static void load() {
        for (int i = 0; i < 100000; i++) {
            Dummy dummy = new Dummy();
            dummy.m();
        }
    }
}
Now, in order to measure the performance gain, we need to create a main class. This class contains one static block with a direct call to ManualClassLoader's load() method.
Inside the main function, we call ManualClassLoader's load() method once more, capturing the system time in nanoseconds just before and after the call. Finally, we subtract these times to get the actual execution time.
We have to run the application twice: once with the load() method call inside the static block and once without it:
public class MainApplication {
    static {
        long start = System.nanoTime();
        ManualClassLoader.load();
        long end = System.nanoTime();
        System.out.println("Warm Up time : " + (end - start));
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        ManualClassLoader.load();
        long end = System.nanoTime();
        System.out.println("Total time taken : " + (end - start));
    }
}
The results, in nanoseconds, are reproduced below:
With Warm Up | No Warm Up | Difference (%)
------------ | ---------- | --------------
1220056      | 8903640    | 730
1083797      | 13609530   | 1256
1026025      | 9283837    | 905
1024047      | 7234871    | 706
868782       | 9146180    | 1053
As expected, the warm-up approach shows much better performance than the normal one.
Of course, this is a very simplistic benchmark and only provides some surface-level insight into the impact of this technique. Also, it’s important to understand that, with a real-world application, we need to warm up with the typical code paths in the system.
6. Tools
We can also use several tools to warm up the JVM. One of the best known is the Java Microbenchmark Harness (JMH), which is generally used for micro-benchmarking. It repeatedly executes a code snippet while monitoring the warm-up iteration cycle.
To use it, we need to add the following dependencies to the pom.xml:
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.19</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.19</version>
</dependency>
We can check for the latest version of JMH in the Maven Central repository.
Alternatively, we can use JMH's Maven archetype to generate a sample project:
mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DgroupId=com.baeldung \
    -DartifactId=test \
    -Dversion=1.0
Next, let’s create a main method:
public static void main(String[] args) throws RunnerException, IOException {
    Main.main(args);
}
Now, we need to create a method and annotate it with JMH’s @Benchmark annotation:
@Benchmark
public void init() {
    // code snippet
}
Inside this init method, we write the code that must be executed repeatedly in order to warm up.
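To make the split between warm-up and measured iterations concrete, here is a toy harness in plain Java that mimics what JMH does. This is an illustration only (MiniHarness and averageNanosPerOp are our own names); real benchmarks should use JMH itself, which also handles dead-code elimination, forking, and statistics:

```java
import java.util.function.IntSupplier;

public class MiniHarness {

    // Runs warm-up iterations first (results discarded), then reports
    // the average time per invocation over the measured iterations.
    static long averageNanosPerOp(IntSupplier benchmark, int warmupIters, int measuredIters) {
        for (int i = 0; i < warmupIters; i++) {
            benchmark.getAsInt(); // warm-up: load classes, trigger JIT compilation
        }
        int sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < measuredIters; i++) {
            sink += benchmark.getAsInt(); // measured runs
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("sink = " + sink); // consume results so the JIT can't elide the loop
        return elapsed / measuredIters;
    }

    public static void main(String[] args) {
        long avg = averageNanosPerOp(() -> "warmup".hashCode(), 10_000, 1_000);
        System.out.println("avg ns/op: " + avg);
    }
}
```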
7. Performance Benchmark
Over the last 20 years, most contributions to Java have been related to the GC (garbage collector) and the JIT (just-in-time compiler). Almost all of the performance benchmarks found online are done on a JVM that has already been running for some time.
However, Beihang University has published a benchmark report that takes JVM warm-up time into account. They used Hadoop- and Spark-based systems to process massive amounts of data:
Here, HotTub designates the environment in which the JVM was warmed up.
As you can see, the speed-up can be significant, especially for relatively small read operations – which is why this data is interesting to consider.
8. Conclusion
In this quick article, we showed how the JVM loads classes when an application starts and how we can warm up the JVM in order to gain a performance boost.
If you want to continue, this book covers more information and guidelines on the topic.
And, like always, the full source code is available over on GitHub.