Java and Docker: setup parameters for speed and memory

Updated: 2024-01-09

In this post, we look at some parameters that you have to configure to avoid surprises and errors when you deploy your Java application in Docker.

JVM allocation

Memory and resources: you decide, or somebody will decide for you

In a production environment, it is NOT recommended to start your dockerized Java application without setting any parameters.

If you start it with, e.g.:

java -jar MyApplication

  • do you know how many resources are allocated?
  • do you know how many processes your application will use?
  • how much memory is assigned and how much can be used by your container?
  • which garbage collector is running, and how is it impacting the performance of your container? ... you should care a lot about your garbage collector!

If you don't set any parameters for your container, the JVM will decide for you according to some heuristics (a.k.a. some algorithms created by unknown people).
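
If you are curious about what those heuristics decided, you can ask the JVM itself. A minimal check, assuming Docker and the eclipse-temurin image used later in this post:

# print the flags that the JVM ergonomics selected for this container (heap size, GC, ...) 
docker container run --rm --cpus=0.8 --memory=512m eclipse-temurin:21-jre-alpine \ 
    java -XX:+PrintCommandLineFlags -version 
# expect something like -XX:MaxHeapSize=134217728 (~25% of 512 MB) and -XX:+UseSerialGC 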

A JRE - Docker configuration example

Before going into the details, here is an example of a JRE configuration for a Docker container.

We tell Java that the maximum memory available for the heap is 75% of the memory available to the container.

Note that we don't explicitly set the garbage collector; this has to be chosen according to the type of application.

Note that MinRAMPercentage does not define any minimal RAM, despite its name (read until the end).

ENTRYPOINT [ "java", \ 
    "-XX:InitialRAMPercentage=75", \ 
    "-XX:MinRAMPercentage=75", \ 
    "-XX:MaxRAMPercentage=75", \ 
    "-jar","my-app.jar"] 
 
EXPOSE 8080 

Now we can define the resources for our container in the container manager configuration (e.g. in a Docker Compose file):

resources: 
  limits: 
    cpus: '2' 
    memory: '2G' 
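
If you prefer to test the same limits without a Compose file, the equivalent plain Docker command is roughly the following (my-app-image is a placeholder for your own image name):

# run the image built with the ENTRYPOINT above, limited to 2 CPUs and 2 GB of memory 
docker container run --rm --cpus=2 --memory=2g -p 8080:8080 my-app-image 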

Why don't we use Xms and Xmx to allocate the memory?

When we deploy in containers, it is better practice to define the memory in the configuration of your container manager (Docker, Kubernetes, custom).

Modern Java allows you to size the heap based on the container's memory, letting you manage memory better and avoid situations in which the 'declared' memory available to your JRE is bigger than the memory actually available in your container.
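
You can verify that the heap really follows the container limit. A small experiment, assuming Docker and the eclipse-temurin image used later in this post:

# with a 2 GB limit and MaxRAMPercentage=75, the ergonomic MaxHeapSize should be roughly 1.5 GB 
docker container run --rm --memory=2g eclipse-temurin:21-jre-alpine \ 
    java -XX:MaxRAMPercentage=75 -XX:+PrintFlagsFinal -version | grep " MaxHeapSize" 
# change --memory and the reported MaxHeapSize follows, without touching any Java flag 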

Why can't we allocate all the memory of the container to the Java heap?

You gave plenty of memory to your Java container, but you still got errors like these (or similar ones):

[ 1234.123123 ] Memory cgroup out of memory: Kill process 1234 (java) 
[ 1234.123125 ] Killed process 1234 (java) total-vm:8123123kB 
# There is insufficient memory for the Java Runtime Environment to continue.  
# Cannot create GC thread. Out of system resources. 

Java doesn't only use the heap memory for your application; it also requires memory for other things (see the sketch after this list for a way to measure them):

  • Threads
  • Class loading
  • JIT compilation
  • GC Overhead
  • etc.
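
A quick way to see this breakdown on a running JVM is Native Memory Tracking. A sketch, assuming a standard JDK with jcmd available and the my-app.jar from the example above:

# start the application with Native Memory Tracking enabled (adds a small overhead) 
java -XX:NativeMemoryTracking=summary -XX:MaxRAMPercentage=75 -jar my-app.jar & 
# then ask the JVM for a summary: heap, threads, class metadata, GC, code cache, ... 
jcmd <pid> VM.native_memory summary    # <pid> = the Java process id, e.g. from jcmd -l 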

Docker and Java can be dangerous

For this reason, we cannot assign all the container memory to the heap; we have to leave some room for the JVM's own activities (and any other processes in the container).

In our case, 75% worked without issues; if your instance has memory problems, you should probably adjust this percentage.

The choice of garbage collector has a big impact too (again!): SerialGC uses only a few MB of memory, while modern collectors like G1 and Z can require hundreds of MB!

Which Garbage Collector is my JVM using in the Docker container? Probably SerialGC!

The JDK uses a heuristic to determine the 'best' garbage collector for your instance.

In a typical situation, e.g. a small Java application / web service / microservice that runs with 0.8 processors and 512 MB of memory:

docker container run --rm -it --cpus=0.8 --memory=512m eclipse-temurin:21-jre-alpine java -XX:+PrintFlagsFinal --version | grep -E "GCThreads|Use.*GC\b"

the instance will run with the dear old SerialGC:

uint ConcGCThreads                            = 0                                         {product} {default} 
uint ParallelGCThreads                        = 0                                         {product} {default} 
bool UseAdaptiveSizePolicyWithSystemGC        = false                                     {product} {default} 
bool UseDynamicNumberOfGCThreads              = true                                      {product} {default} 
bool UseG1GC                                  = false                                     {product} {default} 
bool UseMaximumCompactionOnSystemGC           = true                                      {product} {default} 
bool UseParallelGC                            = false                                     {product} {default} 
bool UseSerialGC                              = true                                      {product} {ergonomic} 
bool UseShenandoahGC                          = false                                     {product} {default} 
bool UseZGC                                   = false                                     {product} {default} 

Nothing wrong with SerialGC (it uses a single GC thread and introduces some pause time) if your application is not an online video game or an e-banking solution!

... adding 5G of memory ... still SerialGC

You can play with the parameters. If we use 1.0 processors with 5(!) GB of memory ... we are still using SerialGC: the heuristic only switches away from SerialGC when it sees at least 2 available processors (and close to 2 GB of memory).

Set the Garbage collector manually

If you want to avoid surprises, you should set the garbage collector explicitly in your configuration, e.g. -XX:+UseSerialGC (see the variant below).
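
For example, a variant of the ENTRYPOINT shown earlier that pins the collector explicitly (here G1, purely as an illustration; pick the one that fits your workload):

ENTRYPOINT [ "java", \ 
    "-XX:InitialRAMPercentage=75", \ 
    "-XX:MinRAMPercentage=75", \ 
    "-XX:MaxRAMPercentage=75", \ 
    "-XX:+UseG1GC", \ 
    "-jar","my-app.jar"] 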

2 processors are better than 1

If you want to avoid pause time caused by the garbage collector, or by the container suspending the only virtual processor you allocated, it is recommended to use at least 2.0 processors.
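
You can also check that the heuristic changes its mind once the container looks 'big enough'. An experiment, assuming Docker and the same eclipse-temurin image as above:

# with 2 CPUs and 2 GB of memory the ergonomics should switch from SerialGC to G1 
docker container run --rm --cpus=2 --memory=2g eclipse-temurin:21-jre-alpine \ 
    java -XX:+PrintFlagsFinal --version | grep -E "UseSerialGC|UseG1GC" 
# expect: UseSerialGC = false, UseG1GC = true {ergonomic} 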

False friends: Xms=Xmx, MinRAMPercentage

-Xms=-Xmx: initial available heap = maximal available heap

Many years ago, before containers and friends, it was typical to start a Java application with the logic -Xms = -Xmx. Most devs thought this was done to set the minimum and maximum allocated memory for a Java instance to the same level, reducing garbage collection activity and increasing performance.

In reality, this tells the JVM that the initial heap size has to be equal to the maximum heap size. You can run java -X to see the official definitions.

  1. Initial doesn't mean minimal: the heap can still be resized (shrunk or grown) at runtime
  2. The heap size is not the physically allocated memory (the pages must be touched first); if you really want to allocate the memory up front, you need -XX:+AlwaysPreTouch

This means that -Xms1024m -Xmx1024m can run with only 250 MB actually allocated. If you really need the heap to stay at a fixed size, you can disable adaptive sizing with -XX:-UseAdaptiveSizePolicy, so that the garbage collector does not resize the heap.
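
A rough way to see the difference between heap size and allocated memory, assuming a Linux shell and an application jar named my-app.jar as in the example above:

# same Xms/Xmx, very different resident memory at startup 
java -Xms1024m -Xmx1024m -jar my-app.jar &                        # heap committed, pages backed lazily 
java -Xms1024m -Xmx1024m -XX:+AlwaysPreTouch -jar my-app.jar &    # every heap page touched at startup 
ps -eo pid,rss,cmd | grep "[m]y-app.jar"                          # compare the RSS of the two processes 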

MinRAMPercentage: MAXIMUM heap size

Despite its name, MinRAMPercentage on a modern JVM sets the maximum heap size, as a percentage of the available RAM, when the available RAM is small (roughly 250 MB or less with the default settings).
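
A small demonstration, assuming Docker and the same eclipse-temurin image (numbers are approximate):

# with only 150 MB, MinRAMPercentage (default 50) drives the max heap, not MaxRAMPercentage (default 25) 
docker container run --rm --memory=150m eclipse-temurin:21-jre-alpine \ 
    java -XX:+PrintFlagsFinal -version | grep -E " MaxHeapSize|RAMPercentage" 
# expect MaxHeapSize around 75 MB, i.e. ~50% of 150 MB rather than ~25% 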

Find your own way

This post contains only a few examples; the goal is to remember that some parameters should be set even if they are not strictly mandatory.

Every project is different; you have to find what fits your application best.
Example: an application with little data and low interactivity works well with SerialGC, while an interactive application with 10'000 users requires more memory and a different GC (e.g. Shenandoah, G1).

Deep dive

This is only an introduction; JVM tuning can get very complicated, and if you want to learn more you can find more qualified resources online.

Just remember that you should set some parameters for your container, and revisit them if your application has performance or memory issues.

Cool links on the subject:

Oracle description of the configuration options
Microsoft explanation: Containerize your Java applications
JVM ergonomics with containers
Secrets of Performance Tuning Java on Kubernetes
Confusion about parameter -XX:MinRAMPercentage
Memory footprint of a Java process (Devoxx video)

