Run Java in Docker: What you need to know to avoid failure

Source: Internet
Author: User
Tags: cpu usage, jboss, wildfly, docker run, docker machine

Transferred from: https://mp.weixin.qq.com/s?__biz=MzA5OTAyNzQ2OA==&mid=2649693848&idx=1&sn=4e9ef7e2a9d41b39985899b6ad146298

Many developers know (or should know) that when a Java program runs inside a Linux container (Docker, rkt, runC, LXCFS, and so on), the GC, heap size, and runtime compiler settings the JVM chooses are not the ones we would expect. When we run a Java application without any tuning parameters, for example with "java -jar myapplication-fat.jar", the JVM adjusts several of its parameters on its own to try to get the best performance out of the execution environment it detects.

This post gives developers a simple demonstration of what they need to know before running Java applications inside Linux containers.

We tend to think of containers as if they were virtual machines in which we can define a number of CPUs and an amount of memory. A container is really more like process-level resource isolation (CPU, memory, file system, network, and so on), and that isolation relies on the cgroups feature of the Linux kernel.

However, some tools that collect information from the execution environment existed long before cgroups appeared. Commands such as 'top', 'free', and 'ps', and even a JVM that has not been tuned, behave like ordinary Linux processes that see the host's resources rather than the container's limits. Let's check it out.

Problem

To demonstrate the problem, I created a Docker daemon in a virtual machine with 1GB of memory using the command "docker-machine create -d virtualbox --virtualbox-memory '1024' docker1024". I then ran "free -h" inside three different Linux containers, each limited to 100MB of memory and swap. The result: every one of them reported a total memory of 995MB.
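If you want to reproduce this yourself, the sequence looks roughly like the sketch below (the ubuntu image and the 100m limit are illustrative assumptions; any image that ships the 'free' command will do):

$ docker-machine create -d virtualbox --virtualbox-memory '1024' docker1024
$ eval $(docker-machine env docker1024)
$ docker run -it --rm -m 100m ubuntu free -h

Each container reports the memory of the whole Docker machine (roughly 995MB), not its own 100MB limit.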

Even in a Kubernetes/OpenShift cluster the result is similar. I ran a pod with a 512MiB memory limit on a cluster node with 15GB of RAM (command: "kubectl run mycentos --image=centos -it --limits='memory=512Mi'"), and the total memory reported inside the pod was 14GB.
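The Kubernetes variant of the same check can be sketched as follows (note that the --limits flag of 'kubectl run' has since been deprecated and may not be available in newer kubectl releases):

$ kubectl run mycentos --image=centos -it --limits='memory=512Mi'
# inside the pod's shell:
$ free -h

free reports the node's memory (about 14GB here), not the pod's 512MiB limit.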

To find out why this happens, read the blog post "Memory inside Linux containers - or why don't free and top work in a Linux container?" (https://fabiokung.com/2014/03/13/memory-inside-linux-containers/).

We need to understand that the Docker switches (-m, --memory and --memory-swap) and the Kubernetes switch (--limits) tell the Linux kernel to kill the process if it exceeds the memory limit, but the JVM is completely unaware of those limits, and bad things happen when they are exceeded!

To simulate a process being killed after exceeding the memory limit, we can run the WildFly application server in a container limited to 50MB of memory with the command "docker run -it --name mywildfly -m=50m jboss/wildfly". While this container is running, we can execute "docker stats" to check its limits.

But after a few seconds the WildFly server is interrupted and the container prints the message: *** JBossAS process received KILL signal ***

With the command "docker inspect mywildfly -f '{{json .State}}'" we can see that the container was killed because an OOM (Out Of Memory) condition occurred: the container's State is recorded with OOMKilled=true.
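Put together, the whole experiment is just a few commands (a sketch of the sequence described above; the second terminal is only there to watch the stats):

$ docker run -it --name mywildfly -m=50m jboss/wildfly
# in a second terminal, while the container is still running:
$ docker stats mywildfly
# after the process has been killed:
$ docker inspect -f '{{json .State}}' mywildfly

The State output contains "OOMKilled": true, confirming that the kernel, not WildFly itself, ended the process.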

How does this affect Java applications?

On the Docker host with 1GB of memory (created earlier with "docker-machine create -d virtualbox --virtualbox-memory '1024' docker1024"), a container limited to 150MB of memory seems to be enough to run this Spring Boot application, whose Dockerfile sets the JVM flags -XX:+PrintFlagsFinal and -XX:+PrintGCDetails. These flags let us read the JVM's initial ergonomic values and the details of the Garbage Collection (GC) runs.

Try it:

$ docker run -it --rm --name mycontainer150 -p 8080:8080 -m 150M rafabene/java-container:openjdk

The application also exposes an endpoint, "/api/memory/", that loads the JVM memory with String objects to simulate heavy memory consumption. You can call it like this:

$ curl http://`docker-machine ip docker1024`:8080/api/memory

This endpoint replies with something like: "Allocated more than 80% (219.8 MiB) of the max allowed JVM memory size (241.7 MiB)".

Here we have at least 2 questions:

    • Why does the JVM allow a maximum of 241.7MiB of memory?

    • If the container limits the memory to 150MB, why does it allow Java to allocate almost 220MB?

First, we need to revisit what the JVM ergonomics page says about the default "maximum heap size": it is 1/4 of the physical memory. Since the JVM has no idea it is running inside a container, it allows the maximum heap size to approach 260MB. Because we added the flag -XX:+PrintFlagsFinal during container initialization, we can check this value:

$ docker logs mycontainer150 | grep -i MaxHeapSize

uintx MaxHeapSize := 262144000 {product}
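You can observe the same 1/4-of-physical-memory ergonomics on any machine with a JDK installed, outside of any container (this generic check is not part of the original demo):

$ java -XX:+PrintFlagsFinal -version | grep -i maxheapsize

On a host with 1GB of RAM it prints a value close to the 262144000 bytes shown above; on a bigger host the value grows with the physical memory.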

Second, we need to understand that when the "-m 150M" parameter is set on the docker command line, the Docker daemon limits RAM to 150MB and swap to 150MB. As a result, the process can allocate up to 300MB, which explains why our process did not receive any kill signal from the kernel.

More information on the differences between the memory limit (--memory) and swap (--memory-swap) on the Docker command line can be found here.
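If you want the -m value to be the real ceiling, you can set --memory-swap to the same value, which removes the extra swap allowance (this variation is my own addition, not part of the original demo):

$ docker run -it --rm -p 8080:8080 -m 150M --memory-swap 150M rafabene/java-container:openjdk

With that setting the process gets 150MB in total and is killed as soon as it crosses that limit.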

Is more memory a solution?

A developer who does not understand the problem may conclude that the environment simply does not provide enough memory for the JVM. The usual response is to give the runtime environment more memory, but that is in fact a misunderstanding.

Suppose we increase the Docker machine's memory from 1GB to 8GB (using the command "docker-machine create -d virtualbox --virtualbox-memory '8192' docker8192") and the container's limit from 150MB to 800MB:

$ docker run -it --name mycontainer -p 8080:8080 -m 800M rafabene/java-container:openjdk

Using the command "Curl Http://X51X:8080/api/memory" at this point does not return the result because the computed maxheapsize size in a JVM environment with 8GB of memory is 2092957696 (~ 2GB). You can use the command "Docker logs mycontainer|grep-i maxheapsize" to view.

The application will attempt to allocate more than 1.6GB of memory, which exceeds the container limit (800MB of RAM plus 800MB of swap), and the process will be killed.

Clearly, increasing the container's memory and letting the JVM pick its own parameters is not a good way to run a program in a container. When running a Java application in a container, we should set the maximum heap size (the -Xmx parameter) based on the application's needs and the container's limits.

What's the solution?

A slight change to the Dockerfile lets the user specify an environment variable that extends the JVM's command line. The modified line looks like this:

CMD java -XX:+PrintFlagsFinal -XX:+PrintGCDetails $JAVA_OPTIONS -jar java-container.jar

Now we can use the JAVA_OPTIONS environment variable to set the JVM heap size. 300MB seems to be enough for this application. Later you can check the logs and see a heap value of 314572800 bytes (300MiB).

In Docker you can set the environment variable with the "-e" switch:

$ docker run -d --name mycontainer8g -p 8080:8080 -m 800M -e JAVA_OPTIONS='-Xmx300m' rafabene/java-container:openjdk-env

$ docker logs mycontainer8g | grep -i MaxHeapSize

uintx MaxHeapSize := 314572800 {product}

In Kubernetes, you can set the environment variable with the "--env=[key=value]" switch:

$ kubectl run mycontainer --image=rafabene/java-container:openjdk-env --limits='memory=800Mi' --env="JAVA_OPTIONS='-Xmx300m'"

$ kubectl get pods

NAME                           READY     STATUS    RESTARTS   AGE
mycontainer-2141389741-b1u0o   1/1       Running   0          6s

$ kubectl logs mycontainer-2141389741-b1u0o | grep MaxHeapSize

uintx MaxHeapSize := 314572800 {product}

Can we improve it further?

Is there a way to automatically calculate the heap value based on the container's limits?

In fact, yes, if your Docker image is based on the Fabric8 images. The image fabric8/java-jboss-openjdk8-jdk uses a startup script that detects the container's memory limit and uses 50% of that value as the upper limit of the heap; note that the 50% ratio can be overridden. The image also lets you switch debugging, diagnostics, and other features on and off. Let's look at the Dockerfile of a Spring Boot application:

FROM fabric8/java-jboss-openjdk8-jdk:1.2.3
ENV JAVA_APP_JAR java-container.jar
ENV AB_OFF true
EXPOSE 8080
ADD target/$JAVA_APP_JAR /deployments/

That's it! Now, regardless of the container's memory limit, our Java application will always size its heap according to the container rather than the host.
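As a quick check, you can build and run the image above and read the heap ceiling back from the logs. The image tag and container name below are placeholders I made up, and the JAVA_OPTIONS variable is assumed to be honored by the Fabric8 startup script the same way the earlier image honored it:

$ docker build -t myorg/java-container:fabric8 .
$ docker run -d --name myfabric8 -p 8080:8080 -m 800M -e JAVA_OPTIONS='-XX:+PrintFlagsFinal' myorg/java-container:fabric8
$ docker logs myfabric8 | grep -i maxheapsize

With an 800MB limit you should see a heap ceiling of roughly half that value, since the startup script reserves 50% of the container memory for the heap.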

To summarize: as of today the JVM does not realize that it is running inside a container whose resources, such as memory and CPU, are limited. Therefore you cannot let the JVM ergonomics pick the maximum heap value it thinks is best.

One way to solve the problem is to use the Fabric8 base image, which knows that the application is running inside a restricted container and automatically adjusts the maximum heap value if you have not done so yourself.

Work to make the JVM aware of the cgroup memory limits in container (i.e. Docker) environments has already started in JDK 9. Related information: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/5f1d1df0ea49
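For completeness, later JDK builds exposed this work as an experimental flag. The invocation below is only an illustration (the flag is experimental, is not mentioned in the original post, and was later superseded by full container support in newer JDKs):

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar java-container.jar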
