Setting the JVM heap size properly is a deep topic: it requires a solid understanding of your application's architecture as well as a good grasp of the JVM's internal mechanisms.
First, you need a baseline setting for the JVM heap size and you need to monitor it. See the article "5 tips for proper Java heap size" for general suggestions; it mainly covers basic JVM configuration knowledge, the concepts you need to understand, and general experience, but it does not give concrete, actionable steps. In practice every system is different, much as treatment varies from patient to patient: you have to find the heap size that suits your own system and your memory budget.
The heap is divided mainly into two regions: the young generation and the old generation. Newly created objects live in the young generation. If an object's reference is held by a container, a static field, or another long-lived object, or if it is used repeatedly rather than discarded after use, the JVM gradually promotes (copies) it into the old generation. If you use a cache, most cached objects end up in the old generation; for example, JdonFramework and JiveJdon enable a cache by default, following an in-memory computing model, that is, in-memory state management. The size settings of these two heap regions are therefore quite important. The following is an example along the lines of the JiveJdon settings:
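The original configuration values are not reproduced here, so this is only a minimal sketch of how such settings look as HotSpot options; the sizes are hypothetical and chosen merely to match the 1.3 GB old generation used in the discussion below:

    # Hypothetical sizes for illustration only, not the actual JiveJdon settings.
    # Passed to the JVM (for Tomcat, typically via JAVA_OPTS or CATALINA_OPTS).
    -Xms1800m               # initial heap size; keeping Xms equal to Xmx avoids resize pauses
    -Xmx1800m               # maximum heap size
    -Xmn512m                # young generation size; the remainder (about 1.3 GB) is the old generation
    -XX:SurvivorRatio=8     # Eden : each survivor space = 8 : 1 within the young generation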
In production we need to monitor the sizes of the young and old generations. Depending on traffic and the CMS settings, the size of the old generation in particular changes frequently, so we use psi-probe for this monitoring.
In the initial stage, the heap is configured with a chosen ratio of young generation to old generation. The choice is also related to the cache's idle-expiration setting: an object that sits idle in the cache beyond that time is evicted, similar to the HttpSession timeout mechanism.
If the cache expiration time is set too short, old generation utilization stays low; for example, a 1.3 GB old generation may use only 300 MB. You can, of course, also delay the point at which CMS starts, for example beginning collection at 70% occupancy of the old generation, which here is close to 1 GB. However, when a traffic peak arrives, collection may not keep up and you get an OOM (OutOfMemoryError).
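For reference, the CMS start threshold mentioned above is controlled by standard HotSpot flags; a sketch, where the 70% figure is simply the value from the example, not a universal recommendation:

    -XX:CMSInitiatingOccupancyFraction=70   # start a CMS cycle when the old generation is 70% full
    -XX:+UseCMSInitiatingOccupancyOnly      # always honor this threshold instead of the JVM's own heuristic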
If the cache expiration time is too long, the old generation fills up and CMS runs frequently to little effect, adding pauses and increasing system latency.
So the cache duration or cache size has to be fine-tuned continuously over a long period in production, based on the behavior of the JVM's old generation. Psi-probe plays a key role here: its graphical, intuitive display is easier to work with than the JDK's jconsole and similar tools.
The above covered the relationship between heap sizing and the cache. Because our application is based on DDD and its entities hold state resident in memory, the heap size is crucial to the application.
The heap-size trade-off is also tied to two architectural metrics: latency and throughput. We always want high throughput together with low latency. A larger heap gives higher throughput, but only if it does not trigger garbage collection too frequently; otherwise pauses grow and latency rises, so the two goals conflict. In our setup the young generation is collected in parallel and the old generation uses the concurrent CMS collector.
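A sketch of the corresponding collector flags, assuming the common HotSpot pairing of the parallel ParNew collector in the young generation with CMS in the old generation:

    -XX:+UseParNewGC            # parallel (multi-threaded) collection of the young generation
    -XX:+UseConcMarkSweepGC     # CMS: mostly concurrent collection of the old generation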
First we need accurate latency and throughput figures for our system. We use JavaMelody to monitor HTTP request response times, which is effectively monitoring latency, the key system indicator; the result of any JVM fine-tuning ultimately shows up in this metric.
The other fine-tuning metric is throughput. We can use jsvnstat plus SNMP to obtain the daily throughput. For example, if it is 400 MB per day and the cache duration is 24 hours, the old generation must be at least 400 MB.
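Spelled out, the rough reasoning behind that floor looks like this:

    new cached data per day   ~ 400 MB
    cache retention           = 24 hours (one day)
    old generation minimum    >= 400 MB for the cache alone,
                                 plus headroom for other long-lived objects and for CMS to work in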
We have discussed how to observe heap size, especially the size of the old generation. Next we look at observing finer details inside the heap. What we care about most is which classes occupy the most memory in the old generation and which are prone to memory leaks, since a leak eventually causes an OOM.
There are many tools: online profilers such as JProfiler and VisualVM, and offline sampling via heap dumps. In production, once psi-probe and JavaMelody reveal problems such as a steadily growing old generation, garbage collection that seems ineffective, and constantly increasing response latency, offline sampling is the more appropriate choice.
There are several tools for offline sampling; see the article "Java heap dump trigger and analysis". The key step is to run jmap -heap:format=b <pid> on the production server to obtain a heap.bin file. Taking the dump consumes CPU and has a noticeable impact on system latency.
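The exact jmap syntax depends on the JDK version; as a sketch:

    # Older (JDK 5) syntax, as referenced above: writes heap.bin to the current directory
    jmap -heap:format=b <pid>

    # JDK 6+ syntax: choose the output file explicitly
    jmap -dump:format=b,file=heap.bin <pid>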
Download heap.bin and open it in an analysis tool. We recommend Eclipse MAT; other tools do not always keep up with JDK updates, which can cause minor problems.
As for how to find the culprit behind a sudden growth of the old generation, or even an OOM, refer to the article "hprof: Memory Leak analysis tutorial".
Conclusion: once we use the tools and ideas above to understand and monitor our system's JVM, we hold the key to operating the application. Through continuous fine-tuning we can provide uninterrupted service with almost no downtime, and high availability becomes a reality.
Related: JVM Series II: JVM heap size suggestions