OpenFire Cluster Investigation: Load-Testing Notes

Source: Internet
Author: User

One. (Test time: 2015-12-20, 14:00-17:00)

Windows environment

First test, run inside Eclipse: 40,000 connections (4w; throughout, "w" stands for 万, i.e. 10,000) used up about 1 GB of memory.

Also: with the service still up, force-killing the load-test client left a large number of sessions to be cleaned up; during the cleanup the server effectively denied service and no new connections could be established.

Is session close blocking on a lock?

Two. (Test time: 2015-12-23, 18:00-19:20)

Linux 64-bit virtual machine, cluster enabled (15 connections per second)
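A rate such as "15 connections per second" is expressed in Tsung's scenario file as the user arrival rate of a load phase. A minimal sketch of that fragment (the host address, phase duration, and phase name here are illustrative, not taken from these tests):

```xml
<!-- Fragment of a Tsung scenario file (tsung.xml). -->
<servers>
  <server host="192.0.2.10" port="5222" type="tcp"/>
</servers>

<load>
  <!-- Open 15 new XMPP connections per second for 60 minutes. -->
  <arrivalphase phase="1" duration="60" unit="minute">
    <users arrivalrate="15" unit="second"/>
  </arrivalphase>
</load>
```

Each arriving Tsung user then runs the jabber session defined later in the scenario against port 5222.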

At 50,000 connections the server was sluggish but still usable; at 59,000 it suddenly froze completely and no new connections could be established.

telnet 127.0.0.1 5222: no response.

Memory as seen through the cluster console was normal, with no high occupancy; the heap after GC was under 1 GB.

Three. (Test time: 2015-12-23, 18:00-19:40)

Linux 64-bit virtual machine, no cluster (40 connections per second, 2 Tsung clients)

At 20,000 connections the server began to clog; logging in with the Psi client took 3 minutes.

After 40,000 connections, Psi login took 6 minutes; each message took 1 minute 30 seconds to 2 minutes round-trip, and the whole login process took 15 minutes.

After 59,000 connections Psi could not log in at all, same symptoms as above.

telnet 127.0.0.1 5222: no response.

Note: the Linux network parameters can be tuned.
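The write-up does not say which parameters were changed. For a server holding tens of thousands of TCP connections, the usual candidates are the per-process file-descriptor limit and a few kernel settings; the values below are illustrative, not the ones used in these tests:

```shell
# /etc/security/limits.conf -- each connection costs one file descriptor
#   openfire  soft  nofile  200000
#   openfire  hard  nofile  200000

# /etc/sysctl.conf -- apply with `sysctl -p`
fs.file-max = 1000000                      # system-wide FD ceiling
net.core.somaxconn = 4096                  # accept() backlog length
net.ipv4.tcp_max_syn_backlog = 8192        # half-open connection queue
net.ipv4.ip_local_port_range = 1024 65535  # more ephemeral ports (matters on the test-client side)
```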

Four. (Test time: 2015-12-24, 8:20-11:00)

Linux 64-bit virtual machine, no cluster (30 connections per second, 2 Tsung clients)

At 20,000 connections the server began to clog; Psi login took 1 minute. By 10:00 the connection count peaked at 100,000.

At this point telnet 127.0.0.1 5222 connected normally.

Conclusion: after tuning the network configuration, the machine ran normally at 100,000 concurrent connections.

Five. (Test time: 2015-12-24, 16:20-11:00)

Windows environment, 64-bit, cluster enabled (15+7 connections per second)

50,000 connections were fine, with 1-second logins. At 80,000 concurrent connections stuttering set in and login took 15 seconds. At 90,000 connections the server was using 1.2 GB of memory (after a full GC).

However, once the Tsung clients stopped initiating new connections, access became fast again. CPU usage was high, above 70%.

The OpenJDK JVM did not hold up.

Six. (Test time: 2015-12-24, 18:20-21:00)

Linux environment, 64-bit, cluster enabled (9+9 connections per second)

30,000 connections were fine, with 1-second logins. At 100,000 connections the server was using 1.2 GB of memory (after a full GC).

Seven. (Test time: 2015-12-25, 8:40-12:00)

Linux environment, 64-bit (Oracle JDK), cluster enabled (20 connections per second)

60,000 connections were fine, with 1-second logins. At 100,000 connections the server was using 1.2 GB of memory (after a full GC); CPU and memory stayed normal. Using Oracle's HotSpot JVM is worthwhile.
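The recurring figure of roughly 1.2 GB after a full GC at 100,000 connections implies a per-connection heap cost that is useful for capacity planning. A quick back-of-the-envelope check using only the numbers reported above:

```python
# Per-connection heap cost implied by the measurements above.
GB = 1024 ** 3

connections = 100_000      # "10w" connections
heap_used = 1.2 * GB       # heap occupancy after a full GC

per_conn_kib = heap_used / connections / 1024
print(f"~{per_conn_kib:.1f} KiB per connection")          # ~12.6 KiB

# Projection for the later 500,000-connection (50w) runs:
heap_needed_gb = heap_used / connections * 500_000 / GB
print(f"~{heap_needed_gb:.1f} GB of heap for 50w users")  # ~6.0 GB
```

The 6 GB projection matches the heap that the final 500,000-user run below actually allocated.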

Eight. (Test time: 2015-12-28, 17:30-20:30)

Linux environment, 64-bit (Oracle JDK), cluster enabled (20 connections per second)

Three Tsung clients tested simultaneously: 100,000 connections used 1.2 GB of memory, and 160,000 connections used 2 GB. With only 3 GB of heap allocated, GC frequency rose and stuttering appeared.
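Symptoms like these (rising GC frequency and stutter as the live set approaches the heap ceiling) are usually addressed by raising the heap limit and logging GC activity. A sketch of the relevant HotSpot flags for the JDK 7/8 era used here; the sizes are illustrative, and where to set them depends on how OpenFire is launched (startup script or service wrapper):

```shell
# Illustrative JVM options for the OpenFire process:
#   -Xms4g -Xmx4g                  fixed 4 GB heap, avoids resize pauses
#   -XX:+PrintGCDetails            log every collection
#   -XX:+PrintGCDateStamps         wall-clock timestamps in the log
#   -Xloggc:/tmp/openfire-gc.log   GC log destination
```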

Nine. (Test time: 2016-01-17, 9:30-12:30)

Linux environment, 64-bit (Oracle JDK), cluster enabled (100 connections per second)

1. Serialization optimized

2. Tsung ran no chat transactions, only logins

Three Tsung clients tested simultaneously: 180,000 connections used 1.4 GB of memory.


Ten. (Test time: 2016-01-19, 18:00-21:30)

Linux environment, 64-bit (Oracle JDK), cluster enabled (90 connections per second)

Started 3 servers, each given 4 GB of memory; 2 of them ran on the same physical machine.

Started 9 Tsung clients, each allocated 1 GB of memory; running 3 per CentOS host was already the limit, with the hosts' memory essentially exhausted.

After running for 1 hour 50 minutes, at approximately 350,000 connections, the server suddenly stopped responding; Psi could not make new connection requests either.

Eleven. (Test time: 2016-01-20, 09:30-11:30)

1 Windows machine (64-bit) and 1 Linux machine (64-bit, 5 GB memory), cluster enabled (100 connections per second)

The early phase ran normally; when the Linux node reached 140,000 users the system stalled, spending all its time in GC, and this affected the other node too. Neither machine could accept logins.

Follow-up plan: adjust the heap size, modify the Hazelcast eviction strategy, rerun on Linux, and trace the GC.
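For reference, in the Hazelcast 3.x generation that OpenFire's clustering plugin used around this time, eviction is configured per distributed map in hazelcast.xml. A sketch of such a fragment (the map name and limits are illustrative, not OpenFire's actual cache names):

```xml
<!-- hazelcast.xml fragment: bound a distributed map and evict
     least-recently-used entries instead of growing until GC thrashes. -->
<map name="example-cache">
  <eviction-policy>LRU</eviction-policy>
  <!-- evict once this map's share of the heap exceeds 512 MB -->
  <max-size policy="USED_HEAP_SIZE">512</max-size>
  <time-to-live-seconds>3600</time-to-live-seconds>
</map>
```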

Twelve. (Test time: 2016-01-21, 18:00-19:30)

1 Windows machine (64-bit) and 1 Linux machine (64-bit, 5 GB memory), cluster enabled (120 connections per second)

With the Windows heap at 5 GB and the Linux heap at 5 GB, the user count on both machines grew steadily, reaching 190,000+.

(Screenshots of connection counts and memory usage not preserved.)

However, on Linux the old generation would occasionally fill up suddenly; this still happened after switching to JDK 1.7 (a JVM issue?).
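A sudden old-generation fill-up can be watched from outside the process with the JDK's own tools; nothing OpenFire-specific is needed. For example (replace <pid> with the OpenFire Java process id):

```shell
# Old-gen occupancy (O column, percent) and full-GC count (FGC), every 5 s:
jstat -gcutil <pid> 5000

# When old gen spikes, a live-object histogram shows what is being retained:
jmap -histo:live <pid> | head -30
```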

Thirteen. (Test time: 2016-01-24, 11:00-12:30)

1 Windows machine (64-bit, 16 GB) and 2 Linux machines (64-bit VMs, 6 GB memory), cluster enabled (125 connections per second)

Of the 6 GB, 5 GB was allocated to OpenFire, which was clearly not enough. After running for 1.5 hours the Linux nodes' memory filled up and GC could no longer reclaim anything.

Fourteen. (Test time: 2016-01-24, 18:15-20:20)

3 Linux machines (64-bit VMs, 8 GB memory), cluster enabled (100 connections per second)

6 GB of heap was allocated to OpenFire. After running for 2 hours, users reached 500,000 (of the 9 Tsung clients, 2 managed only 50,000 connections each and 1 only 30,000).
