How to use JMeter to simulate more than 50,000 concurrent users

This article describes, from a load-testing perspective, what it takes to run a smooth 50,000-user concurrency test.


Quick overview of the steps

    1. Write your script

    2. Use JMeter for local testing

    3. Run a BlazeMeter SandBox test

    4. Use one console and one engine to set the number of users per engine

    5. Set up and test your cluster (one console and 10-14 engines)

    6. Use the Master/Slave feature to reach your maximum concurrency target

Step 1: Write your script

Before you begin, make sure you have the latest version of JMeter from the Apache JMeter site, jmeter.apache.org.

You will also want to download the JMeter plugins, as they can make your work easier.

There are many ways to get a script:

    1. Use BlazeMeter's Chrome extension to record your scenario

    2. Use the JMeter HTTP(S) Test Script Recorder to set up a proxy, run your flow through it, and record everything

    3. Build everything manually from scratch (mostly for functional/QA tests)

If your script is the result of a recording (as in options 1 & 2), keep in mind:

    1. You will need to parameterize values such as username & password; you may want to put them in a CSV file so that each virtual user gets different values (see the sketch after this list)

    2. To complete requests such as "Add to Cart" and "Login", you may need to extract elements such as token strings and form build IDs, using the Regular Expression Extractor, JSON Path Extractor, or XPath Extractor

    3. Keep your script parameterized and use configuration elements, such as HTTP Request Defaults, to make it easier to switch between environments
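
For item 1, one common approach is to generate the credentials file up front. Below is a minimal Python sketch, assuming a hypothetical users.csv that a CSV Data Set Config would read into username and password variables; the file name, column names, and credential pattern are illustrative assumptions, not part of the original article.

    import csv

    # Write a users.csv that a CSV Data Set Config can read, so that each
    # virtual user logs in with its own credentials. The column names and the
    # "user<N>" / "Pass<N>!" pattern are illustrative assumptions.
    with open("users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password"])   # header matching the JMeter variable names
        for i in range(1, 501):                     # one row per virtual user
            writer.writerow([f"user{i}", f"Pass{i}!"])

In the test plan, the CSV Data Set Config would then reference only the file name users.csv, which also fits the "no local paths" advice in Step 2.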

Step 2: Use JMeter for local testing

Debug your script with 1 thread and 1 iteration, using the View Results Tree listener, Debug Samplers, Dummy Samplers, and the open Log Viewer (some JMeter errors are reported there).

Walk through all the scenarios (both positive and negative responses) to make sure the script behaves as expected.

After a successful single-thread test, raise it to 10-20 threads for 10 minutes and check:

    1. You wanted each user to be unique - are they?

    2. Did you get any errors?

    3. If you are testing a registration process, look at your backend: were the accounts created according to your template? Are they unique?

    4. Look at the Summary Report statistics - do they make sense? (Average response time, errors, hits per second)
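
If you save the results of your local runs to a CSV-format JTL file, a rough sketch like the one below can reproduce those Summary Report numbers. It assumes JMeter's default CSV columns (timeStamp, elapsed, success) with a header row; the file name results.jtl is just an example.

    import csv

    # Compute average response time, error %, and hits/sec from a CSV-format
    # JTL results file that uses JMeter's default columns and a header row.
    def summarize(jtl_path="results.jtl"):
        elapsed, stamps, errors = [], [], 0
        with open(jtl_path, newline="") as f:
            for row in csv.DictReader(f):
                elapsed.append(int(row["elapsed"]))        # response time in ms
                stamps.append(int(row["timeStamp"]))       # epoch ms of the sample
                if row["success"].lower() != "true":
                    errors += 1
        duration_s = max(1, (max(stamps) - min(stamps)) // 1000)
        return {
            "samples": len(elapsed),
            "avg_response_ms": sum(elapsed) / len(elapsed),
            "error_pct": 100.0 * errors / len(elapsed),
            "hits_per_sec": len(elapsed) / duration_s,
        }

    print(summarize())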

Once you have the script ready:

    1. Clean up the script: remove any Debug/Dummy Samplers and delete your listeners

    2. If you are using a listener such as "Save Responses to a file", or a CSV Data Set Config, make sure you are not using any local path; keep only the file name, as if the file sits in the same folder as your script (a quick way to check for stray paths follows this list)

    3. If you use your own proprietary JAR files, make sure they are uploaded as well

    4. If you are using more than one Thread Group (not just the default one), make sure the thread counts are set before uploading the script to BlazeMeter
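
Before uploading, you can do a quick sanity check for absolute paths left in the test plan. The sketch below scans a .jmx file for Windows drive letters and common Unix path prefixes; the patterns are a rough heuristic of my own, not an official check.

    import re
    import sys

    # Heuristic scan of a .jmx test plan for absolute file paths that would
    # break once the script runs on a remote BlazeMeter engine.
    ABS_PATH = re.compile(r'(?:[A-Za-z]:\\|/(?:home|Users|tmp)/)[^<"\r\n]*')

    def find_absolute_paths(jmx_file):
        with open(jmx_file, encoding="utf-8") as f:
            return ABS_PATH.findall(f.read())

    if __name__ == "__main__":
        jmx = sys.argv[1] if len(sys.argv) > 1 else "test.jmx"
        hits = find_absolute_paths(jmx)
        print("\n".join(hits) if hits else "No absolute paths found")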

Step 3: BlazeMeter SandBox test

If this is your first test, you should review this article on how to create tests in BlazeMeter.

The SandBox test configuration is limited to 300 users, 1 console, and 50 minutes.

The SandBox lets you test your script behind the scenes and make sure everything runs properly on BlazeMeter.

To do this, first press the grey button "Tell the JMeter engine I want Full Control!" to get complete control over your test parameters.

Problems you will usually encounter:

    1. Firewalls: make sure your environment is open to BlazeMeter's CIDR list (note that it may be updated over time) and whitelist those ranges

    2. Make sure all your test files, such as CSVs, JARs, JSON files, user.properties, etc., are available (uploaded with the script)

    3. Make sure you are not using any local file paths

If you still have a problem, look at the error log (you should be able to download the entire log).

A typical SandBox configuration could be:

    • Engines: console only (1 console, 0 engines)

    • Threads: 50-300

    • Ramp-up: 20 minutes

    • Iterations: forever (keep the test running)

    • Duration: 30-50 minutes

This gives you enough data during the ramp-up (in case you run into problems) and lets you analyze the results to make sure the script executes as expected.

Look at the Waterfall/WebDriver tab to check that the requests look normal; you should not see any problems there (unless they are intentional).

Keep an eye on the Monitoring tab for memory and CPU consumption; this is the data you will use in Step 4 to decide how many users to put on each engine.

Step 4: Use one console and one engine to set the number of users per engine

Now that we are sure the script runs well on BlazeMeter, we need to figure out how many users we can put on a single engine.

Ideally you could make this decision from the data you already gathered in the SandBox runs.

Here is a way to work out this number without going back to the SandBox test data.

To set up your test configuration:

    • Number of Threads: 500

    • Ramp-up: 40 minutes

    • Iterations: forever

    • Duration: 50 minutes

Use one console and one engine.

Run the tests and monitor your test engine (through the Monitoring tab).

If your engine does not reach 75% CPU usage or 85% memory usage (one-off spikes can be ignored):

    • Raise the number of threads to 700 and run the test again

    • Keep raising the number of threads until you reach 1,000 threads per engine, or 60% CPU or memory usage

If your engine goes over 75% CPU usage or 85% memory usage (one-off spikes can be ignored):

    • Check how many concurrent users you had the first time you hit the 75% mark

    • Rerun the test with that number of users instead of the previous 500-user ramp-up

    • This time, use a realistic ramp-up (5-15 minutes is a good start) and set the duration to 50 minutes

    • Make sure CPU usage stays below 75% and memory usage below 85% for the entire test

To be on the safe side, reduce the number of threads per engine by 10%.
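
To make the sizing rule concrete, here is a small sketch that picks the highest thread count whose CPU and memory stayed under the limits and then applies the 10% safety margin; the monitoring samples are invented numbers, not measurements from the article.

    # Apply the 75% CPU / 85% memory rule and the 10% safety margin.
    CPU_LIMIT, MEM_LIMIT, SAFETY = 75.0, 85.0, 0.90

    # (threads, cpu %, memory %) readings from the Monitoring tab - made up
    # for illustration only.
    samples = [(500, 55, 60), (700, 66, 72), (850, 74, 80), (1000, 82, 88)]

    within_limits = [t for t, cpu, mem in samples if cpu < CPU_LIMIT and mem < MEM_LIMIT]
    threads_per_engine = int(max(within_limits) * SAFETY)
    print(f"Use about {threads_per_engine} threads per engine")   # -> about 765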

Step 5: Set up and test your cluster

We now know how many threads we can get from a single engine; by the end of this step we will know how many users a whole cluster can give us.

A cluster is a logical container with one console (and only one) and 0-14 engines.

You can create a test that uses more than 14 engines, but behind the scenes it will actually create two clusters (you will notice the number of consoles increase) and clone your test.

The limit of 14 engines per cluster is based on BlazeMeter's own testing, which ensures the console can handle the load of 14 engines and the large amount of data they generate.

So in this step we reuse the Step 4 test and simply raise the number of engines to 14.

Run the test for its full duration. While it is running, open the Monitoring tab and verify that:

1. No engine exceeds the limits of 75% CPU usage and 85% memory usage;

2. Your console stays under those limits too: locate the console's tab (you can find its name by clicking the Logs tab -> Network Information and checking the console's private IP address); it should not reach 75% CPU usage or 85% memory usage.

If your console does reach those limits, reduce the number of engines and rerun until the console stays under them.

At the end of this step, you will know:

1. The number of users each cluster can support;

2. The hits per second each cluster can produce.

Review the other statistics in the Aggregate table and the Load Results charts to get more information about your cluster's throughput.
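
As a rough way to turn the per-engine number into "users per cluster" and "hits per second per cluster", here is a sketch. The per-engine figure, response time, and think time are assumptions for illustration; the throughput estimate uses Little's law rather than anything measured in the article.

    # Rough cluster capacity estimate from the Step 4 result.
    threads_per_engine = 500      # assumed result of Step 4
    engines_per_cluster = 14      # the maximum BlazeMeter allows per cluster

    users_per_cluster = threads_per_engine * engines_per_cluster     # 7,000 users

    # Little's law: throughput ~ concurrent users / (response time + think time).
    avg_response_s = 0.8          # assumed average response time
    think_time_s = 4.2            # assumed think time between requests
    hits_per_sec_per_cluster = users_per_cluster / (avg_response_s + think_time_s)

    print(users_per_cluster, round(hits_per_sec_per_cluster))        # 7000 1400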

Step 6: Use the Master/Slave feature to reach your maximum concurrency target

We are at the last step.

We know that the script is running, and we also know how many users an engine can support and how many users a cluster can support.

Let's assume:

    • One engine supports 500 users

    • One cluster can hold 12 engines

    • Our goal is to test 50,000 users

So in order to accomplish this, we need 8.3 clusters.

We could use 8 clusters of 12 engines plus one cluster of 4 engines, but it is better to distribute the load as follows:

We use 10 engines per cluster instead of 12, so each cluster supports 10 * 500 = 5K users, and we need 10 such clusters to reach 50,000 users (a quick sanity check of this arithmetic follows the list of benefits below).

This gives you the following benefits:

    1. No need to maintain two different test configurations

    2. We can add another 5K users simply by cloning an existing cluster (5K is a rounder number to work with than 6K)

    3. We can keep scaling up for as long as we need to
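
Here is the quick sanity check of the cluster arithmetic above, restating the assumptions of 500 users per engine and a 50,000-user target:

    import math

    users_per_engine = 500
    target_users = 50_000

    # Option A: fill clusters with 12 engines each (8 full clusters + 1 partial).
    per_cluster_12 = 12 * users_per_engine                  # 6,000 users per cluster
    print(math.ceil(target_users / per_cluster_12), "clusters of up to 12 engines")

    # Option B (preferred): identical clusters of 10 engines, 5K users each.
    per_cluster_10 = 10 * users_per_engine                  # 5,000 users per cluster
    print(target_users // per_cluster_10, "identical 10-engine clusters")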

Now we are ready to create the final 50,000-user Master/Slave test:

    1. Change the name of the test from "My prod test" to "My prod test-slave 1".

    2. Take the Step 5 test and, under Advanced Test Properties, change "standalone" to "slave".

    3. Press the Save button - we now have the first of our 9 slaves.

    4. Return to your "My prod test-slave 1".

    5. Press the Copy button.

    6. Repeat steps 1-5 until you have created 9 slaves.

    7. Go back to your "My prod test-slave 9" and press the Copy button.

    8. Change the name of the test to "My prod test-master".

    9. Under Advanced Test Properties, change "slave" to "master".

    10. Check all the slaves we just created (My prod test-slave 1..9) and press Save.

Your 50,000-user Master/Slave test is ready. Pressing the Start button on the master will run 10 tests with 5,000 users each.

You can modify any of the tests (slave or master) so that they run from different regions, use different scripts/CSVs/other files, different network emulation settings, different parameters, and so on.

You will find the aggregated results in a new tab of the master's report called "Master Load Results", and you can still view each test's results independently by opening its own report.
