This article describes what it takes to run a smooth 50,000-user concurrency test, from a load-testing perspective.
You can see the discussion record at the end of this article.
Quick summary of the steps:
Write your script
Test it locally in JMeter
Run a BlazeMeter SandBox test
Use one console and one engine to set the number of users per engine
Set up and test your cluster (one console and 10-14 engines)
Use the Master/Slave feature to reach your maximum CC target
Step 1: Write your script
Before you begin, make sure you have the latest JMeter version from the Apache community at jmeter.apache.org.
You will also want to download these add-on plug-ins, as they can make your job much easier.
There are several ways to create a script:
Use the BlazeMeter Chrome Extension to record your scenarios
Use the JMeter HTTP(S) Test Script Recorder to set up a proxy, run your flows through it, and record everything
Build it manually from scratch (probably for a functional/QA test)
If your script is the result of a recording (options 1 and 2 above), keep in mind:
You will need to change certain parameters, such as username & password; you may want to set up a CSV file with a different value for each user.
You may need to use extractors (Regular Expression, JSON Path, XPath) to capture dynamic values such as tokens and form build IDs in order to complete requests like login or add-to-cart.
Keep your script parameterized, and use configuration elements such as HTTP Request Defaults to make switching between environments easier.

Step 2: Use JMeter for local testing
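As a concrete sketch of the per-user CSV mentioned in step 1 (the file name and column names are assumptions, not requirements), you might prepare the test data like this:

```shell
# Create a sample users.csv for a CSV Data Set Config element.
# The column names map to ${username} and ${password} through the
# element's "Variable Names" field; every thread gets its own row.
cat > users.csv <<'EOF'
username,password
user1,s3cret1
user2,s3cret2
user3,s3cret3
EOF

# Reference the file by bare name ("users.csv") in the JMX, so the
# script keeps working after upload, where the CSV sits next to it.
head -n 1 users.csv   # prints: username,password
```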
Debug your script with 1 thread and 1 iteration, using View Results Tree listeners, debug samplers, dummy samplers, and the Log Viewer (some JMeter errors are reported there).
Walk through all scenarios (both true and false responses) to make sure the script behaves as expected.
After a successful single-thread run, raise it to 10-20 threads for 10 minutes and check:
Did you intend each user to be independent? Is that really the case?
Are you getting any errors?
If you simulate a registration process, look at your backend: were the accounts created according to your template? Are they unique?
Do the statistics in the Summary Report make sense? (average response time, errors, hits per second)
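The 10-20-thread check above can also be run headless. This sketch assembles JMeter's standard non-GUI command (the script and output names are assumptions):

```shell
# Flags: -n non-GUI mode, -t test plan, -l results log,
# -e -o generate an HTML dashboard into the given folder.
JMX=my_script.jmx
CMD="jmeter -n -t $JMX -l results.jtl -e -o report/"
echo "$CMD"
# Run it with JMeter's bin directory on your PATH; the console
# summariser then shows average response time, errors and hits/sec.
```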
Once the script is ready:
Clean it up: remove any debug and dummy samplers, and delete your listeners.
If you do use a listener (such as "Save Responses to a file") or a CSV Data Set Config, make sure you don't use any local path; use just the filename, as if the file were in the same folder as your script.
If you use your own proprietary JAR files, make sure they are uploaded as well.
If you use more than one Thread Group (not the default), make sure you set the values before uploading the script to BlazeMeter.

Step 3: BlazeMeter SandBox test
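Before uploading, you can sanity-check the plan for absolute file paths, which break once the script runs on a remote engine. A quick grep sketch (the file name and the sample line are made up):

```shell
# A line as it might appear in a JMX that still points at a local path:
printf '<stringProp name="filename">/home/me/users.csv</stringProp>\n' > demo.jmx

# Flag absolute Unix- or Windows-style paths; CSV Data Set Config and
# "Save Responses to a file" entries should carry bare filenames only.
grep -qE '(/home/|/Users/|[A-Za-z]:\\)' demo.jmx \
  && echo "absolute path found" \
  || echo "clean"   # prints: absolute path found
```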
If this is your first test, you should review this article on how to create tests in BlazeMeter.
Set the SandBox test configuration to 300 users, 1 console, for 50 minutes.
The SandBox run lets you test your script and your backend and make sure everything works well on BlazeMeter.
To do this, first press the grey button that tells the JMeter engine you want full control, so that you gain complete control over your test parameters.
Common problems you may encounter:
Firewalls: make sure your environment is open to BlazeMeter's CIDR list (it is kept up to date) and whitelist those ranges.
Make sure all your test files (CSVs, JARs, JSON, user.properties, and so on) are available.
Make sure you don't use any local paths.
If you still have problems, look at the error log (you can download the entire log).
A SandBox configuration could look like this:
Engines: console only (1 console, 0 engines)
Threads: 50-300
Ramp-up: 20 minutes
Iterations: forever
Duration: 30-50 minutes
This gives you enough data during the ramp-up (in case you hit a problem), and you will be able to analyze the results to make sure the script executes as expected.
Look at the Waterfall/WebDriver tab to check that the requests look normal; you should not see any problems at this point (unless they are deliberate).
Look at the Monitoring tab and watch the memory and CPU consumption: these determine the number of users per engine that you will set in step 4.

Step 4: Use 1 console and 1 engine to set the number of users per engine
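One way to keep the Thread Group values adjustable between the SandBox run and later steps is to read them from JMeter properties (the property names here are assumptions):

```shell
# In the JMX, set the Thread Group fields to ${__P(threads,1)},
# ${__P(rampup,60)} and ${__P(duration,300)}; then override per run.
THREADS=300        # users
RAMPUP=1200        # 20-minute ramp-up, in seconds
DURATION=3000      # 50-minute duration, in seconds
CMD="jmeter -n -t my_script.jmx -Jthreads=$THREADS -Jrampup=$RAMPUP -Jduration=$DURATION"
echo "$CMD"
```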
Now that we know the script runs well in BlazeMeter, we need to figure out how many users we can put on a single engine.
If you can make this decision from the SandBox data, great!
If not, here is a way to figure out this number without relying on the SandBox test data.
Set your test configuration to:
Number of threads: 500
Ramp-up: 40 minutes
Iterations: forever
Duration: 50 minutes
Use one console and one engine.
Run the test and monitor your engine (via the Monitoring tab).
If your engine does not reach 75% CPU usage or 85% memory usage (a one-time peak can be ignored):
Raise the number of threads to 700 and test again.
Keep raising the number of threads until you reach 1,000 threads or 60% CPU or memory usage.
If your engine goes above 75% CPU usage or 85% memory usage (a one-time peak can be ignored):
Check how many concurrent users were running the first time you hit 75%.
Rerun the test with that number of threads instead of the previous 500.
This time, use the ramp-up you intend for the real test (5-15 minutes is a good start) and set the duration to 50 minutes.
Make sure CPU usage stays below 75% and memory usage below 85% throughout the test.
To be on the safe side, reduce the number of threads per engine by 10%.

Step 5: Set up and test the cluster
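The calibration rule from step 4 (take the concurrency at which the engine first crossed 75% CPU or 85% memory, then shave off 10% for safety) is simple arithmetic:

```shell
# Concurrency at the moment the engine first hit 75% CPU
# (read it off the Monitoring tab); 500 here is only an example.
USERS_AT_LIMIT=500

# Apply the recommended 10% safety reduction
SAFE_USERS_PER_ENGINE=$(( USERS_AT_LIMIT * 90 / 100 ))
echo "$SAFE_USERS_PER_ENGINE"   # prints: 450
```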
We now know how many threads we can get from one engine; by the end of this step we will know how many users a cluster can give us.
A cluster is a logical container with one console (exactly one) and 0-14 engines.
You can create a test that uses more than 14 engines, but it will actually create two clusters (you will notice the number of consoles increase) and clone your test.
The limit of 14 engines per cluster is based on BlazeMeter's own testing, which ensures the console can handle the load of 14 engines producing large amounts of data.
So in this step we take the test from step 4 and simply raise the number of engines to 14.
Run the test for the full duration of the final test. While it is running, open the Monitoring tab and verify that:
1. No engine exceeds the limits of 75% CPU usage and 85% memory usage;
2. Your console (you can find its name via the Logs tab -> Network Information, by looking for the console's private IP address) also stays below the limits of 75% CPU usage and 85% memory usage.
If your console reaches those limits, lower the number of engines and rerun until the console stays below them.
At the end of this step you will know:
1. The number of users per cluster;
2. The hit rate per cluster.
Look at the other statistics in the Aggregate Table to get more information about your cluster's throughput.

Step 6: Use the Master/Slave feature to reach your maximum CC target
We have reached the last step.
We know the script runs, and we know how many users one engine and one cluster can support.
Let's assume:
One engine supports 500 users
One cluster can run 12 engines
Our target is a 50,000-user test
To reach it we would need 8.3 clusters.
We could use 8 clusters of 12 engines plus one cluster of 4 engines, but it is better to spread the load like this:
If each cluster uses 10 engines instead of 12, each cluster supports 10 * 500 = 5,000 users, and we need 10 clusters to reach 50,000 users.
This gives you the following benefits:
No need to maintain two different test configurations
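The sizing above can be double-checked with a few lines of arithmetic (the numbers are the hypothetical values from this step):

```shell
USERS_PER_ENGINE=500
TARGET=50000

# Option A: 12 engines per cluster -> 8 full clusters, 2,000 users left
# over, which would need an extra 4-engine cluster.
FULL_CLUSTERS=$(( TARGET / (12 * USERS_PER_ENGINE) ))
echo "$FULL_CLUSTERS"    # prints: 8

# Option B: 10 engines per cluster -> 5,000 users each, dividing evenly.
USERS_PER_CLUSTER=$(( 10 * USERS_PER_ENGINE ))
CLUSTERS=$(( TARGET / USERS_PER_CLUSTER ))
echo "$CLUSTERS"         # prints: 10
```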