Background
We want to choose a web service performance testing tool that can realistically simulate a large number of users visiting a site, in order to measure the server's current request-handling capacity (requests/sec).
Take such a server as an example: each user logs in with their own token and performs a variety of operations, such as refreshing messages, sending messages, browsing the friends circle ("Moments"), and so on.
The performance test tool should meet the following requirements:
1. Scriptable tests, preferably in a widely used language such as Python or Ruby.
2. Each concurrent instance can use different parameters.
3. Tests can be started from the CLI, which is important for automated testing.
4. Session support: values from an earlier response can be used as parameters of subsequent requests.
5. High concurrency on a single node.
6. Distributed support, so tests are not limited by the computing power of a single node.
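Requirements 2 and 4 can be sketched in a few lines of plain Python. This is only an illustration of the two requirements, not any tool's actual API; the feeder class, the fake token, and the request shape are all hypothetical.

```python
import csv
import io
import random

class CsvFeeder:
    """Pick a random parameter row, so concurrent users differ (requirement 2)."""
    def __init__(self, text):
        self._rows = list(csv.reader(io.StringIO(text)))

    def next(self):
        return random.choice(self._rows)

def run_user(feeder):
    user, password = feeder.next()
    session = {}                          # per-user state (requirement 4)
    # Imagine the login response carried a token; save it ...
    session["token"] = "token-for-" + user
    # ... and reuse it as a parameter of the next request.
    return {"path": "/messages", "auth": session["token"]}

feeder = CsvFeeder("alice,pw1\nbob,pw2\ncarol,pw3")
request = run_user(feeder)
```

Gatling's `feed(csv(...).random)` and `check(...).saveAs(...)` (shown below) are the real-world equivalents of these two hypothetical pieces.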
Performance Test Tool contestant: Gatling
http://gatling.io/
Gatling is a high-performance, Scala-based server performance testing tool, used primarily to load-test servers and to analyze and measure various server performance metrics. Gatling mainly targets HTTP-based servers such as web applications and RESTful services. Its features include:
- Built on Akka actors and async IO for high performance
- Generates lightweight, dynamic HTML reports in real time, making results easier to read and analyze
- Provides a DSL for test scripts, making them easier to develop and maintain
- Can record sessions and generate test scripts from them
- Can import HAR (HTTP Archive) files and generate test scripts from them
- Supports development with Maven, Eclipse, IntelliJ, and so on
- Integrates with Jenkins for continuous integration
- Supports plug-ins to extend its functionality, for example to add support for other protocols
- Open source and free
Sample test Scenario:
http://gatling.io/docs/2.1.7/advanced_tutorial.html
```scala
object Search {

  val feeder = csv("search.csv").random // 1, 2

  val search = exec(http("Home")
    .get("/"))
    .pause(1)
    .feed(feeder) // 3
    .exec(http("Search")
      .get("/computers?f=${searchCriterion}") // 4
      .check(css("a:contains('${searchComputerName}')", "href").saveAs("computerURL"))) // 5
    .pause(1)
    .exec(http("Select")
      .get("${computerURL}")) // 6
    .pause(1)
}
```
Statistics Chart:
nGrinder
The official website is very slow to load, really very slow ...
http://naver.github.io/ngrinder/
nGrinder is a very easy-to-manage and easy-to-use performance testing system built on top of Grinder.
It consists of a controller and multiple agents connected to it. Users manage and control tests, and view test reports, through a web interface, while the controller distributes tests to one or more agents for execution. Users can configure scripts to run concurrently across multiple processes and threads; within a single thread, the test script is executed repeatedly, so many concurrent users can be simulated.
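The total simulated load is simply the product of the three levels described above. A trivial sketch (the figures are hypothetical, not nGrinder defaults):

```python
# nGrinder composes load from three levels: each agent runs several
# processes, each process runs several threads, and each thread loops
# the test script as one virtual user.
def total_virtual_users(agents, processes_per_agent, threads_per_process):
    return agents * processes_per_agent * threads_per_process

# e.g. 4 agents x 10 processes x 100 threads = 4000 virtual users
print(total_virtual_users(4, 10, 100))
```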
nGrinder tests are driven by Python test scripts. After the user writes a test script following certain rules, the controller distributes the script and any other required files to the agents, which execute it with Jython. During execution it collects run status, response times, the target server's resource usage, and so on, and saves this data to generate run reports for later viewing.
One of nGrinder's strengths is ease of use: it is easy to install and works out of the box, so testers can start test tasks quickly. Of course, performance tests for more complex scenarios do require some knowledge of Python.
Sample test Scenario:
http://grinder.sourceforge.net/faq.html#simulating-users
```python
#
# testRandomise.py
#
import random
import string

class TestRandomise:
    def __init__(self, filename):
        self._users = []
        infile = open(filename, "r")
        for line in infile.readlines():
            self._users.append(string.split(line, ','))
        infile.close()

    def getUserInfo(self):
        "Pick a random (user, password) from the list."
        return random.choice(self._users)

#
# Test script. Originally recorded by the TCPProxy.
#
from testRandomise import TestRandomise

tre = TestRandomise("users.txt")

class TestRunner:
    def __call__(self):
        # Get user for this run.
        (user, passwd) = tre.getUserInfo()
        # ...
        # Use the user details to log in.
        tests[2002].POST('https://host:443/securityservlet', (
            NVPair('functionname', 'Login'),
            NVPair('pagename', 'Login'),
            NVPair('ms_emailAddress', user),
            NVPair('ms_password', passwd),
        ))
```
Statistics Chart:
Locust
http://locust.io/
Locust is an open-source load testing tool. You define user behavior in Python code and can simulate millions of users.
Locust is a very simple-to-use, distributed load testing tool, used mainly to load-test websites (or other systems) and find out how many concurrent users a system can handle.
Locust is completely event-based, so a single machine can support thousands of concurrent users. Unlike many other event-driven applications, Locust does not use callbacks; instead it uses lightweight processes (greenlets) via gevent.
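The point about event-based concurrency without callbacks can be illustrated with stdlib coroutines. This is only an asyncio analogy of the idea, not Locust's actual gevent implementation: thousands of cooperative tasks share one thread, and each task reads as plain sequential code.

```python
import asyncio

async def virtual_user(user_id, results):
    # Stands in for waiting on an HTTP response; while one user waits,
    # the event loop runs the others -- no callbacks needed.
    await asyncio.sleep(0)
    results.append(user_id)

async def swarm(n):
    results = []
    await asyncio.gather(*(virtual_user(i, results) for i in range(n)))
    return results

# 2000 concurrent "users" on a single thread.
done = asyncio.run(swarm(2000))
print(len(done))
```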
Sample test Scenario:
http://docs.locust.io/en/latest/quickstart.html#example-locustfile-py
```python
from locust import HttpLocust, TaskSet

def login(l):
    l.client.post("/login", {"username": "ellen_key", "password": "education"})

def index(l):
    l.client.get("/")

def profile(l):
    l.client.get("/profile")

class UserBehavior(TaskSet):
    tasks = {index: 2, profile: 1}

    def on_start(self):
        login(self)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
```
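In the locustfile above, `tasks = {index: 2, profile: 1}` assigns weights: `index` is picked about twice as often as `profile`. The sketch below shows the weighting semantics only; it is an illustration, not Locust's internal scheduler.

```python
import random

def build_schedule(weighted_tasks):
    # Expand each task name according to its weight ...
    pool = []
    for task, weight in weighted_tasks.items():
        pool.extend([task] * weight)
    return pool

pool = build_schedule({"index": 2, "profile": 1})
print(sorted(pool))  # ['index', 'index', 'profile']
# ... then each simulated user repeatedly does random.choice(pool).
next_task = random.choice(pool)
```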
Statistics Chart:
Other tools not included in the comparison
The following were not compared because they lack either scripting ability or a CLI:
- Jmeter
- Apachebench (AB)
- Tsung
The Locust author's complaints about JMeter and Tsung:
http://my.oschina.net/u/1433482/blog/464092#OSC_h4_3
We studied existing solutions, such as Apache JMeter and Tsung, and none of them met our requirements.
JMeter is UI-driven and easy to get started with, but offers essentially no programmability. Moreover, JMeter is thread-based, which makes it almost impossible to simulate thousands of users.
Tsung is based on Erlang, can simulate thousands of users, and is easy to scale, but its XML-based DSL is weak at describing scenarios, and a lot of data processing is needed before you can understand the test results.
Requirements x tools comparison matrix:
(comparison table image not included)
Conclusion
Clearly, the preferred all-rounder is Gatling; its Akka actor concurrency model traces back to Erlang, the originator among concurrency-oriented languages.
If you want to extend the performance testing tool yourself, then Locust, a small but well-crafted tool, is worth considering.
nGrinder was open-sourced by Korea's LINE company, which has even opened a Chinese-language forum where Korean engineers answer Chinese developers' questions. But there are two problems: the official website is very slow, and the examples are incomplete fragments.
Readers can refer to the comparison above and pick for themselves.
Reprinted from: https://testerhome.com/topics/3003
Comparison of Web Service performance test Tools