Working on an API is challenging, and keeping the system stable and robust at peak times is one of the reasons we run a lot of stress tests at Mailgun.
Over the years we have tried a variety of approaches, from simple ApacheBench to more sophisticated custom test suites. This post describes a way to use Python for "quick and dirty" yet very flexible stress testing.
When writing HTTP clients in Python, we all love the Requests library, and it is what we recommend to our API users. Requests is very powerful, but it has one drawback: it is a blocking, one-thread-per-call design, which makes it difficult or impossible to use for quickly generating tens of thousands of requests.
Introducing treq on Twisted
To solve this problem we turned to treq (available on GitHub). Treq is an HTTP client library inspired by Requests, but it runs on Twisted and has Twisted's typical strengths: it is asynchronous and highly concurrent when dealing with network I/O.
Treq is not limited to stress testing: it is also a great tool for writing highly concurrent HTTP clients, such as web crawlers. Treq is elegant, easy to use, and powerful. Here is an example:
>>> from twisted.internet import reactor
>>> from treq import get
>>> def done(response):
...     print(response.code)
...     reactor.stop()
>>> get("http://www.github.com").addCallback(done)
>>> reactor.run()
200
A simple test script
Below is a simple script that uses treq to bombard a single URL with as many requests as possible.
#!/usr/bin/env python
from twisted.internet import epollreactor
epollreactor.install()

from twisted.internet import reactor, task
from twisted.web.client import HTTPConnectionPool
import treq

req_generated = 0
req_made = 0
req_done = 0

cooperator = task.Cooperator()
pool = HTTPConnectionPool(reactor)

def counter():
    '''This function gets called once a second and prints the progress at one
    second intervals.
    '''
    global req_generated, req_made, req_done
    print("Requests: {} generated; {} made; {} done".format(
        req_generated, req_made, req_done))
    # reset the counters and reschedule ourselves
    req_generated = req_made = req_done = 0
    reactor.callLater(1, counter)

def body_received(body):
    global req_done
    req_done += 1

def request_done(response):
    global req_made
    deferred = treq.json_content(response)
    req_made += 1
    deferred.addCallback(body_received)
    deferred.addErrback(lambda x: None)  # ignore errors
    return deferred

def request():
    deferred = treq.post('http://api.host/v2/loadtest/messages',
                         auth=('api', 'api-key'),
                         data={'from': 'Loadtest <test@example.com>',
                               'to': 'to@example.org',
                               'subject': "test"},
                         pool=pool)
    deferred.addCallback(request_done)
    return deferred

def requests_generator():
    global req_generated
    while True:
        deferred = request()
        req_generated += 1
        # do not yield the deferred here so the cooperator won't pause
        # until the response is received
        yield None

if __name__ == '__main__':
    # make the cooperator work on spawning requests
    cooperator.cooperate(requests_generator())
    # run the counter that will report the sending speed once a second
    reactor.callLater(1, counter)
    # run the reactor
    reactor.run()
Output:
Requests: 327 generated; 153 made; 153 done
Requests: 306 generated; 156 made; 156 done
Requests: 318 generated; 184 made; 154 done
The numbers in the "generated" column represent requests that have been prepared by the Twisted reactor but not yet sent. This script omits all error handling for the sake of simplicity. Adding timeout reporting to it is left as an exercise for the reader.
This script can be used as a starting point: you can expand and improve it with processing logic customized for your application. When you do, we suggest using collections.Counter to replace the ugly global variables. The script runs in a single thread; if you want to squeeze the maximum number of requests out of a single machine, techniques such as multiprocessing can be used.
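As a sketch of the collections.Counter suggestion (the `stats` name and the helper functions below are our own, not part of the script above), the three globals collapse into a single dictionary-like object:

```python
from collections import Counter

# One Counter replaces the req_generated/req_made/req_done globals.
stats = Counter()

def record(event):
    """Increment one of the 'generated'/'made'/'done' counters."""
    stats[event] += 1

def snapshot_and_reset():
    """Format the current counts and reset them, as counter() does
    once a second in the script above."""
    line = "Requests: {generated} generated; {made} made; {done} done".format(
        generated=stats["generated"], made=stats["made"], done=stats["done"])
    stats.clear()
    return line
```

Callbacks then call `record("done")` instead of declaring a global, and the periodic reporter simply prints `snapshot_and_reset()`.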
Happy stress testing!