"Translation" uses go to process millions of data requests per minute


The original author is Marcio Castilho, chief architect at Malwarebytes. The original post is at http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/

Objective

Malwarebytes is experiencing significant growth, and in the year or so since I joined this Silicon Valley company, one of my primary responsibilities has been to architect and develop systems to power this fast-growing security company and all the infrastructure its products need to support millions of people using them every day. I have worked in the anti-virus and anti-malware industry for over 12 years at several different companies, and I know how complex these systems ultimately become because of the massive amounts of data they process daily.

Interestingly, over the last nine years or so, almost all of the backend web development I have been involved in has been done in Ruby on Rails. Don't get me wrong, I love Ruby on Rails and I believe it's an amazing environment, but after you have been thinking and designing systems the Ruby way for a while, you forget how efficient and simple your software architecture could have been if you could leverage multithreading, parallelization, fast execution, and small memory overhead. I was a C, C++, Delphi, and C# developer for many years, and I was just starting to realize how much less complex things could be with the right tool for the job.

As the chief architect, I am not a fan of the language and framework wars that rage across the internet. I believe efficiency, productivity, and code maintainability depend mostly on how simply you architect your solution.

Problem

While working on a piece of our anonymous telemetry and analytics system, our goal was to be able to handle a massive number of POST requests from millions of endpoints. The web handler would receive a JSON document containing a collection of many payloads, which needed to be written to Amazon S3 so that our map-reduce systems could operate on the data later.

Traditionally, we would look into creating a worker-tier architecture, utilizing things such as:
- Sidekiq
- Resque
- DelayedJob
- Elastic Beanstalk Worker Tier
- RabbitMQ
- and so on

We would then set up two different clusters, one for the web front end and one for the workers, so we could scale up the amount of background work our backend can handle.

But from the start, our team knew we should do this in Go, because during the discussion phase we had already seen Go's potential for handling very large traffic systems. I had been using Go for over two years and had put a few systems into production with it, but none that had seen anywhere near this kind of load.

We started by creating a few structures to define the web request payload we would receive through the POST calls, along with a method on the struct to upload it to our S3 bucket.

type PayloadCollection struct {
	WindowsVersion string    `json:"version"`
	Token          string    `json:"token"`
	Payloads       []Payload `json:"data"`
}

type Payload struct {
	// [redacted]
}

func (p *Payload) UploadToS3() error {
	// the storageFolder method ensures that there are no name collisions in
	// case we get the same timestamp in the key name
	storage_path := fmt.Sprintf("%v/%v", p.storageFolder, time.Now().UnixNano())

	bucket := S3Bucket

	b := new(bytes.Buffer)
	encodeErr := json.NewEncoder(b).Encode(p)
	if encodeErr != nil {
		return encodeErr
	}

	// everything we post to the S3 bucket should be marked 'private';
	// S3Bucket and PutReader come from the post's S3 client library
	// (a goamz-style package)
	var acl = s3.Private
	var contentType = "application/octet-stream"

	return bucket.PutReader(storage_path, b, int64(b.Len()), contentType, acl, s3.Options{})
}

A naive approach with goroutines

At first we tried a very naive implementation of the POST handler, just attempting to parallelize the job processing with a simple goroutine:

func payloadHandler(w http.ResponseWriter, r *http.Request) {

	if r.Method != "POST" {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}

	// read the body into a string for JSON decoding
	var content = &PayloadCollection{}
	err := json.NewDecoder(io.LimitReader(r.Body, MaxLength)).Decode(&content)
	if err != nil {
		w.Header().Set("Content-Type", "application/json; charset=UTF-8")
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// go through each payload and queue items individually to be posted to S3
	for _, payload := range content.Payloads {
		go payload.UploadToS3() // <----- DON'T DO THIS
	}

	w.WriteHeader(http.StatusOK)
}

For moderate loads this could work well enough, but it quickly proved not to work at large scale. We were expecting a lot of requests, but nothing in the order of magnitude we started seeing when we deployed the first version to production; we completely underestimated the amount of traffic.

The approach above is bad in several ways. There is no way to control how many goroutines we are spawning, and since we were receiving 1 million POST requests per minute, this code crashed and burned very quickly.
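
As an aside, a common way to put a ceiling on goroutine counts (not the approach this post ends up taking) is a counting semaphore built from a buffered channel. Here is a minimal sketch of what that could look like inside payloadHandler; MaxGoroutines is a hypothetical constant, not something from the original code:

// hypothetical cap on concurrent uploads; not part of the original post
const MaxGoroutines = 50

// a buffered channel used as a counting semaphore
var sem = make(chan struct{}, MaxGoroutines)

// inside payloadHandler, the upload loop could then be bounded like this:
for _, payload := range content.Payloads {
	sem <- struct{}{} // blocks once MaxGoroutines uploads are in flight
	go func(p Payload) {
		defer func() { <-sem }() // free the slot when the upload finishes
		p.UploadToS3()
	}(payload)
}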

Trying to improve

We needed to find a different way. From the beginning we had been discussing how to keep the lifetime of the request handler very short and spawn the processing in the background. Of course, that is what you must do in the Ruby on Rails world, otherwise you block all the available worker web processors, whether you are using Puma, Unicorn, or Passenger (and let's not get into the JRuby discussion). Then we would have leveraged common solutions such as Resque, Sidekiq, SQS, and so on; the list goes on, since there are many ways to achieve this.

So the second iteration was to create a buffered channel where we could queue up some jobs and upload them to S3. Since we could control the maximum number of items in the queue, and we had plenty of RAM available to queue the jobs in memory, we thought it would be fine to simply buffer jobs in the channel queue.

var Queue chan Payload

func init() {
	Queue = make(chan Payload, MAX_QUEUE)
}

func payloadHandler(w http.ResponseWriter, r *http.Request) {
	//...
	// go through each payload and queue items individually to be posted to S3
	for _, payload := range content.Payloads {
		Queue <- payload
	}
	//...
}

To dequeue jobs from the channel and process them, we used something similar to this:

func StartProcessor() {
	for {
		select {
		case job := <-Queue:
			job.UploadToS3() // <-- STILL NOT GOOD
		}
	}
}

To be honest, I have no idea what we were thinking; this must have been a late night fueled by too much Red Bull. The approach didn't buy us anything: we had traded flawed concurrency for a buffered queue that simply postponed the original problem (the system still goes down when millions of requests arrive). Our synchronous processor could only upload one payload to S3 at a time, and since requests came in far faster than the single processor could upload them, the buffered channel quickly reached its limit and blocked the request handlers from queuing more items.

We were simply dodging the problem, and it started a countdown to the death of our system: our latency rates kept climbing at a constant rate in the minutes after we deployed this flawed version.
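
To see why the buffered channel only postpones the failure, consider a toy, self-contained sketch (the buffer size and timings are illustrative, not our production numbers): a fast producer stands in for the request handlers, and a single slow consumer stands in for StartProcessor. Once the buffer fills, every send blocks for roughly as long as one simulated upload.

package main

import (
	"fmt"
	"time"
)

func main() {
	queue := make(chan int, 5) // small buffer standing in for MAX_QUEUE

	// a single slow consumer, like the one StartProcessor loop
	go func() {
		for job := range queue {
			time.Sleep(100 * time.Millisecond) // stand-in for an S3 upload
			fmt.Println("uploaded", job)
		}
	}()

	// a fast producer, like the request handlers
	for i := 0; i < 20; i++ {
		start := time.Now()
		queue <- i // blocks as soon as the buffer is full
		fmt.Printf("queued %d after %v\n", i, time.Since(start))
	}
	close(queue)
	time.Sleep(3 * time.Second) // crude wait so the consumer can drain
}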

A better solution

We decided to use a common pattern with Go channels: create a two-tier channel system, one channel to queue the jobs and another to control how many workers operate on the job queue concurrently.

The idea was to parallelize the uploads to S3 at a sustainable rate, one that would neither cripple the machine nor start generating connection errors against S3. So we opted to create a job/worker pattern. For those familiar with Java, C#, and the like, think of this as the Golang way of implementing a worker thread pool using channels instead.

var (
	MaxWorker = os.Getenv("MAX_WORKERS")
	MaxQueue  = os.Getenv("MAX_QUEUE")
)

// Job represents the job to be run
type Job struct {
	Payload Payload
}

// A buffered channel that we can send work requests on.
var JobQueue chan Job

// Worker represents the worker that executes the job
type Worker struct {
	WorkerPool chan chan Job
	JobChannel chan Job
	quit       chan bool
}

func NewWorker(workerPool chan chan Job) Worker {
	return Worker{
		WorkerPool: workerPool,
		JobChannel: make(chan Job),
		quit:       make(chan bool),
	}
}

// Start method starts the run loop for the worker, listening for a quit channel in
// case we need to stop it
func (w Worker) Start() {
	go func() {
		for {
			// register the current worker into the worker queue.
			w.WorkerPool <- w.JobChannel

			select {
			case job := <-w.JobChannel:
				// we have received a work request.
				if err := job.Payload.UploadToS3(); err != nil {
					// log.Errorf assumes a leveled logger (e.g. logrus) imported as log
					log.Errorf("Error uploading to S3: %s", err.Error())
				}

			case <-w.quit:
				// we have received a signal to stop
				return
			}
		}
	}()
}

// Stop signals the worker to stop listening for work requests.
func (w Worker) Stop() {
	go func() {
		w.quit <- true
	}()
}

We modified our web request handler to create an instance of the Job struct with the payload embedded in it, and to send it into the JobQueue channel for the workers to pick up.

func payloadHandler(w http.ResponseWriter, r *http.Request) {

	if r.Method != "POST" {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}

	// read the body into a string for JSON decoding
	var content = &PayloadCollection{}
	err := json.NewDecoder(io.LimitReader(r.Body, MaxLength)).Decode(&content)
	if err != nil {
		w.Header().Set("Content-Type", "application/json; charset=UTF-8")
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// go through each payload and queue items individually to be posted to S3
	for _, payload := range content.Payloads {

		// let's create a job with the payload
		work := Job{Payload: payload}

		// push the work onto the queue.
		JobQueue <- work
	}

	w.WriteHeader(http.StatusOK)
}

When the web server starts, we create a dispatcher and call Run() to create the pool of workers and start listening for jobs appearing in the JobQueue channel.

dispatcher := NewDispatcher(MaxWorker)
dispatcher.Run()

Below is the implementation of the dispatcher:

type Dispatcher struct {
	// A pool of worker channels that are registered with the dispatcher
	WorkerPool chan chan Job
	maxWorkers int // stored so Run() knows how many workers to start
}

func NewDispatcher(maxWorkers int) *Dispatcher {
	pool := make(chan chan Job, maxWorkers)
	return &Dispatcher{WorkerPool: pool, maxWorkers: maxWorkers}
}

func (d *Dispatcher) Run() {
	// starting n number of workers
	for i := 0; i < d.maxWorkers; i++ {
		worker := NewWorker(d.WorkerPool)
		worker.Start()
	}

	go d.dispatch()
}

func (d *Dispatcher) dispatch() {
	for {
		select {
		case job := <-JobQueue:
			// a job request has been received
			go func(job Job) {
				// try to obtain a worker job channel that is available;
				// this will block until a worker is idle
				jobChannel := <-d.WorkerPool

				// dispatch the job to the worker job channel
				jobChannel <- job
			}(job)
		}
	}
}
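
The original post doesn't show how these pieces get wired into the HTTP server itself. A minimal sketch of what that startup could look like follows; the route name, port, queue capacity, and worker count below are all illustrative assumptions, not values from the original:

func main() {
	// initialize the global job queue; the literal capacity here is
	// illustrative, the post reads MAX_QUEUE from the environment instead
	JobQueue = make(chan Job, 100)

	// start the worker pool; 8 is an illustrative value, the post reads
	// MAX_WORKERS from the environment
	dispatcher := NewDispatcher(8)
	dispatcher.Run()

	// register the handler and start serving; the route and port are
	// assumptions, the post doesn't show them
	http.HandleFunc("/payload", payloadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}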

Note that we provide the maximum number of workers to be instantiated and registered in the worker pool. Since we used Amazon Elastic Beanstalk for this project with a dockerized Go environment, and we always try to follow the 12-factor methodology when configuring systems for production, we read these values from environment variables. That way we can control both the number of workers and the maximum size of the job queue, and quickly tweak these values without redeploying the cluster.

var (    MaxWorker = os.Getenv("MAX_WORKERS")    MaxQueue  = os.Getenv("MAX_QUEUE"))
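
One wrinkle worth noting: os.Getenv returns strings, while NewDispatcher(maxWorkers int) expects an int, so in practice these values have to be parsed before use. A minimal sketch using strconv; the helper name and fallback defaults are assumptions, not from the original code:

// parse an integer environment variable, falling back to a default when
// the variable is unset or malformed (the defaults here are assumptions)
func getEnvInt(name string, fallback int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return fallback
}

var (
	MaxWorker = getEnvInt("MAX_WORKERS", 100)
	MaxQueue  = getEnvInt("MAX_QUEUE", 100)
)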

Immediate results

Immediately after we deployed this version, we saw our latency drop to insignificant numbers and our capacity to handle requests rise dramatically.

A few minutes after our Elastic Load Balancers were fully warmed up, we saw our Elastic Beanstalk application serving close to 1 million requests per minute. We usually have a few hours during the morning when traffic spikes to over a million requests per minute.

As soon as we deployed the new code, the number of servers we needed dropped from 100 down to about 20.

After we properly configured the cluster and the auto-scaling settings, we were able to lower that even further, to only 4 EC2 c4.large instances (these are AWS server instances), with auto-scaling set to spawn a new instance if CPU load stays above 90% for 5 sustained minutes.

Conclusion

In my book, simplicity always wins. We could have designed a complex system with many queues, background workers, and complicated deployments, but instead we decided to leverage the power of Elastic Beanstalk's auto-scaling and the simple, efficient concurrency model that Golang offers out of the box.

It's not every day that a cluster of only 4 machines, each probably less powerful than my MacBook Pro, handles 1 million POST requests per minute while writing to an Amazon S3 bucket.

There is always the right tool for the job. Sometimes, when your Ruby on Rails system needs a very powerful web handler, it pays to look a little outside the Ruby ecosystem for simpler yet more powerful alternatives.
