Implementing hot-reload configuration in Golang


Today I continued optimizing the Bigpipe project; the core goal is to eliminate the traffic loss that occurs during a restart.

Background

Bigpipe is a message middleware whose purpose is to provide convenient asynchronous HTTP calls for PHP programs. Because PHP does not use a resident-process model, when a request to Bigpipe fails the caller can retry at most a few times before it must answer the user as quickly as possible. The availability of the Bigpipe service is therefore critical.

Bigpipe is written in Golang; it buffers traffic and data layer by layer with channels and processes data concurrently with goroutines. Because Bigpipe serves many business lines, its configuration file changes frequently, and every change requires restarting Bigpipe.

Thinking

In the initial version, Bigpipe provided a graceful-exit feature: before exiting, it first stops the external HTTP service, then drains the data remaining inside the process, and only then exits, so that no request it has already accepted is lost.

Graceful exit has a problem: it stops the external HTTP service so that no new traffic arrives, which is what lets it drain the remaining traffic buffered in memory. Because of this design, clients cannot reach Bigpipe for a period of time after the HTTP service stops, and the service is completely unavailable during that window.

The first idea was to deploy multiple Bigpipe instances behind a load balancer such as LVS or HAProxy, so that once one instance closed its HTTP port, traffic would automatically be forwarded to the others. The drawbacks: Bigpipe must be deployed redundantly, and LVS/HAProxy cannot guarantee that traffic switches to a healthy node instantaneously, so some traffic is still lost and clients must implement retry logic. In short, not a perfect solution.

Another idea was to keep a cluster of equivalent Bigpipe instances and put a self-developed lightweight proxy in front of them that supports retrying across instances. But this brings significant extra operational cost and still fails to face the essence of the problem; the further down that road, the more wrong it gets.

Scheme

Bigpipe has to support configuration hot reloading. This is not easy to implement, and I will explain where the difficulties lie.

First, the external HTTP service cannot stop while the new configuration is loading. So I decided that the HTTP module itself does not support reloading (its configuration is simple anyway: listen address, read/write timeouts, and so on) and always keeps serving.

The request-processing modules, however, do need to pick up the new configuration, and while they reload, the requests arriving over HTTP must be buffered to achieve zero traffic loss. So I redesigned the module structure and added a buffer layer between the HTTP interface layer and the business-processing layer, designed to hold traffic during a hot reload.

To simplify the design, this buffer layer is always present, whether or not the hot-reload feature is ever used.

Another important change: in earlier versions I kept the configuration in a global singleton and assumed it would never change once loaded. But under Golang's concurrent, multithreaded model, supporting hot reload means the configuration cannot remain a singleton; otherwise a module could be reading the singleton at the very moment it is replaced with new content, and the module would crash.

The usual design for configuration hot reloading is therefore: old modules use the old configuration, new modules use the new one. The configuration is no longer stored as a singleton; once a reload parses successfully, a copy is passed into each module for it to keep.

Once the new configuration file has been loaded into memory, what follows resembles a graceful exit: the HTTP module pauses forwarding traffic to the internal modules while still accepting external traffic and buffering it.

Next, each old module consumes its remaining traffic and finally destroys itself.

When all the old modules have exited, the new configuration is passed to each module, new module instances are started, the HTTP module resumes forwarding traffic to the internal modules, and the program runs on.

Completing these designs still does not solve the whole problem. The trickiest parts are the log and stats modules — the former writes logs, the latter keeps program counters — and both need hot reloading too; for example, operations may want to change the log output directory or the log level.

These two modules are special: they are called by all the other modules, and concurrently — the equivalent of swapping an aircraft's engine mid-flight, which is very hard. By design, log and stats should be destroyed and rebuilt after all the old modules have been destroyed. But the HTTP service module is never destroyed, and it keeps calling the log and stats libraries in real time, so how can these two modules be restarted?

Here I used the atomic package. Both the log and stats modules are singletons saved as pointers. To destroy and rebuild a singleton while the program keeps accessing it, the pointer must be swapped atomically, and Golang provides atomic operations on pointers:

```go
atomic.StorePointer
atomic.LoadPointer
```

With these two APIs, the whole switch becomes a single atomic operation on one pointer, effectively instantaneous.

Of course, log lines and stats counters produced by the HTTP module in the window between destruction and rebuild are lost, but that window is usually short enough to be negligible.

Taking the log library as an example, each log function first fetches the logger pointer atomically; if it is non-nil, it performs the actual operation, otherwise it does nothing:

```go
// singleton
var gLogger unsafe.Pointer = nil

func GetLogger() *Logger {
	return (*Logger)(atomic.LoadPointer(&gLogger))
}

func FATAL(format string, v ...interface{}) {
	if logger := GetLogger(); logger != nil {
		userLog := fmt.Sprintf(format, v...)
		logger.QueueLog(LOG_LEVEL_FATAL, &userLog)
	}
}

func ERROR(format string, v ...interface{}) {
	if logger := GetLogger(); logger != nil {
		userLog := fmt.Sprintf(format, v...)
		logger.QueueLog(LOG_LEVEL_ERROR, &userLog)
	}
}
```

This is what I ran into while implementing hot reload for Bigpipe; I hope it helps you design your own.
