DORA-RPC received a Dragon Boat Festival upgrade, and the master branch has also gained a number of features that are still in beta.
Several companies have been running the project in production for more than a year, usually using it as a prototype for their own adaptations. Recent upgrades bring large improvements: removing a uniqueness feature roughly doubled performance, a bug where individual server-side functions could be executed repeatedly has been fixed, and retrieval of asynchronous task results has been added. The GroupClient is also being reworked; its functionality will shortly be merged into the regular Client so that two separate clients are no longer needed.
There has never been an in-depth introduction before, so here is a brief description of the current features:
This open-source project re-encapsulates the Swoole extension to separate PHP front ends from back-end services. When a project grows large we need to split some functionality out as shared services, and exposing them as API services in this way keeps complex systems easy to maintain, makes complex projects easier to evaluate, and makes cross-project and cross-team calls clearer.
Framework design ideas:
- Provides only the most basic server and client, which can be integrated into any development framework with minimal effort.
- The response data structure is divided into three layers, each with its own msg and code: the first layer describes the client-side communication status, the second layer describes whether the API call was delivered and executed normally on the server, and the third layer automatically records any exception raised by the business code itself. This layering exists because RPC is inherently complex, and it makes it possible to judge the real state of the system at each stage (see the sketch after this list).
- To support retrieving asynchronous task results later, the protocol was upgraded. The server pushes the result of an asynchronous task back to the client, so if the client had already issued a new request and then called receive, it previously had no way to tell whether the returned payload belonged to that request; likewise, because the connection is long-lived, a previous request that failed without calling receive could leave its result to be picked up by the next request. To prevent both problems, a GUID is generated for every request; the GUID is used to collect asynchronous task results and to reject response data that belongs to a different request.
- Because of the nature of our business code, we cannot guarantee that it will never fail for some special reason, and many failures terminate the process. For this reason business code is executed in task processes rather than in worker processes, which also lets us implement a number of special features. The drawback is that many Swoole async features are not supported inside tasks.
- Service groups: in many cases the servers form a shared public pool, so a problem anywhere in the pool affects everyone, and it is also common to deploy different businesses on different servers. The framework therefore provides service grouping: a server declares which groups it belongs to, and the client uses the group to find the correct server configuration to connect to.
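As a concrete illustration of the three-layer structure, here is a minimal sketch of what such a response could look like; the field names and nesting below are assumptions for illustration, not the exact keys used by Dora-RPC.

```php
<?php
// Illustrative three-layer response shape (field names are assumptions).
$response = [
    'code' => 0,                 // layer 1: client/communication status (0 = OK)
    'msg'  => 'ok',
    'data' => [
        'code' => 0,             // layer 2: was the API delivered and executed on the server?
        'msg'  => 'ok',
        'data' => [
            'code' => 0,         // layer 3: did the business code itself throw?
            'msg'  => 'ok',
            'data' => ['user_id' => 42],   // actual business payload
        ],
    ],
];

// A caller has to check every layer before trusting the payload.
function isSuccessful(array $resp): bool
{
    return $resp['code'] === 0
        && $resp['data']['code'] === 0
        && $resp['data']['data']['code'] === 0;
}
```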
Task modes:
By the number of tasks issued, a call can be:
- Issuing multiple tasks concurrently
- Issuing a single task
Waiting for task results has several modes:
- Blocking: the current process waits for the task to be processed and receives the returned data.
- Asynchronous: the call does not wait for the task to be processed and only reports whether submission succeeded; nothing is returned after the server finishes running the task.
- Dispatch-asynchronous: the call does not wait for the server to process the task and returns success as soon as it is posted; after processing completes, the server pushes the result back to the client, and the client collects the results of all previously dispatched asynchronous tasks in a single call and hands them back to the current business process (see the GUID sketch below).
In this way we can multiply interface response speed and server utilization without resorting to complex multithreading.
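To make the dispatch-asynchronous mode more concrete, here is a small self-contained sketch of the GUID correlation idea described above; it is plain PHP for illustration only and does not use Dora-RPC's actual client API.

```php
<?php
// Every request carries a GUID, and any response whose GUID does not match a
// pending request is discarded instead of being mistaken for the current reply.

function makeGuid(): string
{
    // uniqid() plus random bytes is enough for request correlation.
    return uniqid('', true) . bin2hex(random_bytes(8));
}

$pending = [];                                   // guid => original request

// "Send" two asynchronous tasks.
foreach (['Order::create', 'Stock::reserve'] as $api) {
    $guid = makeGuid();
    $pending[$guid] = ['api' => $api];
    // $client->send(serialize(['guid' => $guid, 'api' => $api]));  // transport omitted
}

// "Receive" pushed results; a stale frame left over from an earlier request is dropped.
$incoming = [
    ['guid' => array_key_first($pending),        'result' => 'ok'],
    ['guid' => 'stale-guid-from-last-request',   'result' => 'ignored'],
];

$results = [];
foreach ($incoming as $frame) {
    if (isset($pending[$frame['guid']])) {       // only accept known GUIDs
        $results[$frame['guid']] = $frame['result'];
        unset($pending[$frame['guid']]);
    }
}
```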
Network communication:
- Clients can run under both PHP-FPM and the CLI while maintaining long-lived connections.
- The network time for each request/response round trip is roughly 0.002~0.004 seconds (depending on the internal network).
- The connection is kept open after a request has been processed, which reduces handshake time and the number of ports consumed between the client and the server.
- If the payload is large, gzip packet compression is supported; it can easily be swapped for another compression method if there is a special need (see the packing sketch after this list).
- Serialization uses PHP's built-in serialize; it can be replaced directly if necessary.
- Connecting and receiving results both default to a 3-second timeout, configurable in the const file. If a timeout occurs, the client automatically retries with a different server configuration up to the specified number of times and then returns a JSON description of the error. If needed, monitoring can be added inside the retry logic to track the health of each server.
- An IP that has already failed once within the current request is automatically masked for that period; once all configurations have been tried or the retry count is exceeded, the call returns failure. Under the CLI, if a later call fails again, all previously masked configurations are cleared and retried (see the retry sketch after this list).
- In earlier tests with a single-threaded RPC client on a virtual machine, round-trip QPS was roughly 200~500 per second, but because each request can dispatch many tasks, a single application server can handle around 2k QPS.
- The current communication architecture does not assume a reverse proxy in front of the servers, because connecting through a reverse proxy costs extra time, and although a reverse proxy is high-performance it is itself a single point of failure; for a distributed service, direct connections are more convenient.
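As an illustration of the compression and serialization points above, here is a minimal packing/unpacking sketch using PHP's built-in serialize() and zlib; the one-byte flag framing and the 2 KB threshold are assumptions, not Dora-RPC's actual wire format.

```php
<?php
// Assumed frame layout: 1 flag byte (0x00 plain / 0x01 gzip) followed by the body.

const COMPRESS_THRESHOLD = 2048;   // assumed: only compress larger payloads

function packPayload(array $data): string
{
    $body = serialize($data);
    if (strlen($body) > COMPRESS_THRESHOLD) {
        return "\x01" . gzcompress($body);   // compressed body
    }
    return "\x00" . $body;                   // plain body
}

function unpackPayload(string $frame): array
{
    $flag = $frame[0];
    $body = substr($frame, 1);
    if ($flag === "\x01") {
        $body = gzuncompress($body);
    }
    return unserialize($body);
}

// Swapping the codec (e.g. json_encode/json_decode or another compressor)
// only requires changing these two functions.
```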
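And here is a sketch of the timeout/retry behaviour with temporary masking of failed IPs; the data structure, retry limit, and mask window below are assumptions for illustration rather than the framework's real retry code.

```php
<?php
// Retry across server configurations, skipping servers that failed recently.

const RETRY_LIMIT  = 3;
const MASK_SECONDS = 10;           // assumed mask window

$failedAt = [];                    // "ip:port" => timestamp of last failure

function callWithRetry(array $configs, callable $send, array &$failedAt)
{
    $tried = 0;
    foreach ($configs as $cfg) {
        if ($tried >= RETRY_LIMIT) {
            break;
        }
        $key = $cfg['ip'] . ':' . $cfg['port'];
        // Skip servers that failed recently and are still masked.
        if (isset($failedAt[$key]) && time() - $failedAt[$key] < MASK_SECONDS) {
            continue;
        }
        $tried++;
        try {
            return $send($cfg);            // success: return the response
        } catch (\Throwable $e) {
            $failedAt[$key] = time();      // mask this server for a while
        }
    }
    // All configurations tried or retry limit exceeded.
    return json_encode(['code' => 1, 'msg' => 'all servers failed or masked']);
}

// Usage (the $send callable wraps the actual transport, omitted here):
// $resp = callWithRetry($configs, fn ($cfg) => /* connect + send + receive */ null, $failedAt);
```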
Service discovery:
- Normally, adding servers means updating the client configuration, which has obvious shortcomings: when a server fails we may not notice it for a while, and even after noticing we still have to edit the configuration by hand or remove the node in a configuration center. With service discovery, servers can be removed and added automatically, and with good control logic the whole process can eventually be fully automatic.
- The framework ships with the simplest possible service discovery implementation, using Redis as storage to record the IP, port, and group of every service.
- After an application server starts and is given the list of Redis instances used for discovery, it automatically reports its IP, port, and group to those Redis instances for registration and periodically refreshes its expiration time. Nodes that have not reported within the timeout window are removed automatically.
- The client periodically fetches the latest configuration from the multiple Redis instances and updates the local configuration file (a registration sketch follows).
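A minimal sketch of what TTL-based registration in Redis could look like using the phpredis extension; the key layout and the 30-second TTL are assumptions, not Dora-RPC's actual storage format.

```php
<?php
// Server side: register this node and refresh it periodically so that a dead
// node simply expires and disappears from the registry.

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$node = ['ip' => '10.0.0.12', 'port' => 9567, 'group' => ['order', 'default']];
$key  = 'rpc:node:' . $node['ip'] . ':' . $node['port'];

// Register (or refresh) with a 30-second expiry; call this from a timer.
$redis->setEx($key, 30, json_encode($node));

// Client side: scan registered nodes and rebuild the local configuration.
// (keys() is fine for a sketch; a set of node keys would scale better.)
$config = [];
foreach ($redis->keys('rpc:node:*') as $nodeKey) {
    $info = json_decode($redis->get($nodeKey), true);
    if ($info !== null) {
        $config[] = $info;
    }
}
// $config can now be written to the local configuration file.
```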
Future support:
The next step is to support distributed logging, configuration synchronization, and so on.
Finally, DORA-RPC is fully open source; the intention is to give users in the Swoole community a reference for building excellent RPC and microservice systems.
At the same time, I hope you will take part in the development of this project; all meaningful contributors are recorded in the wiki.
Project address: https://github.com/xcl3721/Dora-RPC
Recent event: http://www.huodongxing.com/event/5337738177600 (sharing hands-on SOA experience, free tickets)