Using Node.js to Develop a Reverse Proxy Server for the memcache Protocol


Memcache is a common key-value cache solution, and its protocol is also used by the NoSQL database Tokyo Tyrant.

In real projects, for load balancing and other reasons, clients written in PHP, Java, and so on need to access several memcache instances and map each request, by its key, to a specific instance. That means configuring multiple IP addresses on every client and implementing the mapping algorithm there, which is hard to manage and maintain. Having recently learned Node.js, I decided to use it to build a reverse proxy that speaks the same protocol as memcache to PHP, Java, and other clients, and forwards each client request to one of several backend memcache instances based on the request key.
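The key-to-instance mapping can be as simple as hashing the key and taking it modulo the number of backends. A minimal sketch (the backend list and hash function here are illustrative assumptions, not taken from the original code):

```javascript
// Illustrative backend list; a real deployment would load this from config.
var backends = [
    { host: '10.0.0.1', port: 11211 },
    { host: '10.0.0.2', port: 11211 },
    { host: '10.0.0.3', port: 11211 }
];

// Simple string hash (djb2); any stable hash works here.
function hash_key(key) {
    var h = 5381;
    for (var i = 0; i < key.length; i++) {
        h = ((h * 33) + key.charCodeAt(i)) >>> 0;  // keep it in uint32 range
    }
    return h;
}

// Map a request key to one backend memcache instance.
function pick_backend(key) {
    return backends[hash_key(key) % backends.length];
}
```

Note that plain modulo hashing remaps most keys whenever the backend list changes; consistent hashing avoids that, at the cost of a slightly more involved implementation.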

Node is built on V8 and libev. Its core idea is to use a single thread plus an event loop to implement an asynchronous I/O server. This model is similar to nginx and is faster than the traditional multi-process model (such as Apache's prefork mode) or multi-thread model (such as Tomcat and other app servers).

There are many tricky details to consider when implementing a purely asynchronous I/O server, and debugging is hard when things go wrong. Node frees you from tedious memory management and other low-level work, so you can focus on your core model and application logic.

Writing a socket server with Node is very easy. The following code implements a simple echo server:

 
 
```javascript
var net = require('net');

var server = net.createServer(function (socket) {
    socket.on('data', function (data) {
        // here `this` is the socket
        this.write(data);
    });
}).listen(port_number);
```

Since I want to dispatch memcache requests by their key, I need to parse the received data, use the key to decide which memcache instance to use, and write the request to that memcache connection. When the memcache connection receives a response, it writes the data back to the client socket, completing the basic flow of the proxy:

 
 
```javascript
socket.on('data', function (data) {
    var request = mk_request(data);
    var mc = create_memcache_conn(request.key);
    mc.write(request);
    mc.on('data', function (data) {
        socket.write(data);
    });
});
```

It is worth noting that because Node reads from a TCP byte stream asynchronously, a single 'data' event may carry less than one request, more than one request, or several requests at once. The leftover bytes must be saved and prepended to the data received next time so that requests are parsed correctly. The program is modified to the following structure:

 
 
```javascript
function process_request(socket, request) {
    var mc = create_memcache_conn(request.key);
    mc.write(request);
    mc.on('data', function (data) {
        socket.write(data);
    });
}

socket.on('data', function (data) {
    // merge the bytes left over from the previous event with the new data
    data = Buffer.concat([this.remain_data || new Buffer(0), data]);
    var request;
    do {
        // mk_request returns false if the buffer does not yet hold a complete request
        request = mk_request(data);
        // keep everything if no complete request, otherwise keep the remainder
        data = data.slice(request === false ? 0 : request.length);
        if (request) {
            process_request(this, request);
        }
    } while (request);
    this.remain_data = data;
});
```

However, tests still showed problems. The client may send several requests to the server at once; when the server handles them over multiple memcache connections and writes the results back in each connection's 'data' handler, there is no guarantee that responses are written back in the same order as the requests, which corrupts the protocol. A queue is therefore needed to keep the write-back order consistent:

 
 
```javascript
function process_request(socket, request) {
    var mc = create_memcache_conn(request.key);
    mc.write(request);
    // queue the pending request on the client socket to preserve response order
    socket.queue.push({ req: request, res: false });
    mc.on('data', function (data) {
        // receiving memcache responses works like receiving client requests:
        // a single 'data' event may carry only part of a response
        data = Buffer.concat([this.remain_data || new Buffer(0), data]);
        // mk_response returns false if the buffer does not yet hold a complete response
        var response = mk_response(data);
        this.remain_data = data.slice(response === false ? 0 : response.length);
        if (response) {
            // attach the response to its pending entry in the queue
            for (var i = 0; i < socket.queue.length; i++) {
                if (socket.queue[i].req === request) {
                    socket.queue[i].res = response;
                    break;
                }
            }
            // write back responses from the head of the queue, strictly in order
            while (socket.queue.length > 0) {
                if (socket.queue[0].res) {
                    socket.write(socket.queue.shift().res.buffer);
                } else {
                    break;
                }
            }
            this.removeListener('data', arguments.callee);
        }
    });
}
```

In addition, the mk_request and mk_response functions need to be implemented according to the memcache protocol. For my implementation I referred to a Chinese translation of the memcache protocol (http://www.ccvita.com/306.html); the protocol is fairly simple.
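As a rough illustration of what mk_request has to do (a simplified sketch covering only `get` and the storage commands, not the author's actual implementation), the memcache text protocol puts the command and key on one CRLF-terminated line, with storage commands followed by a data block whose size is given on the command line:

```javascript
// Parse one memcache text-protocol request from the front of buf.
// Returns false if buf does not yet contain a complete request, otherwise
// { key, length } where length is the number of bytes the request occupies.
function mk_request(buf) {
    var text = buf.toString('binary');
    var eol = text.indexOf('\r\n');
    if (eol === -1) return false;              // command line not complete yet

    var parts = text.slice(0, eol).split(' ');
    var cmd = parts[0];

    if (cmd === 'get' || cmd === 'gets') {
        // e.g. "get mykey\r\n"
        return { key: parts[1], length: eol + 2 };
    }
    if (cmd === 'set' || cmd === 'add' || cmd === 'replace') {
        // e.g. "set mykey <flags> <exptime> <bytes>\r\n<data>\r\n"
        var bytes = parseInt(parts[4], 10);
        var total = eol + 2 + bytes + 2;       // command line + data block + "\r\n"
        if (buf.length < total) return false;  // data block not complete yet
        return { key: parts[1], length: total };
    }
    return false;                              // unsupported command in this sketch
}
```

A real implementation also has to handle multi-key `get`, the other commands (`delete`, `incr`, and so on), and malformed input.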

In the final implementation I also added a memcache connection pool, because establishing a memcache connection takes a relatively long time; the memcache documentation recommends keeping persistent connections to the server to improve efficiency.

After adding the connection pool I ran a simple stress test. On an ordinary laptop, with dozens of concurrent client processes, the proxy handled on the order of ten thousand requests per second without problems.

The complete code is on my GitHub: https://github.com/wwwppp0801/nodeproxy

References:

Memcache protocol (http://www.ccvita.com/306)

Node.js documentation: http://nodejs.org/docs/v0.4.7/api

Original article: http://blog.webshuo.com/2011/05/13/%E7%94%A8node-js%E5%BC%80%E5%8F%91memcache%E5%8D%8F%E8%AE%AE%E7%9A%84%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86%E6%9C%8D%E5%8A%A1%E5%99%A8/
