Implementing a Simple Server-Side Push Scheme


Originally from: http://blog.csdn.net/jiao_fuyou/article/details/17090355

Client-server interaction comes in two flavors, pull and push: a client pull is usually implemented as polling, while server push is generally implemented as Comet, and the most popular Comet technique today is long polling.

Note: If you are unfamiliar with these terms, see the introductory article on how browsers and servers keep in sync.

Let's look at polling first. It works roughly like this:

[Figure: Polling]

Because the server never proactively tells the client whether there is new data, polling has poor real-time behavior. The problem can be mitigated by polling more frequently, but the cost is considerable: the load stays high and bandwidth is stretched thin.
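To make that cost concrete, here is a minimal sketch of plain polling (in Python; the fake server, the period of 5, and all the names here are illustrative assumptions standing in for a real HTTP endpoint, not part of the original scheme):

```python
import itertools
import time

def make_fake_server(period=5):
    """Stand-in for the real HTTP endpoint (hypothetical): returns new data
    only once every `period` requests, mimicking a mostly idle feed."""
    counter = itertools.count(1)
    def fetch():
        return "new message" if next(counter) % period == 0 else None
    return fetch

def poll(fetch, interval=0.01, rounds=10):
    """Plain polling: ask on a fixed schedule whether anything changed.
    Most requests come back empty, which is exactly the load/bandwidth cost."""
    received, wasted = [], 0
    for _ in range(rounds):
        data = fetch()
        if data is None:
            wasted += 1          # empty round trip: pure overhead
        else:
            received.append(data)
        time.sleep(interval)
    return received, wasted

messages, empty_round_trips = poll(make_fake_server())
print(len(messages), empty_round_trips)   # 2 messages delivered, 8 wasted requests
```

Raising the frequency (a smaller `interval`) improves latency but only increases the ratio of wasted requests.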

Next, long polling. Implemented with the traditional LAMP stack, it looks something like this:

[Figure: Long Polling]

Instead of hitting the server frequently, the client opens a long-lived connection to the server; the server polls the database to check for new data, and as soon as new data appears it responds to the client and the interaction ends. The client processes the new data, then opens another long-lived connection, and the cycle repeats.
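The same cycle can be sketched as follows (again in Python, with an in-process list standing in for the database and HTTP layer; `long_poll_request`, `long_poll_client`, and the timing constants are all illustrative assumptions):

```python
import time

def long_poll_request(timeout, events):
    """Stand-in for one long-poll HTTP request (hypothetical): the server
    holds the connection open until new data arrives or `timeout` expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if events:
            return events.pop(0)   # server responds as soon as data appears
        time.sleep(0.001)          # on the real server, this is a DB poll
    return None                    # timed out: respond empty, client retries

def long_poll_client(events, timeout=0.05, rounds=3):
    """The client loop: process the response, then immediately reconnect."""
    received = []
    for _ in range(rounds):
        data = long_poll_request(timeout, events)
        if data is not None:
            received.append(data)
    return received

print(long_poll_client(["a", "b"]))   # ['a', 'b']: third request times out empty
```

Note where the polling went: the client no longer hammers the server, but inside `long_poll_request` the server is now hammering the database.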

In the long-polling scheme above, we have solved the load and bandwidth problems caused by client-side polling, but polling still happens on the server side, and the resulting pressure on the database is easy to imagine. We could relieve it with techniques such as master-slave replication or sharding, but those are only palliatives.

Our goal is a simple server-side push scheme, but simple does not mean simplistic: polling the database is unacceptable, so let's see how to avoid it. Here we abandon the traditional LAMP stack and implement the scheme with Nginx and Lua instead.

[Figure: Modified Long Polling]

The main idea is this: Nginx holds the long-lived connection in Lua code. When the database has new data, it proactively notifies Nginx, which stores a marker (such as an auto-incrementing integer ID) in Nginx shared memory. Nginx then polls not the database but its own local shared memory, comparing markers to decide whether there is a new message and, if so, responding to the client.

Note: For kernel-parameter tuning when the server maintains a large number of long-lived connections, see: an attempt at 2 million long HTTP connections, and its tuning notes.

First, let's write a bit of code to implement the polling (the database-query logic is omitted):

lua_shared_dict config 1m;

server {
    location /push {
        content_by_lua '
            local id = 0;
            local ttl = 100;
            local now = ngx.time();
            local config = ngx.shared.config;

            if not config:get("id") then
                config:set("id", "0");
            end

            while id >= tonumber(config:get("id")) do
                local random = math.random(ttl - 10, ttl + 10);

                if ngx.time() - now > random then
                    ngx.say("NO");
                    ngx.exit(ngx.HTTP_OK);
                end

                ngx.sleep(1);
            end

            ngx.say("YES");
            ngx.exit(ngx.HTTP_OK);
        ';
    }

    ...
}

Note: Because the server cannot tell when a client has disconnected, a timeout mechanism is introduced in the code; the timeout is randomized, presumably so that the held connections do not all expire at the same moment.

Second, we need some groundwork so that we can read and write Nginx shared memory:

lua_shared_dict config 1m;

server {
    location /config {
        content_by_lua '
            local config = ngx.shared.config;

            if ngx.var.request_method == "GET" then
                local field = ngx.var.arg_field;

                if not field then
                    ngx.exit(ngx.HTTP_BAD_REQUEST);
                end

                local content = config:get(field);

                if not content then
                    ngx.exit(ngx.HTTP_BAD_REQUEST);
                end

                ngx.say(content);
                ngx.exit(ngx.HTTP_OK);
            end

            if ngx.var.request_method == "POST" then
                ngx.req.read_body();

                local args = ngx.req.get_post_args();

                for field, value in pairs(args) do
                    if type(value) ~= "table" then
                        config:set(field, value);
                    end
                end

                ngx.say("OK");
                ngx.exit(ngx.HTTP_OK);
            end
        ';
    }

    ...
}

To write to Nginx shared memory, you can do this:

shell> curl -d "id=123" http://<HOST>/config

To read from Nginx shared memory, you can do this:

shell> curl http://<HOST>/config?field=id

Note: In practice you should add access-control logic, for example allowing only whitelisted IP addresses to use this endpoint.

When the database has new data, it can write to Nginx shared memory via a trigger, although it is usually more elegant to have the application layer do the write, using the observer pattern.
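The observer-pattern variant can be sketched like this (Python; `MessageStore`, `notify_nginx`, and the recording list are all hypothetical names for illustration, and the real observer would POST the new ID to the /config location above):

```python
class MessageStore:
    """Toy stand-in for the application's write path: each successful save
    notifies every subscribed observer (the observer pattern)."""
    def __init__(self):
        self._observers = []
        self._next_id = 0

    def subscribe(self, fn):
        self._observers.append(fn)

    def save(self, message):
        self._next_id += 1            # plays the role of the auto-increment ID
        for fn in self._observers:
            fn(self._next_id)
        return self._next_id

notified = []

def notify_nginx(new_id):
    """In production this would write Nginx shared memory over HTTP, e.g.
    POST id=<new_id> to http://<HOST>/config; recorded locally here so the
    sketch stays self-contained."""
    notified.append(new_id)

store = MessageStore()
store.subscribe(notify_nginx)
store.save("hello")
store.save("world")
print(notified)   # [1, 2]
```

The /push handlers then see the updated "id" in shared memory on their next one-second wakeup and respond to their waiting clients.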

With this in place, the database finally gets to be the master: the system still polls, but it has gone from polling someone else (the database) to polling itself (local shared memory), and the efficiency is not remotely comparable. Accordingly, we can raise the polling frequency without creating much pressure, fundamentally improving the user experience.

Another interesting server-push approach comes to mind, though it may be a digression: if the database is Redis, we can use its BLPOP command to implement the push, and then even the sleep becomes unnecessary. One thing to note, however: once BLPOP is used, the connection between Nginx and Redis must be held open. From Redis's point of view Nginx is a client, and a client has a limited number of local ports, which means a single Nginx box can hold only sixty-odd thousand such connections (see net.ipv4.ip_local_port_range), which is a bit tight.
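The appeal of BLPOP is that the consumer blocks until data arrives instead of sleeping and re-checking. A minimal simulation of that semantics (Python; `queue.Queue.get` stands in for Redis BLPOP, and the sentinel-based shutdown is an assumption of this sketch — with the redis-py client the call would be something like `r.blpop("channel", timeout=0)`):

```python
import queue
import threading

def blocking_pop_consumer(q, results, timeout=1.0):
    """Mimics the BLPOP loop: block until a message arrives, no polling.
    Each received message would be pushed to the waiting client."""
    while True:
        try:
            msg = q.get(timeout=timeout)   # blocks, like BLPOP
        except queue.Empty:
            return                         # nothing arrived within timeout
        if msg is None:                    # sentinel: shut down the consumer
            return
        results.append(msg)

q = queue.Queue()
results = []
consumer = threading.Thread(target=blocking_pop_consumer, args=(q, results))
consumer.start()

for item in ("a", "b", None):   # producer side: two messages, then shutdown
    q.put(item)
consumer.join()
print(results)   # ['a', 'b']
```

No cycles are burned while the queue is empty; the thread simply sleeps in the blocking call until the producer pushes something.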

...

Of course, this article only scratches the surface; there are many other techniques to choose from, such as Pub/Sub, WebSocket, nginx_http_push_module, and so on. Space is limited, so I'll stop here; interested readers can dig in on their own.

