Pipelines and Multiplexers


Latency hurts. Modern computers can churn out data at an alarming rate, and high-speed networks (often with multiple parallel links between important servers) provide enormous bandwidth, but this dreaded latency means the computer spends a lot of time waiting for data. This is one of several reasons why continuation-based programming is becoming increasingly popular. Let's consider some regular program code:

string a = db.StringGet("a");
string b = db.StringGet("b");

In terms of the steps involved, this looks something like:

    [req1]                                                     # client: client library constructs request 1
         [c=>s]                                                # network: request 1 is sent to the server
              [server]                                         # server: server processes request 1
                     [s=>c]                                    # network: response 1 is sent back to the client
                           [resp1]                             # client: client library parses response 1
                                 [req2]
                                      [c=>s]
                                           [server]
                                                  [s=>c]
                                                        [resp2]

Now let's highlight just the portions where the client is actually doing something:

[req1]
      [====waiting=====]
                       [resp1]
                             [req2]
                                  [====waiting=====]
                                                   [resp2]

Keep in mind that this diagram is not to scale; if it were drawn to scale in time, it would be utterly dominated by waiting.

Pipelining

For this reason, many Redis clients allow you to make use of pipelining: sending multiple messages down the connection without waiting for each reply, and processing the replies as they arrive. In .NET, an operation that can be initiated but not yet completed, and which will later either complete or fault, is encapsulated by the TPL via the Task / Task<T> APIs. Essentially, Task<T> represents a "future value of type T" (a non-generic Task is essentially Task<void>). You can then either:

    • .Wait() (block until the task completes)
    • .ContinueWith(...) or await (create a continuation that executes asynchronously when the task completes)
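As a minimal plain-.NET sketch of these two options (independent of Redis; the method names and string values here are invented for illustration, with FetchAsync standing in for a pending operation such as StringGetAsync):

```csharp
using System;
using System.Threading.Tasks;

static class TaskOptionsDemo
{
    // Stands in for a not-yet-complete operation; the name and
    // values are made up for this example.
    public static Task<string> FetchAsync(string fake) => Task.Run(() => fake);

    public static string ViaWait()
    {
        Task<string> pending = FetchAsync("a-value");
        pending.Wait();               // option 1: block until the task completes
        return pending.Result;
    }

    public static async Task<string> ViaAwait()
    {
        // option 2: await creates a continuation that resumes here
        // when the task completes, without blocking a thread
        return await FetchAsync("b-value");
    }

    static async Task Main()
    {
        Console.WriteLine(ViaWait());
        Console.WriteLine(await ViaAwait());
    }
}
```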

For example, here is sample code that takes advantage of pipelining with a Redis client:

var aPending = db.StringGetAsync("a");
var bPending = db.StringGetAsync("b");
var a = db.Wait(aPending);
var b = db.Wait(bPending);

Note: I have used db.Wait here because it automatically applies the configured synchronous timeout; if you prefer, you can also use aPending.Wait() or Task.WaitAll(aPending, bPending). Using the pipeline lets us get both requests onto the network immediately, eliminating most of the latency. It also helps reduce packet fragmentation: 20 requests sent individually (waiting for each response) require at least 20 packets, but 20 requests sent in a pipeline can fit into just a few packets (perhaps even one).

Fire and Forget

A special case of pipelining is when we don't care about the response to an operation, which allows our code to continue immediately while the queued operation is processed in the background. Often this means we can put concurrent work on the connection from a single caller. This is done with the flags parameter:

db.KeyExpire(key, TimeSpan.FromMinutes(5), flags: CommandFlags.FireAndForget);
var value = (string)db.StringGet(key);

The FireAndForget flag causes the client library to queue the work as normal, but to return a default value immediately (KeyExpire returns a bool, so it returns false, since default(bool) is false; the returned value is meaningless and should be ignored). The *Async methods likewise return an already-completed Task<T> carrying the default value (or an already-completed Task for void).

Multiplexing

Pipelining is all well and good, but often any single block of code only wants a single value (or may want to perform a few operations that depend on each other). This means we still have the problem that we spend most of our time waiting for data to travel between client and server. Now consider a busy application, perhaps a web server. Such applications are typically highly concurrent, and if you have 20 parallel requests all needing data, you might think of spinning up 20 connections, or you could synchronize access to a single connection (which means the last caller would have to wait for the previous 19 to finish before it even starts). Or, as a compromise, perhaps a pool of 5 leased connections; whichever you choose, there will be a lot of waiting. StackExchange.Redis does not do this; instead, it does a lot of work for you by multiplexing a single connection to make effective use of the otherwise idle time. When different callers use it at the same time, it automatically pipelines the separate requests, so regardless of whether the requests use blocking or asynchronous access, the work is all pipelined. Thus we could have 10 or 20 of our earlier "get a and b" scenarios (from different application requests), and they would all get onto the connection as soon as possible. Essentially, it fills the waiting time with work from other callers.
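As a sketch of what this enables from the caller's side (not from the original text; "localhost" and the key names are assumptions for illustration, and a running Redis server is required), 20 logically independent requests can share one multiplexed connection:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class MultiplexerDemo
{
    // One shared multiplexer for the whole application; it is designed
    // to be stored and reused, not created per call.
    static readonly ConnectionMultiplexer Conn =
        ConnectionMultiplexer.Connect("localhost");

    static async Task Main()
    {
        IDatabase db = Conn.GetDatabase();

        // 20 logically independent requests, e.g. from 20 web requests.
        // The multiplexer interleaves them on the single connection,
        // so no caller waits behind the other 19.
        var pending = new Task<RedisValue>[20];
        for (int i = 0; i < 20; i++)
            pending[i] = db.StringGetAsync("key:" + i);

        RedisValue[] results = await Task.WhenAll(pending);
        Console.WriteLine(results.Length);
    }
}
```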

Consequently, StackExchange.Redis does not provide (and will never provide) the "blocking pops" (BLPOP, BRPOP and BRPOPLPUSH), because that would allow a single caller to stall the entire multiplexer, blocking all other callers. (The only time StackExchange.Redis needs to hold work back is when verifying the preconditions of a transaction, which is why it encapsulates such conditions into internally managed Condition instances; see the transactions documentation for more.) If you want blocking pops, then I strongly suggest you consider pub/sub instead:

sub.Subscribe(channel, delegate {
    string work = db.ListRightPop(key);
    if (work != null) Process(work);
});
//...
db.ListLeftPush(key, newWork, flags: CommandFlags.FireAndForget);
sub.Publish(channel, "");

Note: this achieves the same purpose without blocking operations:

    • the data is not sent via pub/sub; pub/sub is merely used to notify workers to check for more work

    • if there is no worker, new items remain buffered in the list; the work does not get lost

    • only one worker can pop each value; when there are more consumers than producers, some consumers will be notified and then find nothing to do

    • when you restart a worker, it should assume there may be a backlog of work to process

    • other than that, the semantics are identical to blocking pops

Concurrency

It should be noted that pipelining, multiplexing and future values also play very nicely with continuation-based asynchronous code; for example:

string value = await db.StringGetAsync(key);
if (value == null)
{
    value = await ComputeValueFromDatabase(...);
    db.StringSet(key, value, flags: CommandFlags.FireAndForget);
}
return value;
