Cloud Computing Design Patterns (IV) - Competing Consumers Pattern



Enable multiple concurrent consumers to process messages received on the same messaging channel. This pattern enables a system to process multiple messages concurrently in order to optimize throughput, improve scalability and availability, and balance the workload.


Context and Problem


An application running in the cloud can be expected to handle a large number of requests. Rather than processing each request synchronously, a common technique is for the application to pass them through a messaging system to another service (a consumer service) that handles them asynchronously. This strategy helps to ensure that the business logic in the application is not blocked while the requests are being processed.

The number of requests may vary significantly over time. A sudden burst in user activity or aggregated requests arriving from multiple tenants may cause an unpredictable workload. At peak hours a system might need to process many hundreds of requests per second, while at other times the number may be very small. In addition, the nature of the work performed to handle these requests may be highly variable. Using a single instance of the consumer service may cause that instance to become flooded with requests, or the messaging system may be overloaded by an influx of messages coming from the application. To handle this fluctuating workload, the system can run multiple instances of the consumer service. However, these consumers must be coordinated to ensure that each message is delivered to only a single consumer. The workload also needs to be balanced across consumers to prevent any one instance from becoming a bottleneck.

 

Solution


Use a message queue to implement the communication channel between the application and the instances of the consumer service. The application posts requests in the form of messages to the queue, and the consumer service instances receive messages from the queue and process them. This approach enables the same pool of consumer service instances to handle messages from any instance of the application. Figure 1 shows the architecture.

 

Figure 1 - Using a message queue to distribute work to instances of a service


This solution has the following advantages:
• It provides an inherently load-leveled system that can handle wide variations in the volume of requests sent by application instances. The queue acts as a buffer between the application instances and the consumer service instances, which can help to minimize the impact on availability and responsiveness for both (as described by the Queue-Based Load Leveling pattern). Handling a message that requires some long-running processing does not prevent other messages from being handled concurrently by other instances of the consumer service.
• It improves reliability. If a producer communicates directly with a consumer instead of using this pattern, but does not monitor the consumer, there is a high probability that messages could be lost or fail to be processed if the consumer fails. In this pattern messages are not sent to a specific service instance; a failed service instance will not block a producer, and messages can be processed by any working service instance.
• It does not require complex coordination between the consumers, or between the producer and the consumer instances. The message queue ensures that each message is delivered at least once.
• It is scalable. The system can dynamically increase or decrease the number of instances of the consumer service as the volume of messages fluctuates.
• It can improve resiliency if the message queue provides transactional read operations. If a consumer service instance reads and processes a message as part of a transactional operation, and that consumer service instance then fails, this pattern ensures that the message will be returned to the queue to be picked up and handled by another instance of the consumer service, as illustrated in the sketch below.
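
As a minimal sketch of that last point, the following code (using the same Service Bus SDK as the example later in this article, with placeholder connection details and a hypothetical ProcessMessage helper) receives a message in PeekLock mode, completes it on success, and abandons it on failure so that the queue makes it available again to another consumer instance.

using System;
using Microsoft.ServiceBus.Messaging;

public class ResilientConsumer
{
  // PeekLock is the default receive mode: the message is hidden from other
  // consumers until it is completed, abandoned, or its lock expires.
  private readonly QueueClient client = QueueClient.CreateFromConnectionString(
    "<connection string>", "<queue name>", ReceiveMode.PeekLock);

  public void ReceiveOne()
  {
    // Wait up to 10 seconds for a message to arrive.
    BrokeredMessage message = this.client.Receive(TimeSpan.FromSeconds(10));
    if (message == null) return;

    try
    {
      ProcessMessage(message);   // Hypothetical application-specific work.
      message.Complete();        // Remove the message from the queue.
    }
    catch (Exception)
    {
      // Release the lock so another consumer instance can retry the message.
      message.Abandon();
    }
  }

  private static void ProcessMessage(BrokeredMessage message)
  {
    // Processing logic goes here.
  }
}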


Issues and Considerations


Consider the following points when deciding how to implement this pattern:
• Message ordering. The order in which consumer service instances receive messages is not guaranteed, and does not necessarily reflect the order in which the messages were created. Design the system to ensure that message processing is idempotent, because this helps to eliminate any dependency on the order in which messages are handled (a sketch follows the note below). For more information about idempotency, see Idempotency Patterns on Jonathan Oliver's blog.


Note:

Microsoft Azure Service Bus queues can implement guaranteed first-in-first-out ordering of messages by using message sessions. For more information, see Messaging Patterns Using Sessions on MSDN.
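
Because messages may be delivered more than once and in any order, a common approach is to track which messages have already been handled and skip duplicates. The sketch below uses a hypothetical in-memory store keyed by MessageId; a production system would use durable storage shared by all consumer instances.

using System.Collections.Concurrent;
using Microsoft.ServiceBus.Messaging;

public class IdempotentProcessor
{
  // Hypothetical store of processed message IDs. In a real deployment this
  // would be durable, shared storage (such as a database table), not an
  // in-memory collection local to a single consumer instance.
  private readonly ConcurrentDictionary<string, bool> processedMessageIds =
    new ConcurrentDictionary<string, bool>();

  public void Handle(BrokeredMessage message)
  {
    // If the message was already processed (a redelivery or duplicate),
    // complete it without repeating the work.
    if (!this.processedMessageIds.TryAdd(message.MessageId, true))
    {
      message.Complete();
      return;
    }

    DoWork(message);      // Application-specific, order-independent work.
    message.Complete();   // Remove the message from the queue.
  }

  private static void DoWork(BrokeredMessage message)
  {
    // Processing logic goes here.
  }
}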


• Designing services for resiliency. If the system is designed to detect and restart failed service instances, it may be necessary to implement the processing performed by the service instances as idempotent operations to minimize the effects of a single message being retrieved and processed more than once.
• Detecting poison messages. A malformed message, or a task that requires access to resources that are not available, can cause a service instance to fail. The system should prevent such messages from being returned to the queue, and instead capture and store the details of these messages elsewhere so that they can be analyzed if necessary (a dead-lettering sketch appears at the end of this list).
• Handling results. The service instance handling a message is fully decoupled from the application logic that generates the message, and they may not be able to communicate directly. If the service instance generates results that must be passed back to the application logic, this information must be held in a location that is accessible to both, and the system must provide some indication of when processing has completed to prevent the application logic from retrieving incomplete data.


Note:

If you are using Azure, a worker process may be able to pass results back to the application logic by using a dedicated message reply queue. The application logic must be able to correlate these results with the original message. This scenario is described in more detail in the Asynchronous Messaging Primer; a sketch follows.
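
A minimal sketch of the reply-queue approach, assuming a second Service Bus queue (the queue names used here are hypothetical) that only the application logic reads from. The consumer copies the MessageId of the request into the CorrelationId of the reply so the application can match results to the original messages.

using System;
using Microsoft.ServiceBus.Messaging;

public class ReplyQueueExample
{
  private readonly QueueClient requestClient =
    QueueClient.CreateFromConnectionString("<connection string>", "requests");
  private readonly QueueClient replyClient =
    QueueClient.CreateFromConnectionString("<connection string>", "replies");

  // Consumer side: process a request and post the result to the reply queue.
  public void ProcessAndReply(BrokeredMessage request)
  {
    string result = "<result of processing>";   // Application-specific result.

    var reply = new BrokeredMessage(result)
    {
      // Correlate the reply with the original request.
      CorrelationId = request.MessageId
    };

    this.replyClient.Send(reply);
    request.Complete();
  }

  // Application side: read a reply and match it to the request that produced it.
  public void ReadReply()
  {
    BrokeredMessage reply = this.replyClient.Receive(TimeSpan.FromSeconds(10));
    if (reply == null) return;

    string originalMessageId = reply.CorrelationId;
    string result = reply.GetBody<string>();
    // Look up the pending request by originalMessageId and record the result.
    reply.Complete();
  }
}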


• Scaling the messaging system. In a large-scale solution, a single message queue could be overwhelmed by the number of messages and become a bottleneck in the system. In this situation, consider partitioning the messaging system so that messages from specific producers are sent to a particular queue, or use load balancing to distribute messages across multiple message queues.
• Ensuring reliability of the messaging system. A reliable messaging system is needed to guarantee that, once the application enqueues a message, it will not be lost. This is essential for ensuring that all messages are delivered at least once.
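
The poison-message consideration above can also be handled explicitly in the consumer by dead-lettering messages that can never succeed, as sketched below. (Service Bus additionally dead-letters a message automatically once its MaxDeliveryCount is exceeded, as configured in the example later in this article.) The choice of exception types here is illustrative.

using System;
using Microsoft.ServiceBus.Messaging;

public class PoisonMessageHandler
{
  public void Handle(BrokeredMessage message)
  {
    try
    {
      Process(message);      // Application-specific work; may throw.
      message.Complete();
    }
    catch (FormatException ex)
    {
      // A malformed message will never succeed; move it to the dead-letter
      // sub-queue with a reason so that it can be analyzed later.
      message.DeadLetter("MalformedMessage", ex.Message);
    }
    catch (Exception)
    {
      // A transient failure: return the message to the queue so that this or
      // another consumer instance can retry it.
      message.Abandon();
    }
  }

  private static void Process(BrokeredMessage message)
  {
    // Processing logic goes here.
  }
}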

When to Use this Pattern



Use this pattern when:
• The workload for an application is divided into tasks that can run asynchronously.
• Tasks are independent and can run in parallel.
• The volume of work is highly variable, requiring a scalable solution.
• The solution must provide high availability, and must be resilient if the processing for a task fails.

This pattern may not be suitable when:
• It is not easy to separate the application workload into discrete tasks, or there is a high degree of dependence between tasks.
• Tasks must be performed synchronously, and the application logic must wait for a task to complete before continuing.
• Tasks must be performed in a specific sequence.


Note:

Some messaging systems support sessions that enable a producer to group messages together and ensure that they are all handled by the same consumer. This mechanism can be used with prioritized messages (if they are supported) to implement a form of message ordering that delivers messages in sequence from a producer to a single consumer. A sketch follows.
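
As a sketch of the session mechanism mentioned in the note, assuming a Service Bus queue that was created with RequiresSession set to true: the producer stamps related messages with the same SessionId, and a consumer accepts that session so the whole group is delivered to it in order. The queue name and the three-step loop are illustrative.

using System;
using Microsoft.ServiceBus.Messaging;

public class SessionExample
{
  private readonly QueueClient client = QueueClient.CreateFromConnectionString(
    "<connection string>", "<session-enabled queue>");

  // Producer: group related messages by giving them the same SessionId.
  public void SendOrder(string orderId)
  {
    for (int step = 0; step < 3; step++)
    {
      var message = new BrokeredMessage("Step " + step)
      {
        SessionId = orderId   // All messages for this order share one session.
      };
      this.client.Send(message);
    }
  }

  // Consumer: accept a session and receive its messages in sequence.
  public void ReceiveOrder()
  {
    // Blocks until a session with pending messages is available
    // (or the client's operation timeout expires).
    MessageSession session = this.client.AcceptMessageSession();

    BrokeredMessage message;
    while ((message = session.Receive(TimeSpan.FromSeconds(5))) != null)
    {
      // Messages within a session arrive in the order they were sent.
      message.Complete();
    }

    session.Close();
  }
}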
 

Example

Azure provides Service Bus queues and Azure storage queues that can act as a suitable mechanism for implementing this pattern. The application logic can post messages to a queue, and consumers implemented as tasks in one or more roles can retrieve and process messages from this queue. For resiliency, a Service Bus queue enables a consumer to use PeekLock mode when it retrieves a message from the queue. In this mode the message is not actually removed, but simply hidden from other consumers. The original consumer can delete the message when it has finished processing it. If the consumer fails, the peek lock will time out and the message will become visible again, allowing another consumer to retrieve it.


Note:

For more information about using Azure Service Bus queues, see Service Bus Queues, Topics, and Subscriptions on MSDN. For information about using Azure storage queues, see How to use the Queue Storage Service on MSDN.

The following code, taken from the QueueManager class in the downloadable CompetingConsumers solution, shows how you can create a queue by using a QueueClient instance in the Start event handler of a web or worker role.

private string queueName = ...;
private string connectionString = ...;
...

public async Task Start()
{
  // Check if the queue already exists.
  var manager = NamespaceManager.CreateFromConnectionString(this.connectionString);

  if (!manager.QueueExists(this.queueName))
  {
    var queueDescription = new QueueDescription(this.queueName);

    // Set the maximum delivery count for messages in the queue. A message
    // is automatically dead-lettered after this number of deliveries. The
    // default value for dead letter count is 10.
    queueDescription.MaxDeliveryCount = 3;

    await manager.CreateQueueAsync(queueDescription);
  }
  ...

  // Create the queue client. By default the PeekLock method is used.
  this.client = QueueClient.CreateFromConnectionString(
    this.connectionString, this.queueName);
}

 

The following code snippet shows how an application can create and send a batch of messages to the queue.

public async Task SendMessagesAsync()
{
  // Simulate sending a batch of messages to the queue.
  var messages = new List<BrokeredMessage>();

  for (int i = 0; i < 10; i++)
  {
    var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
    messages.Add(message);
  }

  await this.client.SendBatchAsync(messages);
}

The following code shows how a consumer service instance can receive messages from the queue by following an event-driven approach. The processMessageTask parameter of the ReceiveMessages method is a delegate that references the code to run when a message is received. This code runs asynchronously.

private ManualResetEvent pauseProcessingEvent;
...

public void ReceiveMessages(Func<BrokeredMessage, Task> processMessageTask)
{
  // Set up the options for the message pump.
  var options = new OnMessageOptions();

  // When AutoComplete is disabled it is necessary to manually
  // complete or abandon the messages and handle any errors.
  options.AutoComplete = false;
  options.MaxConcurrentCalls = 10;
  options.ExceptionReceived += this.OptionsOnExceptionReceived;

  // Use of the Service Bus OnMessage message pump.
  // The OnMessage method must be called once, otherwise an exception will occur.
  this.client.OnMessageAsync(
    async (msg) =>
    {
      // Will block the current thread if Stop is called.
      this.pauseProcessingEvent.WaitOne();

      // Execute processing task here.
      await processMessageTask(msg);
    },
    options);
}
...

private void OptionsOnExceptionReceived(object sender,
  ExceptionReceivedEventArgs exceptionReceivedEventArgs)
{
  ...
}
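
Because AutoComplete is disabled in the options above, the delegate passed as processMessageTask is responsible for settling each message. A minimal sketch of such a handler (the processing step itself is hypothetical) could look like the following, and could then be passed to ReceiveMessages as the processMessageTask argument.

private async Task ProcessMessageAsync(BrokeredMessage message)
{
  try
  {
    // Application-specific processing of the message body goes here.

    // Remove the message from the queue once it has been handled successfully.
    await message.CompleteAsync();
  }
  catch (Exception)
  {
    // Release the lock so that the message becomes visible again and can be
    // retried by this or another consumer instance. After MaxDeliveryCount
    // attempts, Service Bus moves the message to the dead-letter queue.
    await message.AbandonAsync();
  }
}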

Note that autoscaling features, such as starting or stopping role instances as the queue length fluctuates, can be used to scale the pool of consumers. For more information, see the Autoscaling Guidance. In addition, it is not necessary to maintain a one-to-one correspondence between role instances and worker processes; a single role instance can implement multiple worker processes. For more information, see the Compute Resource Consolidation pattern.

MSDN: http://msdn.microsoft.com/en-us/library/dn568101.aspx

 

 

 

 
