Using JMS to distribute tasks across a cluster


Decoupling and deferred processing in a request-driven environment are key strategies for building robust, scalable distributed applications. Many services rely solely on clustering to ensure scalability, but as new requirements increase an application's complexity, they often run into problems.
  
Although server clustering is a fundamental scalability technique, it can become inefficient when all processing is performed synchronously within the request. Throughput may rise, but responsiveness suffers.
  
In this article, I discuss asynchronous processing and explain how intelligent task management can improve the performance, availability, scalability, and manageability of your applications. We will build a general, highly configurable task-distribution framework that can send any task to one server, or to every server, in your cluster. Using polymorphism and the Java Message Service (JMS), the framework implements the well-known Command pattern.
  
  Significance of decoupling in practice
  
When a server receives a client request, it usually has to perform several separate tasks before returning a response. Decoupling means not executing all of those tasks immediately: some are placed on a queue and processed asynchronously. Because enqueuing is usually a cheap operation, the synchronous part of the request finishes sooner.
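The idea can be sketched with plain `java.util.concurrent` types (no JMS involved yet); the class and method names here are illustrative, not part of the article's framework:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of decoupling: the request thread only enqueues,
// which is cheap; the deferred work is executed later by a worker.
public class DecoupledHandler {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Called on the request thread: returns as soon as the task is queued.
    public void handleRequest(Runnable deferredWork) {
        queue.offer(deferredWork);
    }

    // In production a dedicated worker thread would loop on queue.take();
    // drainOne processes a single queued task, if any, and reports whether
    // it did so.
    public boolean drainOne() {
        Runnable task = queue.poll();
        if (task == null) return false;
        task.run();
        return true;
    }
}
```

The request thread pays only the cost of `offer`; everything expensive happens on the worker's schedule.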
  
  Advantages of decoupling
  
Processing tasks in orderly batches is generally more efficient than processing them ad hoc, whenever a client happens to send a request. And the positive impact goes beyond what is immediately visible. In theory, decoupling can bring improvements in the following areas:
  
Robustness: improved, because a request depends on fewer components that can fail.
  
Responsiveness: deferring part of the processing shortens the time between receiving the request and returning the response.
  
Scalability: decoupled processing can grow in complexity without the risk of reducing responsiveness.
  
Availability: the server can tolerate a subsystem failure without the request failing, and when a subsystem is unavailable, automatic retry is easy to configure.
  
Naturally, how closely practice matches theory varies from application to application. Still, it is clear that almost every implementation realizes at least some of the advantages above.
  
  Drawbacks of decoupling
  
Like most good things, decoupling also has disadvantages. One of the most serious: if you cannot guarantee enough hardware to keep busy processing queues drained, availability may actually drop. If asynchronous requests arrive faster than your system can handle them, the queues grow very quickly. This deserves attention during design, and automated queue monitoring is highly desirable. Another obvious problem is that in a request-driven environment, many processes are poor candidates for decoupling; most of them, after all, are expected to return a response. Sometimes this demands out-of-the-box thinking, and may even mean changing the services you offer your clients.
  
  Which processes can be decoupled?
  
From a purely technical perspective, almost any process can be decoupled. For example, you can decouple an order transaction by placing the list of purchased items and the customer details on a queue and letting asynchronous processing do the rest. The drawback is that the response cannot include any details of the processing, so it is important to validate the data carefully up front to ensure no problems occur later.
  
A popular variant is to queue the request immediately and then have the client poll the server repeatedly to learn when the response is ready. Although this approach is synchronous in nature and does not shorten the request processing time, it has a psychological advantage: a progress bar can be displayed while polling.
  
Short of decoupling complete business logic (a huge challenge), less central processing such as logging and sending email is a good candidate to consider. When performance matters, there is no reason to make the client wait for such tasks to finish. Email in particular is a good fit for decoupling. Let's take a closer look.
  
  Case study: asynchronous email
  
Sending email the traditional way, as part of a synchronous request, can cause problems. First, connecting to the mail server requires a network round trip, which may be slow, especially when the server is busy. An overloaded mail server can even temporarily make every email-dependent service unavailable.
  
  XA transaction support
  
Another obvious problem is that mail servers are generally non-transactional: a message cannot be recalled once it has been handed over, which can lead to inconsistent notifications when a transaction rolls back. Fortunately, JMS supports transactions and solves this problem by deferring message delivery until the underlying transaction commits.
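The semantics a transacted JMS session provides can be sketched in plain Java; the class here is illustrative and not a real JMS API, but it mirrors the behavior: sends are buffered, become visible only on commit, and vanish on rollback.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of transacted-send semantics: send() buffers, commit() makes the
// buffered messages visible to consumers, rollback() discards them so an
// inconsistent notification is never delivered.
public class TransactedSender {
    private final List<String> pending = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    public void send(String message) {
        pending.add(message);              // buffered, not yet visible
    }

    public void commit() {
        delivered.addAll(pending);         // now consumers can see them
        pending.clear();
    }

    public void rollback() {
        pending.clear();                   // the notification is never sent
    }

    public List<String> delivered() {
        return delivered;
    }
}
```

With a real transacted session, the JMS provider performs this buffering for you, keyed to the surrounding transaction.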
  
When database access and transaction-aware JMS are combined, you need XA, two-phase commit (2PC) transactions. You can emulate XA with non-XA resources, but you risk inconsistent data. Enabling XA is purely a configuration matter and usually requires no code changes; see the WebLogic documentation for details.
  
  Send email via JMS
  
To send email through JMS, we need to configure the JMS components (a JMS server, a JMS queue, and connection factories) and write a message-driven bean (MDB) that performs the actual sending. To send an email from application code, we create a JMS message carrying the email's attributes and content, then send it to the processing queue.
  
That sounds like a lot of work! Fortunately, BEA WebLogic JMS gives us everything we need to build a framework that can decouple almost any process.
  
  Framework for asynchronous execution
  
It's time to look at some code. We will build a framework that supports asynchronous execution of code on one or all servers in a cluster. Implementing it takes some effort, but once the framework is in place, asynchronous execution becomes trivial.
  
The idea is to write classes that contain a public method with the runnable code and another means of setting parameters, perhaps the constructor. Instances of these classes (command messages) are wrapped in JMS object messages and sent to JMS queues configured on your servers. There, consumers pick them up and execute them asynchronously (see Figure 1).
  


Let's take a look at all the parts of this framework one by one:
  
JMS queues: a JMS queue should be configured on each server to receive command messages. You should also configure an error queue to hold messages that repeatedly fail.
  
JMS connection factories: two connection factories should be configured to support both transactional behaviors, one XA-enabled and one not.
  
CommandMessage: a simple Java interface that all command objects must implement. It extends the java.io.Serializable interface, which is required to embed our commands in JMS object messages. And because we want to run commands without knowing their exact type, commands also implement the java.lang.Runnable interface: the consumer simply casts them to Runnable and invokes their run method, executing code without knowing exactly what it is running. This is polymorphism at its best.
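A minimal sketch of this contract, with a sample command (the article does not show the interface's source, so combining both supertypes in one interface is my reading; the SendMailCommand class and its stubbed-out body are illustrative):

```java
import java.io.Serializable;

// The command contract: Serializable so an instance can travel inside a
// JMS ObjectMessage, Runnable so a consumer can execute it without
// knowing its concrete type.
public interface CommandMessage extends Serializable, Runnable {
}

// A sample command: parameters arrive through the constructor, the work
// lives in run(). The actual mail-sending is stubbed out here.
class SendMailCommand implements CommandMessage {
    private final String recipient;

    SendMailCommand(String recipient) {
        this.recipient = recipient;
    }

    public String recipient() {
        return recipient;
    }

    public void run() {
        // the real command would connect to the mail server and send
        System.out.println("mailing " + recipient);
    }
}
```

The consumer never references SendMailCommand by name; it sees only a Runnable pulled out of an ObjectMessage.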
  
Command executor (CommandExecutionManager): we use an MDB to process commands. Instance pooling avoids repeated JMS initialization, which makes an MDB a powerful message listener and a perfect fit for this task. Writing the bean class takes little work: only a few lines of code in the onMessage method (see Listing 1).
  
There, the received message is cast to an ObjectMessage, the embedded command object is extracted, and its run method is invoked. You can configure a retry counter by setting the queue's redelivery limit to a value greater than 0 in the config.xml file; a runtime exception thrown from your command object triggers the redelivery. In addition, you can control the retry frequency by configuring the redelivery delay.
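The heart of that onMessage body (Listing 1 itself is not reproduced in this text) is a purely polymorphic dispatch. In this sketch the javax.jms types are elided so it stands alone; in the real bean, a RuntimeException escaping run() reaches the container and triggers redelivery:

```java
// Core dispatch performed by the MDB's onMessage: the payload extracted
// from the ObjectMessage is treated purely as a Runnable and executed
// without any knowledge of its concrete type.
public class CommandDispatch {
    public static void dispatch(Object payload) {
        if (!(payload instanceof Runnable)) {
            // in the MDB, an exception here would send the message toward
            // the error queue once the redelivery limit is exhausted
            throw new IllegalArgumentException("not a command: " + payload);
        }
        ((Runnable) payload).run();
    }
}
```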
  
A helper class for sending messages (TaskDistributor):
  
Technically speaking, this part is not strictly necessary: you could hand-code the JMS enqueuing every time. But that is tedious, and it is really this helper that makes the framework so practical. The helper is an ordinary Java class with static methods for enqueuing command messages. You could write separate methods for different scenarios, but for the sake of conciseness I chose to write one method that handles the majority of cases:
  
static void execute(CommandMessage cm, long delay, boolean runEverywhere, boolean persisted, boolean enableXA, int priority)
  
This static method takes several parameters for precise execution control. Let's discuss them one by one:
  
CommandMessage cm: the command message instance.
  
long delay: sets the delivery-time attribute via the weblogic.jms.extensions.WLMessageProducer class, so that a command can be executed at night or at some other convenient time. Accepting a Date object instead would also be reasonable.
  
boolean runEverywhere: determines whether the message is sent for execution to one randomly selected server or to every server in the cluster.
  
boolean persisted: selects the delivery mode via the queue sender's setDeliveryMode method. Always persist business-critical messages so they are not lost if a server crashes. Durability comes at a performance cost, however, which should also be taken into account.
  
boolean enableXA: selects whether to use the XA-enabled JMS connection factory. When set to true, the send participates in the surrounding transaction (if any), and the message is not enqueued until the transaction commits.
  
int priority: determines the message's JMS priority. Before the message is sent, the setJMSPriority method of the javax.jms.Message class is called with the given value. The valid range is 0-9.
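How these parameters might translate into JMS settings can be sketched as follows. The JNDI names are assumptions about the deployment, the numeric constants mirror javax.jms.DeliveryMode, and the real WebLogic-specific send logic is summarized in a comment since it cannot run outside the server:

```java
// Sketch of how TaskDistributor could map the execute(...) flags onto
// JMS settings. JNDI names are hypothetical.
public class TaskDistributor {
    static final int DELIVERY_PERSISTENT = 2;      // javax.jms.DeliveryMode.PERSISTENT
    static final int DELIVERY_NON_PERSISTENT = 1;  // javax.jms.DeliveryMode.NON_PERSISTENT

    // Two factories are configured up front: one XA-enabled, one not.
    static String factoryJndiName(boolean enableXA) {
        return enableXA ? "jms/CommandFactoryXA" : "jms/CommandFactory";
    }

    static int deliveryMode(boolean persisted) {
        return persisted ? DELIVERY_PERSISTENT : DELIVERY_NON_PERSISTENT;
    }

    static int checkPriority(int priority) {
        if (priority < 0 || priority > 9) {        // JMS defines priorities 0-9
            throw new IllegalArgumentException("JMS priority must be 0-9");
        }
        return priority;
    }

    // The real execute(...) would look up the chosen factory, create a
    // sender, call WLMessageProducer.setTimeToDeliver(delay), apply the
    // delivery mode and priority, wrap the CommandMessage in an
    // ObjectMessage, and send it to one queue, or to every server's queue
    // when runEverywhere is true.
}
```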
