Migration, failover, and scaling in a WebSphere MQ cluster


The impact of messaging on SOA

In the first part of this article, I described how many long-standing best practices from the point-to-point messaging architecture need to be updated to meet the requirements of service-oriented development in the messaging domain. Here we will consider a case study to understand migration, failover, and scaling of queue managers, and the impact on naming conventions, tooling, management processes, and operations when these activities are considered in the context of SOA.

First, let's look at some terms:

Migration, in this discussion, includes any situation where a queue manager is rehosted, perhaps to refresh the underlying hardware or to move to a different platform. Migration always involves building a new queue manager, logically moving the applications and queues over to it, and eventually retiring the old queue manager.
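
As a rough illustration of what this looks like in a cluster, consider the MQSC sketch below. It is not from the case study; the queue manager names QM.OLD and QM.NEW, the cluster DIV_QA, and the queue name are all assumed for the example.

* On QM.NEW (the replacement host): advertise an equivalent
* instance of each application queue to the cluster.
DEFINE QLOCAL('APP.FUNCTION.SUBFUNCTION.QA') +
       CLUSTER('DIV_QA') +
       DEFBIND(NOTFIXED) +
       REPLACE

* On QM.OLD (the retiring host): withdraw from cluster workload
* distribution so new traffic flows to QM.NEW; drain any
* remaining messages, then retire the old queue manager.
SUSPEND QMGR CLUSTER('DIV_QA')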

Failover is the planned or unplanned shutdown of a primary system, together with the task of bringing a standby node online to take over the processing load. The complementary operation, failback, returns the load to the primary node once it has recovered. A typical example is a disaster-recovery test that involves failing over to a hot standby system, testing the applications there, and then failing back to the primary system.

Horizontal scaling is defined here as changing the number of concurrent instances of an input queue in the cluster in order to increase or decrease processing capacity. Scaling to accommodate growth is usually permanent. Scaling to accommodate a peak season is a cyclical process that first adds and then removes capacity. The scaling process may or may not involve building new queue managers, and typically does not involve changing any existing instances.
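
In a cluster, adding a concurrent instance can be as simple as defining an identically named clustered queue on a second queue manager; the cluster workload algorithm then spreads messages across both instances. A minimal sketch, assuming a second queue manager QM2 that is already a member of the illustrative DIV_QA cluster:

* On QM2: an identically named clustered queue becomes a second
* concurrent instance of APP.FUNCTION.SUBFUNCTION.QA.
DEFINE QLOCAL('APP.FUNCTION.SUBFUNCTION.QA') +
       CLUSTER('DIV_QA') +
       DEFBIND(NOTFIXED) +
       REPLACE

DEFBIND(NOTFIXED) is what allows each message, rather than each open of the queue, to be balanced across the instances.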

Some aspects of service orientation are best served by an IBM WebSphere MQ cluster. Clustering can provide the location independence, runtime name resolution, and concurrency that SOA applications require. For these reasons, the adoption of SOA is driving migration from point-to-point messaging networks to clustered environments.
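
Runtime name resolution is easy to observe: from any queue manager in the cluster you can display every advertised instance of a queue without knowing in advance where it is hosted. A small sketch (the queue name is illustrative):

* List every instance of the clustered queue and its hosting
* queue manager, as seen from the local cluster repository.
DISPLAY QCLUSTER('APP.FUNCTION.SUBFUNCTION.QA') ALL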

The point-to-point paradigm

Most long-established WebSphere MQ shops have been through at least one hardware migration and have experienced enough growth to require scaling at some point, so a process for these activities is probably already in place. And hopefully every shop has a disaster-recovery plan, even one that has only just started using WebSphere MQ. What do these processes look like? In a point-to-point world, migration, failover, and scaling are often quite different and vary from one shop to the next.

Migration: It is common to plan and execute a migration as a single event in which all tasks are carried out in one unbroken sequence, including a large number of build and configuration activities on the new target queue manager. In this way, you capture the current state of the retiring queue manager and move it intact to the new host. Because this is a one-time activity, there is little, if any, opportunity for reuse.

Failover: The goal of failover is to switch between two functionally equivalent queue managers. Although they are (or at least should be) distinct queue managers with distinct names, the only significant difference between the two, as seen from the rest of the network, is the CONNAME. Because a failover is always expected to be a round trip, it is worthwhile to invest the time to build automation or procedures that make the activity consistent, reliable, and repeatable. Typically, failover involves a preparation phase that is separate from the actual execution of the switch.

Scaling: Scaling up to add capacity is more common than scaling to adapt to cyclical workloads. As a result, most scaling exercises are planned and executed as one-time events, similar to migrations. The main difference is that after the event, procedures are put in place to ensure that all instances on the queue managers remain synchronized, so that any change applied to one instance is applied to all instances.
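
One common way to keep instances synchronized is to make the definition scripts idempotent, as in Listing 1 later in this article, and run the identical script against every instance. A sketch, assuming the instances live on queue managers QM1 and QM2 (names illustrative):

* Apply the same script to each hosting queue manager, e.g.:
*   runmqsc QM1 < app.function.mqsc
*   runmqsc QM2 < app.function.mqsc
* REPLACE makes the script safe to re-run, so every queue
* manager ends up with matching attributes.
DEFINE QLOCAL('APP.FUNCTION.SUBFUNCTION.QA') +
       BOTHRESH(5) +
       BOQNAME('APP.FUNCTION.BACKOUT.QA') +
       REPLACE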

The point-to-point implementation

In this architecture, the queue manager is the root context of the object namespace, and the processes used for administration and operations reflect that orientation. Because changing a queue manager's name in a point-to-point network is disruptive, it is tempting to reuse the same queue manager and channel names during failover and migration. This is actually an anti-pattern; some things initially look like the right idea but later prove to be a nightmare, and this is one of them. Despite the problems with queue manager name reuse, most of the migration plans I have seen still depend on it. One consequence is that it can be difficult or impossible to have both queue managers online at the same time, which constrains the planning and execution of migration tasks.

In the case of scaling, where the primary goal is concurrency, reusing the queue manager name causes more problems than it solves, so there is usually no temptation to create duplicate-named instances. Here, queue manager aliases are typically used to achieve node equivalence, but the net effect is still that a queue is resolved in the context of its queue manager.
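
For reference, a queue manager alias is simply a remote queue definition with a blank RNAME. A sketch, with all names assumed for illustration:

* 'APPNODE' is an alias that resolves to the real queue manager
* QM1; repointing RQMNAME at a sibling node later achieves the
* node equivalence described above without touching applications.
DEFINE QREMOTE('APPNODE') +
       RNAME(' ') +
       RQMNAME('QM1') +
       XMITQ('QM1') +
       REPLACE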

Another aspect of the point-to-point architecture is that runtime configurations tend to remain fairly static. In fact, many processes and procedures assume that the configuration is stable. Take, for example, object definitions. In many shops, object definitions are stored in MQSC scripts. The initial version of a script contains the local baseline for the queue manager, such as setting the dead-letter queue, locking down remote administrative access, and tuning channels. Application-specific objects are then added, either in their own scripts or appended to the master script. As new queues, topics, and other objects are needed, the script is run again. This redefines the existing objects and creates any new ones. A typical example resembles the following:

Listing 1

DEFINE QLOCAL('APP.FUNCTION.SUBFUNCTION.QA') +
       DESCR('APP service queue for QA') +
       BOTHRESH(5) +
       BOQNAME('APP.FUNCTION.BACKOUT.QA') +
       CLUSTER('DIV_QA') +
       CLUSNL(' ') +
       DEFBIND(NOTFIXED) +
       REPLACE

The first time this definition runs, it creates the local queue APP.FUNCTION.SUBFUNCTION.QA. The REPLACE option ensures that the definition does not generate an error on subsequent runs. The key point here is that all network maintenance activity happens at build time. The exception is failover. Because failover is designed into the system, the scripts that execute it, and the complementary failback scripts, are usually created in advance. These are typically also MQSC scripts, but instead of DEFINE statements they contain ALTER statements that resemble the following:

Listing 2

ALTER CHANNEL('QM1.QM2') CHLTYPE(SDR) +
      CONNAME('host.of.qm2(1414)')
* Hold new traffic during the switch by put-disabling the
* transmission queue (assumed here to follow the common
* convention of naming the XMITQ after the remote queue manager)
ALTER QLOCAL('QM2') PUT(DISABLED)
RESET CHANNEL('QM1.QM2')
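
The complementary failback script is the mirror image: once the primary recovers, the channel is pointed back at it, the transmission queue is re-enabled, and the channel is restarted. A sketch, with the primary's host address assumed for illustration:

* Failback: repoint the sender channel at the recovered primary,
* re-enable puts on the transmission queue, reset the sequence
* number, and restart the channel.
ALTER CHANNEL('QM1.QM2') CHLTYPE(SDR) +
      CONNAME('primary.host.of.qm2(1414)')
ALTER QLOCAL('QM2') PUT(ENABLED)
RESET CHANNEL('QM1.QM2')
START CHANNEL('QM1.QM2')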
