WAS cluster series (11): Cluster creation, Step 9: Publish and verify the program on the WAS cluster

(1) Install the application package




Click OK to go to the following page:


Click "Next", for example:


Note that the following step is important: select the check boxes for the cluster, the web server, and the modules, and then click "Apply". The options are as follows:

After the preceding steps are completed, the configuration page changes to the following, indicating that the mapping was configured successfully:
In the last step, click "Finish", as shown below:

Click "save", as shown in the following figure:

Next, synchronize the nodes and click "OK", as shown below:
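For readers who prefer scripting to the console wizard, the following is a minimal wsadmin (Jython) sketch of the same installation step. The EAR path, application name, and cluster name are placeholders, and the -cluster and -usedefaultbindings install options are assumed to be available in your WAS Network Deployment version; treat this as a sketch, not the exact procedure captured in the screenshots.

    # Minimal wsadmin (Jython) sketch of the installation above.
    # Run with: <DMGR_PROFILE>/bin/wsadmin.sh -lang jython -f install_app.py
    # The EAR path, application name and cluster name below are placeholders.
    earFile = '/tmp/DefaultApplication.ear'
    appName = 'DefaultApplication'
    clusterName = 'cluster1'

    # Install the EAR with the cluster as the deployment target (the scripted
    # equivalent of ticking the cluster/module check boxes and clicking "Apply").
    AdminApp.install(earFile, '[-appname %s -cluster %s -usedefaultbindings]' % (appName, clusterName))

    # Persist the change to the master configuration ("Save" in the console).
    AdminConfig.save()

Node synchronization and application startup, which the console performs next, are sketched in the following step.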

(2) Start the application package. After the configuration synchronization between nodes completes, the following page is displayed; click "Start", as shown below:

After startup, a success message is displayed, as shown below:
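The synchronization and startup above can also be scripted. Below is a rough wsadmin (Jython) sketch; the node names and application name are placeholders, and it assumes exactly one cluster member per node (completeObjectName simply returns the first matching MBean).

    # Sketch: synchronize each managed node, then start the application on it.
    # Node names and the application name are placeholders.
    nodes = ['node01', 'node02']
    appName = 'DefaultApplication'

    for node in nodes:
        # Push the saved configuration from the Dmgr to the node.
        sync = AdminControl.completeObjectName('type=NodeSync,node=%s,*' % node)
        if sync:
            AdminControl.invoke(sync, 'sync')

        # Start the application on this node's application server
        # (assumes one cluster member per node).
        appMgr = AdminControl.completeObjectName('type=ApplicationManager,node=%s,*' % node)
        if appMgr:
            AdminControl.invoke(appMgr, 'startApplication', appName)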

(3) Generate the plug-in
The plug-in is generated successfully, as shown below:

(4) Propagate the plug-in. After "Generate plug-in" completes, perform "Propagate plug-in", as shown below:

The "propagation plug-in" is successful, as shown in:

(5) Verify the publishing process

Enter the access address: http://10.53.105.63/snoop

(This is the IP address of the DM server.)

Then enter the address of another node in the cluster to view the same details:

http://10.53.105.66/snoop

(This is the IP address of the other node in the cluster.)
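The same check can be scripted, which is convenient when you repeat the verification after configuration changes. The short Python 3 sketch below fetches /snoop through both addresses used in this walkthrough (replace them with your own) and reports whether each one answers.

    # Quick scripted version of the verification: request /snoop on both addresses.
    import urllib.request

    for url in ('http://10.53.105.63/snoop', 'http://10.53.105.66/snoop'):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read()
            # snoop echoes request and server details; an HTTP 200 with a
            # non-empty body confirms the application answers on this address.
            print('%s -> %s, %d bytes' % (url, resp.status, len(body)))
        except Exception as exc:
            print('%s -> FAILED: %s' % (url, exc))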

We can see that the application has been successfully deployed to the cluster.

********************************************************************************

Original work from the "Deep Blue Blog". Reprints are welcome; please be sure to indicate the source (http://blog.csdn.net/huangyanlong).

If you find an error, please leave a comment or send an email to hyldba@163.com. Thank you very much.

********************************************************************************


How do you configure and publish to a WebSphere cluster with two single nodes, one IHS node, and one DM (Deployment Manager) node?

1. Add the two single nodes to the Dmgr with addNode.sh <dmgr_host> <soap_port> so that they become managed nodes (see the sketch after this list).
2. Create a cluster.
3. Configure the data source, with its scope set to the cluster.
4. Publish the application, selecting the cluster as its target.
5. Update the web server plug-in.
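As a companion to the outline above, here is a rough wsadmin (Jython) sketch of step 2. Step 1 (addNode.sh) is run on each node outside wsadmin and appears only as a comment, the cluster, node, and member names are placeholders, and the -firstMember/-templateName arguments are an assumption about how the first cluster member is created in your WAS version. The application installation (step 4) was sketched earlier in this article.

    # Step 1 is run on each single node first, outside wsadmin:
    #   <PROFILE>/bin/addNode.sh <dmgr_host> <dmgr_soap_port>
    # The sketch below then runs on the Dmgr: wsadmin.sh -lang jython -f make_cluster.py
    clusterName = 'cluster1'
    members = [('node01', 'member1'), ('node02', 'member2')]

    # Step 2: create the cluster, then one member per managed node.
    AdminTask.createCluster('[-clusterConfig [-clusterName %s]]' % clusterName)
    first = 1
    for node, member in members:
        cfg = '-clusterName %s -memberConfig [-memberNode %s -memberName %s]' % (clusterName, node, member)
        if first:
            # Assumption: the first member is built from the 'default' server template.
            cfg = cfg + ' -firstMember [-templateName default]'
            first = 0
        AdminTask.createClusterMember('[%s]' % cfg)

    AdminConfig.save()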

That is roughly the process. If you run into any difficulties, please raise them one by one. If you have not worked with WAS before, we suggest first finding some material on the Internet.

How to build a cluster file system in a data center

We hope this gives you a preliminary understanding of this type of technology so that you can better meet the need for highly available storage. There are many ways to build a cluster and a highly available data storage solution, but it takes some time to study the advantages and disadvantages of each option. The choice of storage architecture and file system is crucial, because most storage solutions impose strict restrictions and the working environment must be designed carefully.

Infrastructure. Some readers may want to assemble a set of servers that access the same file system in parallel, while others may want to replicate the storage and provide both parallel access and redundancy. There are two ways to give multiple servers access to the same disk: one is to let all of those servers see the same disk, the other is replication. The shared-disk architecture is the most common one in the Fibre Channel SAN and iSCSI worlds. It is quite simple to configure the storage system so that multiple servers see the same logical block device, or LUN, but without a cluster file system there will be chaos as soon as several servers try to use that logical block device at the same time. This is exactly the problem cluster file systems address, and we describe it in more detail below.

In general, a shared-disk system has one weak point: the storage system itself. However, this is not always the case, because current technology has made the notion of a "shared disk" much more flexible. SAN and NAS devices, as well as Linux-based commodity hardware, can replicate the underlying disks to another storage node in real time to provide a simulated shared-disk environment. Once the underlying block devices are replicated, those nodes can access the same data and run the same cluster file system, although such replication goes beyond the definition of a traditional shared disk. In contrast, the shared-nothing architecture avoids the problems of shared disks: nodes attached to different storage devices notify a master server of the changes whenever data is written to a block. Today the shared-nothing architecture lives on in file systems such as Hadoop's, which deliberately create multiple copies of the data on many nodes to improve performance and redundancy. Clusters whose nodes replicate between their own storage devices can also be shared-nothing.

Design choices. As we said, you cannot simply access the same block device from multiple servers. You may have heard of file locking, so it may seem strange that an ordinary file system cannot handle this. At the file-system level, the file system does lock files to prevent data corruption; but at the operating-system level, the file system driver has complete access to the underlying block device and can roam freely across it. Most file systems assume that they have been given a block device and that it is theirs alone. To solve this problem, cluster file systems adopt a concurrency-control mechanism. Some store their metadata in a partition of the shared device, while others use a centralized metadata server. Either way, every node in the cluster can see the state of the file system, which guarantees safe parallel access. However, if you want to keep the system highly available and eliminate single points of failure, the centralized metadata server approach is slightly inferior.

One more note: a cluster file system must react quickly when a node fails. If a node writes bad data or for some reason stops communicating its metadata changes, the other nodes must be able to isolate (fence) it. Fencing can be achieved in several ways; the most common is power management, that is, powering the failed node off.
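To make the concurrency-control and fencing ideas concrete, here is a deliberately simplified toy sketch in Python of a centralized coordinator: nodes must obtain a lock on a block before writing it, and a node whose heartbeat goes stale is fenced and its locks are revoked. Real cluster file systems (GFS2, OCFS2, Lustre and the like) implement distributed lock management and fencing very differently; nothing below corresponds to any real API.

    # Toy illustration only: a centralized coordinator that serializes block
    # access and fences nodes that stop heartbeating. No real API is used here.
    import time

    class Coordinator:
        def __init__(self, heartbeat_timeout=5.0):
            self.locks = {}        # block id -> owning node
            self.last_seen = {}    # node -> time of last heartbeat
            self.fenced = set()    # nodes cut off from the storage
            self.timeout = heartbeat_timeout

        def heartbeat(self, node):
            self.last_seen[node] = time.time()

        def acquire(self, node, block):
            """Grant `block` to `node` if the node is healthy and the block is free."""
            self.check_failures()
            if node in self.fenced:
                return False
            owner = self.locks.get(block)
            if owner is None or owner == node:
                self.locks[block] = node
                return True
            return False

        def release(self, node, block):
            if self.locks.get(block) == node:
                del self.locks[block]

        def check_failures(self):
            """Fence nodes whose heartbeat is stale and revoke their locks."""
            now = time.time()
            for node, seen in list(self.last_seen.items()):
                if now - seen > self.timeout and node not in self.fenced:
                    self.fenced.add(node)  # in practice: power the node off
                    for block, owner in list(self.locks.items()):
                        if owner == node:
                            del self.locks[block]

A node that has been fenced can no longer acquire locks, so even if it comes back in a confused state it cannot corrupt data that the surviving nodes are now writing.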
