WAS Cluster Series (12): Building a Cluster, Step 10: Viewing the WAS Cluster Mechanism Through a Verification Program

Source: Internet
Author: User


To better understand the working mechanism of the WAS cluster, refresh the test program several times on a node and pay attention to the "Local address" and "Local port" values. For example:

[Screenshots of four successive refreshes of the verification page, each showing the "Local address" and "Local port" reported by the serving cluster member.]


Summarizing the above results, the pattern is clear at a glance:

Refresh count     Local address   Local port   Server
First refresh     10.53.105.63    9080         Server1-WIN-PLDC49NNSAA
Second refresh    10.53.105.66    9081         Server2-WIN-PLDC49NNSAA
Third refresh     10.53.105.63    9080         Server1-WIN-IRS49CN78FE
Fourth refresh    10.53.105.66    9081         Server2-WIN-IRS49CN78FE
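The alternation in the table is the web server plug-in distributing requests across the cluster members in round-robin fashion. As a rough illustration (a toy sketch, not WAS's actual plug-in implementation), cycling through the four observed members reproduces the pattern above:

```python
from itertools import cycle

# The four cluster members observed in the verification output above.
members = [
    ("10.53.105.63", 9080, "Server1-WIN-PLDC49NNSAA"),
    ("10.53.105.66", 9081, "Server2-WIN-PLDC49NNSAA"),
    ("10.53.105.63", 9080, "Server1-WIN-IRS49CN78FE"),
    ("10.53.105.66", 9081, "Server2-WIN-IRS49CN78FE"),
]

def make_router(members):
    """Return a function that yields the next member, round-robin."""
    ring = cycle(members)
    return lambda: next(ring)

route = make_router(members)
for refresh in range(1, 5):
    addr, port, name = route()
    print(f"Refresh {refresh}: {addr}:{port} -> {name}")
```

Each refresh lands on the next member in the ring, which is exactly the address/port rotation recorded in the table.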

****************************************
Original work from the "Deep Blue blog". Reprinting is welcome, but please be sure to cite the source (http://blog.csdn.net/huangyanlong).
If you find an error, please leave a message or send an email (hyldba@163.com). Thank you very much.
****************************************




How to build a cluster file system in a data center

We hope this will give you a preliminary understanding of this class of technology so you can better meet the needs of highly available storage. There are many options for creating a cluster with a highly available data storage solution, but it takes some time to study the advantages and disadvantages of each choice. The choice of storage architecture and file system is crucial, because most storage solutions impose strict restrictions and you need to design the working environment carefully.

Infrastructure. Some readers may want to assemble a set of servers that access the same file system in parallel, while others may want to replicate storage and provide both parallel access and redundancy. There are two ways to achieve multi-server access to the same disk: make all of those servers see the same disk directly, or use replication.

The shared-disk architecture is the most common one in the Fibre Channel SAN and iSCSI fields. The storage system can be configured quite simply so that multiple servers see the same logical block device, or LUN; but without a cluster file system, chaos ensues when multiple servers try to use that logical block device at the same time. This problem is what the cluster file system addresses, and we will introduce it in detail below.

In general, the shared-disk architecture has one weakness: the storage system is a single point of failure. However, this is not always the case, because with current technology the concept of a "shared disk" is more flexible than it sounds. SAN devices, NAS devices, and Linux-based commodity hardware can replicate all underlying disks to another storage node in real time to provide a simulated shared-disk environment. Once the underlying block devices are replicated, those nodes can access the same data and run the same cluster file system. However, such replication goes beyond the definition of a traditional shared disk. The opposite of the shared-disk architecture is shared-nothing.
In a shared-nothing architecture, nodes connected to different storage devices notify a master server of changes whenever data is written to a block. The shared-nothing architecture survives today in file systems like Hadoop's, which deliberately create multiple copies of data across many nodes to improve performance and redundancy. Clusters that use their own storage devices and replicate between storage devices or nodes can also be shared-nothing.

Design choices. As we said, you cannot simply access the same block device from multiple servers. You may have heard that file systems perform locking, so it may seem strange that an ordinary file system cannot handle this. At the file-system level, the file system itself locks files to prevent data corruption. But at the operating-system level, file-system drivers have full access to the underlying block device and can roam freely across it. Most file systems assume that the block device they are given is owned by them alone.

To solve this problem, cluster file systems adopt a concurrency-control mechanism. Some cluster file systems store metadata in a partition of the shared device, while others use a centralized metadata server. Either way, all nodes in the cluster can see a consistent state of the file system, which ensures safe parallel access. However, if you want to guarantee high availability and eliminate single points of failure, the centralized-metadata-server approach is slightly inferior.

One more note: a cluster file system must respond rapidly when a node fails. If a node writes bad data or stops communicating metadata changes for some reason, the other nodes must be able to isolate (fence) it. Fencing can be achieved in several ways; the most common method is power fencing, that is, powering off the failed node.
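To make the concurrency-control idea concrete, here is a minimal sketch (a toy illustration, not the protocol of any real cluster file system) of a centralized lock manager: each "node" must acquire a per-file lock from the server before writing, so writes to the same file from different nodes are serialized and none are lost:

```python
import threading
from collections import defaultdict

class MetadataServer:
    """Toy centralized lock manager: one lock per file path."""
    def __init__(self):
        self._locks = defaultdict(threading.Lock)
        self._guard = threading.Lock()

    def acquire(self, path):
        with self._guard:          # protect the lock table itself
            lock = self._locks[path]
        lock.acquire()             # block until the file is free

    def release(self, path):
        self._locks[path].release()

server = MetadataServer()
contents = []

def node_write(node_id):
    """One cluster node appending 1000 records to a shared file."""
    for _ in range(1000):
        server.acquire("/shared/file")
        contents.append(node_id)   # critical section: serialized write
        server.release("/shared/file")

threads = [threading.Thread(target=node_write, args=(i,)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(contents))  # 2000: every write from both nodes survived
```

A real cluster file system additionally has to handle the failure cases described above: if a node dies while holding a lock, the others must fence it and reclaim the lock, which this sketch does not attempt.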

Windows Cluster Construction Problems

Search for it on Baidu or Google; if someone has published a solution online, it will have been indexed by the search engines. If not, find a relevant forum, preferably an active one, register as a member, and post a request for help; an expert there will assist you.
