A Basic Introduction to Linux Clusters in HPC Architecture (Part 1)

Source: Internet
Author: User

Linux clusters have become very popular in many fields. With the emergence of cluster technology and the growing adoption of open source software, it is now possible to build a supercomputer for a small fraction of the cost of a traditional high-performance machine.

This two-part series briefly introduces the concepts of high-performance computing (HPC) with Linux cluster technology and demonstrates how to build clusters and write parallel programs. This first article discusses the types of clusters and their uses, HPC fundamentals, the role of Linux in HPC, and the reasons for the growth of cluster technology. Part 2 covers parallel algorithms and describes how to write parallel programs, how to build clusters, and how to benchmark them.

Types of HPC Architecture

Most HPC systems use parallelism. Many software platforms are oriented toward HPC, but first let's look at the hardware.

HPC hardware can be divided into three categories:

  • Symmetric multiprocessors (SMP)
  • Vector processors
  • Clusters

Symmetric multiprocessors (SMP)

SMP is one of the architectures used in HPC; in it, multiple processors share the same memory. (In clusters, also known as massively parallel processors (MPP), the processors do not need to share memory; we will cover this in more detail later.) Compared with MPP, SMP systems are generally more expensive and less scalable.

Vector processor

As the name suggests, in a vector processor the CPU is optimized to handle operations on vectors (arrays) efficiently. Vector processor systems offered very high performance and dominated HPC architecture through the 1980s and 1990s, but in recent years clusters have become far more popular.

Cluster

Clusters are the predominant HPC hardware of recent years: a cluster is a set of MPPs. A processor in a cluster is usually called a node; it has its own CPU, memory, operating system, and I/O subsystem and can communicate with the other nodes. Today many sites use commodity workstations running Linux and other open source software as cluster nodes.

Next you will see how these kinds of HPC hardware differ, but first let's look more closely at clusters.

Cluster Definition

The term "cluster" can mean different things in different contexts. This article focuses on the following three types of clusters:

  • Failover clusters
  • Load-balancing clusters
  • High-performance clusters

Failover clusters

The simplest failover cluster has two nodes: one active and one standby, with the standby constantly monitoring the active node. If the active node fails, the standby takes over its work so that critical systems can keep running.
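As a sketch of the idea, the heartbeat logic behind such a two-node failover pair might look like the following (the node names, timeout value, and class are illustrative, not the API of any particular high-availability product):

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before the standby takes over

class FailoverPair:
    """Tracks the active node's heartbeats; promotes the standby on timeout."""

    def __init__(self):
        self.active = "node-a"
        self.standby = "node-b"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called whenever the active node reports that it is alive.
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        # The standby runs this periodically; if the active node has been
        # silent for too long, the standby takes over its role.
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active, self.standby = self.standby, self.active
            self.last_heartbeat = now
        return self.active

pair = FailoverPair()
print(pair.check())                              # active node still healthy
print(pair.check(now=pair.last_heartbeat + 10))  # timed out: standby takes over
```

Real failover software adds fencing, shared-storage handoff, and virtual-IP migration on top of this basic monitor-and-promote loop.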

Load-balancing clusters

Load-balancing clusters are usually deployed for very busy Web sites. Several nodes host the same site, and each new request for a Web page is dynamically routed to a node with a lower load.
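A minimal sketch of that dynamic routing, here using a simple least-loaded policy (the node names and the policy are illustrative; production load balancers track live connection counts or response times):

```python
def route(request_id, loads):
    """Pick the node with the lowest current load and assign the request to it."""
    node = min(loads, key=loads.get)  # least-loaded node wins; ties go to the first
    loads[node] += 1
    return node

# Three identical web nodes, all idle; five requests arrive in sequence.
loads = {"web1": 0, "web2": 0, "web3": 0}
assignments = [route(i, loads) for i in range(5)]
print(assignments)  # ['web1', 'web2', 'web3', 'web1', 'web2']
print(loads)        # {'web1': 2, 'web2': 2, 'web3': 1}
```

The requests spread evenly across the nodes, which is exactly the behavior described above.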

High-performance clusters

High-performance clusters are used to run parallel programs where time-to-solution matters, and they are of special interest to the scientific community. They typically run simulations and other CPU-intensive programs that would take enormous amounts of time on ordinary hardware.

Figure 1 illustrates a basic cluster. Part 2 of this series shows how to create such a cluster and write programs for it.


Figure 1. Basic Cluster

 

Grid computing is a broader term, typically referring to the use of loosely coupled systems to implement a service-oriented architecture (SOA). Cluster-based HPC is a special case of grid computing in which the nodes are tightly coupled. A successful and well-known grid computing project is SETI@home, the search for extraterrestrial intelligence, which uses roughly one million home PCs, during screen-saver idle CPU cycles, to analyze radio telescope data. A similar success is Folding@home, which computes protein folding.

Common uses of high-performance clusters

Almost every industry needs fast processing power. As cheaper and faster computers become available, more companies are interested in exploiting these technological advances. There is no upper limit on the demand for computational processing power; although processing power is improving rapidly, demand still outstrips what is available.

Life Science Research

Protein molecules are long, complex chains that can in principle assume countless three-dimensional shapes. In nature, when a protein is placed in a solvent, it quickly folds into its native state. Incorrect folding is implicated in many diseases, such as Alzheimer's disease, so the study of protein folding is very important.

One way scientists try to understand protein folding is by simulating it on computers. In nature, folding happens quickly (some proteins fold in as little as a microsecond), but the process is so complex that the simulation could take ten years on an ordinary computer. This is just one small niche among many industrial fields, and it requires enormous computing power.

Other areas in this industry include pharmaceutical modeling, virtual surgical training, environmental and diagnostic visualization, complete medical-record databases, and human genome projects.

Oil and Gas Exploration

Seismic data contain detailed information about the interior characteristics of the earth beneath both the continents and the oceans. Analyzing that data helps detect oil and other resources. Even for a small region, terabytes of data must be reconstructed, and the analysis clearly requires tremendous computing power. Demand in this field is so strong that much of the world's supercomputing capacity is devoted to this kind of work.

Other geological research requires similar computing power, such as systems for predicting earthquakes and the multi-spectral satellite imaging systems used in security work.

Image Rendering

In engineering fields (such as aerospace engine design), manipulating high-resolution interactive images has always been a challenge in terms of performance and scalability because of the volume of data involved. Cluster-based techniques have been successful here: the task of rendering the screen is split among the nodes of the cluster, each node uses its own graphics hardware to render its share of the image and transmits the resulting pixels to a master node, and the master node combines them into the final, complete image.
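The work division behind such a rendering cluster can be sketched very simply: assign each node a contiguous band of scanlines. The helper below is a hypothetical illustration of that split, not part of any real rendering framework:

```python
def split_rows(height, nodes):
    """Divide `height` scanlines as evenly as possible among `nodes` renderers."""
    base, extra = divmod(height, nodes)
    tiles, start = [], 0
    for n in range(nodes):
        rows = base + (1 if n < extra else 0)  # spread the remainder over the first nodes
        tiles.append((start, start + rows))    # half-open row range for node n
        start += rows
    return tiles

# A 1080-row frame split across a 4-node cluster:
print(split_rows(1080, 4))  # [(0, 270), (270, 540), (540, 810), (810, 1080)]
```

Each node renders only its own row range, and the master node concatenates the bands into the full frame, mirroring the pixel-gathering step described above.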

The examples above are only the tip of the iceberg. Many more applications, including astrophysical simulation, weather simulation, engineering design, financial modeling, securities simulation, and film special effects, demand abundant computing resources. The growing need for computing power needs no further elaboration.

How Linux and clusters have changed HPC

Before cluster-based computing emerged, the typical supercomputer was a vector processor; because such machines used specialized hardware and software, the cost typically exceeded one million dollars.

With the emergence of Linux and other free, open source cluster software components, and with improvements in the processing power of commodity hardware, this situation has changed dramatically. You can build a powerful cluster at a modest cost and add nodes as needed.

The GNU/Linux operating system (Linux) has been widely adopted in clusters. Linux runs on a great deal of hardware, and high-quality compilers and other software, such as parallel file systems and MPI implementations, are freely available for it. With Linux you can also tailor the kernel to your workload. Linux is an excellent platform for building HPC clusters.

Understanding hardware: vector machines and clusters

To understand HPC hardware, it is useful to compare vector computing with cluster computing. The two are competing technologies (the Earth Simulator is a vector supercomputer and, at the time of writing, was still among the ten fastest machines).

Fundamentally, both vector and scalar processors execute instructions on clock cycles; what sets vector processors apart is that they are optimized for vector operations (such as matrix multiplication), which are very common in high-performance computing. To illustrate, suppose you have two double-precision arrays, A and B, and want to create a third array X such that X[i] = A[i] + B[i].

Any floating-point operation, such as addition or multiplication, is carried out in several steps:

  • Adjusting the exponents
  • Adding the significands
  • Normalizing and rounding the result

A vector processor uses a technique called pipelining to process these steps in parallel internally. Suppose a floating-point addition takes six steps (as in IEEE arithmetic hardware), as shown in Figure 2:


Figure 2. Six-level pipeline in IEEE arithmetic hardware
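The payoff of pipelining is easy to see with simple arithmetic: without it, each addition must pass through all six stages before the next can begin; with it, once the pipeline is full, a new result completes every cycle. A sketch of that cycle count (the six-stage figure comes from the example above):

```python
def cycles(n_ops, stages, pipelined):
    """Clock cycles to complete n_ops floating-point adds of `stages` steps each."""
    if pipelined:
        # `stages` cycles to fill the pipeline, then one result per cycle.
        return stages + (n_ops - 1)
    return n_ops * stages  # each operation occupies the unit for all its stages

print(cycles(1000, 6, pipelined=False))  # 6000
print(cycles(1000, 6, pipelined=True))   # 1005
```

For long vectors the pipelined time approaches one result per cycle, which is why vector hardware excels at exactly the array operations described above.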

 

