IBM PureApplication System: a boxed cloud computing system


IBM PureApplication System (W1500 and W1700, v1.0 and v1.1) is a boxed cloud computing system with the hardware and software needed to deploy and run workloads in the cloud, providing all the functionality required to add a private cloud environment to an enterprise data center. This article outlines the hardware contained in PureApplication System and uses the system console to view the individual components.

This article is part 1 of a three-part series that describes the hardware and software foundation that PureApplication System provides for hosting an application's runtime environment:

- Hardware: This article, which you are reading, describes the hardware that makes up PureApplication System.
- Virtualized hardware: Best practices for using infrastructure as a service in IBM PureApplication System shows how PureApplication System virtualizes its hardware to deliver infrastructure as a service (IaaS).
- Runtime environment: Managing the application runtime environment in IBM PureApplication System describes how the virtualized hardware in PureApplication System is used to implement the application runtime environment that workloads are deployed to.

Each article builds on the previous one to fully explain these fundamentals.

PureApplication System Types

There are currently four types of PureApplication System:

- W1500 Small Rack: small-rack W1500 with 32, 64, 96, or 128 Intel CPU cores.
- W1500 Large Rack: large-rack W1500 with 64, 96, 128, 160, 192, 224, 384, or 608 Intel CPU cores.
- W1700 Small Rack: small-rack W1700 with 32, 64, 96, or 128 POWER CPU cores.
- W1700 Large Rack: large-rack W1700 with 64, 96, 128, 160, 192, 224, 384, or 608 POWER CPU cores.

Table 1 shows a quick comparison of these hardware types. The management node abbreviations in Table 1 are as follows:

- PSM: PureSystems Manager
- VSM: Virtualization System Manager
- FSM: PureFlex System Manager

Table 1. PureApplication System hardware comparison

|                   | W1500 Small Rack | W1700 Small Rack | W1500 Large Rack | W1700 Large Rack |
|-------------------|------------------|------------------|------------------|------------------|
| Rack              | 25U, 1.3 m (19") | 25U, 1.3 m (19") | 42U, 2.0 m (19") | 42U, 2.0 m (19") |
| Chassis           | 1 Flex chassis   | 1 Flex chassis   | 3 Flex chassis   | 3 Flex chassis   |
| Processors        | Intel Xeon E5-2670, 8-core | POWER7+, 8-core | Intel Xeon E5-2670, 8-core | POWER7+, 8-core |
| Compute nodes     | 2, 4, 6, or 8    | 2, 3, or 4       | 4, 6, 8, 10, 12, 14, 24, or 38 | 2, 3, 4, 5, 6, 7, 12, or 19 |
| CPU cores         | 32, 64, 96, or 128 | 32, 64, 96, or 128 | 64, 96, 128, 160, 192, 224, 384, or 608 | 64, 96, 128, 160, 192, 224, 384, or 608 |
| Memory            | 0.5, 1.0, 1.5, or 2.0 TB RAM | 0.5, 1.0, 1.5, or 2.0 TB RAM | 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 6.0, or 9.5 TB RAM | 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 6.0, or 9.5 TB RAM |
| Storage nodes     | 1 V7000 controller, 1 V7000 expansion | 1 V7000 controller, 1 V7000 expansion | 2 V7000 controllers, 2 V7000 expansions | 2 V7000 controllers, 2 V7000 expansions |
| Storage drives    | 6 SSD, 40 HDD    | 6 SSD, 40 HDD    | 16 SSD, 80 HDD   | 16 SSD, 80 HDD   |
| Storage capacity  | 2.4 TB SSD, 24.0 TB HDD | 2.4 TB SSD, 24.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD |
| Management nodes  | 2 PSM, 2 VSM     | 2 PSM, 2 FSM     | 2 PSM, 2 VSM     | 2 PSM, 2 FSM     |
| Network           | 2 IBM RackSwitch G8264 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch G8264 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch G8264 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch G8264 64-port 10 Gb Ethernet switches |
| Power supply      | 4 power distribution units (PDUs) | 4 power distribution units (PDUs) | 4 power distribution units (PDUs) | 4 power distribution units (PDUs) |
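The core counts in Table 1 follow directly from the per-node figures described later in this article: each Intel compute node provides 16 physical cores and each POWER compute node provides 32. Below is a minimal Python sketch of that arithmetic, using only the figures from the table.

```python
# Derive the compute-node count behind each CPU-core configuration in Table 1.
# Assumes 16 physical cores per Intel (W1500) node and 32 per POWER (W1700) node,
# as described in the "System components" section later in this article.

CORES_PER_NODE = {"W1500": 16, "W1700": 32}

def nodes_for(model: str, total_cores: int) -> int:
    """Return the number of compute nodes behind a given total core count."""
    per_node = CORES_PER_NODE[model]
    if total_cores % per_node:
        raise ValueError(f"{total_cores} is not a multiple of {per_node}")
    return total_cores // per_node

# Large-rack configurations from Table 1.
for cores in (64, 96, 128, 160, 192, 224, 384, 608):
    print(f"{cores} cores: {nodes_for('W1500', cores)} Intel nodes "
          f"or {nodes_for('W1700', cores)} POWER nodes")
# e.g. 608 cores -> 38 Intel nodes (W1500) or 19 POWER nodes (W1700)
```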

Each model is upgradeable within its type, up to the W1500-128, W1700-128, W1500-608, and W1700-608 respectively. Upgrades can be performed without powering down the system.

System hardware

Let's start with a look at the entire PureApplication System rack for each hardware type.

Infrastructure diagram

You can view a real-time hardware view of a specific PureApplication System in the integrated console. To do this, open the infrastructure map: select System Console > Hardware > Infrastructure Map, as shown in Figure 1. To access the Hardware menu, a user must be granted the hardware administration role, as described in Managing administrative access in IBM PureApplication System.

Figure 1. System Hardware Menu







The infrastructure map can be displayed as an interactive picture, as shown in Figure 2, or as a hierarchical tree of components.

Figure 2. W1500-96 infrastructure map (graphical view)







Hardware infrastructure

Similar to the infrastructure map graphical view shown in Figure 2, Figure 3 illustrates the hardware component layout in the W1500 Large Rack system.

Figure 3. IBM PureApplication System W1500-608 hardware







As shown in the infrastructure map and in Figure 3, the W1500 Large Rack system is a large rack (42U cabinet, 2.015 meters high, 644 mm wide, 1100 mm deep, total weight 1088 kilograms when fully loaded) and includes the following main components, from top to bottom:

- Top-of-rack (ToR) switches: A pair of IBM System Networking RackSwitch G8264 64-port 10 Gb Ethernet switches. To view them, go to System Console > Hardware > Network Devices. The Network Devices page also lists the network and SAN switches in each chassis. To view the details of the network configuration, go to System Console > System > Customer Network Configuration.
- Storage nodes: A pair of IBM Storwize V7000 storage units, each a pairing of a controller node with an expansion node, clustered so that the controllers manage the two units as a single SAN. To view them, go to System Console > Hardware > Storage Devices.
- Flex chassis: The system contains three IBM Flex System Enterprise Chassis Type 7893 chassis, each 10U high (numbered 3, 2, and 1, with chassis 1 at the bottom). A chassis is essentially a set of expansion bays for compute nodes; inserting a compute node into the chassis is like sliding a drawer into a filing cabinet. When you insert a node into a bay, the connectors and rails on the node mate with those in the chassis. This design makes it possible to replace a compute node while the system is running. To view the chassis, go to System Console > Hardware > Flex Chassis.
- Service laptop: A laptop connected to the system, stored in a 1U drawer between chassis 2 and 3 in the rack. IBM uses it to service the system.
- Power distribution units (PDUs): The rack contains four PDUs, each plugged into a separate external power feed. These units in turn distribute power to the chassis power modules, the switches, and the storage nodes.

Each Flex chassis contains several components:

- Compute nodes: Each chassis contains 14 compute node bays, arranged as seven rows in two columns. Each bay holds one Intel compute node; in the W1700, two horizontally adjacent bays together hold one POWER compute node (see Figure 4). To view the system's compute nodes, go to System Console > Hardware > Compute Nodes.
- Management nodes: Chassis 1 and chassis 2 each use two bays to host management nodes. The Virtualization System Manager (VSM), hosted in node bay 1, manages the hypervisors on the compute nodes; in the W1700, this node is the PureFlex System Manager (FSM). See Figure 4 and the Management nodes section. The PureSystems Manager (PSM), hosted in node bay 2, hosts IBM Workload Deployer (IWD). To view the system's management nodes, go to System Console > Hardware > Management Nodes.
- Network switches: Each chassis contains a pair of 66-port IBM Flex System Fabric EN4093 10Gb Scalable Switch Ethernet switches, which connect its compute nodes. The chassis switches connect to the top-of-rack switches through an aggregated backbone of 10 Gb Ethernet links.
- SAN switches: Each chassis contains a pair of 48-port IBM Flex System FC5022 16Gb SAN Scalable Switch Fibre Channel switches, which connect its compute nodes to the system's shared storage. To view the network and SAN switches in each chassis, go to System Console > Hardware > Network Devices.
- Power modules: Each chassis contains six power supplies, three on each side. The power supplies are redundant, so the chassis and its compute nodes keep running even if one power module fails.
- Cooling devices: Ten fans that regulate the temperature of the hardware.

POWER model

The hardware in the W1700 Large Rack is very similar to the hardware in the corresponding W1500 model. Figure 4 shows the hardware component layout in the W1700 Large Rack system.

Figure 4. IBM PureApplication System W1700-608 hardware







This hardware is very similar to the hardware in the W1500 Large Rack; the main difference is that the W1700 contains POWER compute nodes rather than Intel compute nodes. A POWER compute node contains twice as many cores and twice as much memory as an Intel compute node. Its enclosure is also twice as wide, so it occupies two horizontally adjacent bays in the chassis, which means each chassis holds half as many POWER compute nodes. Apart from the compute nodes, the storage and network are the same.
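The node counts follow from simple bay arithmetic: each Flex chassis has 14 half-width bays, four bays across the rack are given over to management nodes, an Intel node takes one bay, and a POWER node takes two adjacent bays. A small Python sketch of that calculation, assuming those figures, is shown below.

```python
# Maximum compute-node capacity per rack, derived from the bay layout in the text:
# 14 half-width bays per chassis, 4 bays used by management nodes (2 PSM + 2 VSM/FSM),
# Intel nodes are half-width (1 bay), POWER nodes are full-width (2 bays).

def max_compute_nodes(chassis_count: int, node_width_bays: int) -> int:
    bays = 14 * chassis_count    # total half-width bays in the rack
    management_bays = 4          # 2 PSM + 2 VSM/FSM nodes (Table 1)
    return (bays - management_bays) // node_width_bays

print(max_compute_nodes(3, 1))   # W1500 large rack: 38 Intel nodes
print(max_compute_nodes(3, 2))   # W1700 large rack: 19 POWER nodes
print(max_compute_nodes(1, 1))   # W1500 small rack: 10 compute node bays
print(max_compute_nodes(1, 2))   # W1700 small rack: up to 5 POWER nodes
```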

Another difference is that the virtualization management node is the PureFlex System Manager (FSM) rather than the Virtualization System Manager (VSM).

Smaller racks

The hardware in the W1500 Small Rack is a subset of the hardware in the W1500 Large Rack. Figure 5 shows the hardware component layout in the W1500 Small Rack system.

Figure 5. IBM PureApplication System W1500-64 hardware







As shown in Figure 5, the W1500 Small Rack is a small rack (25U cabinet, 1.267 meters high, 605 mm wide, 997 mm deep, weighing 385 kilograms when fully loaded) and contains the same main component types as its larger counterpart:

- 2 top-of-rack switches
- 1 service laptop
- 4 power modules
- 1 storage unit (one controller/expansion pair)
- 1 Flex chassis, with 4 management nodes and up to 10 compute node bays

This hardware is very similar to the hardware in the W1500 Large Rack, with a few differences:

- A shorter, slightly narrower cabinet (25U instead of 42U)
- 1 chassis instead of three
- 10 compute node bays
- 1 storage unit instead of two
- 4 power modules, stacked horizontally between the storage unit and the service laptop

The compute nodes in all W1500 models are identical, as are the storage and networking they use.

The hardware in the W1700 Small Rack (the POWER small rack) is very similar to the hardware in the W1500 Small Rack (the Intel small rack). The main difference is that it contains POWER compute nodes rather than Intel compute nodes, and the single chassis can hold up to 5 POWER compute nodes.

Hardware resiliency

A basic theme of the system hardware is that, to achieve resiliency, components are redundant so that there is no single point of failure. The system not only contains multiple compute nodes, but also two pairs of management nodes, two system network switches, two storage units, and four PDUs. The Large Rack system contains three Flex chassis, and each chassis contains a pair of network switches, a pair of SAN switches, and six power supplies. The network and SAN adapters in each compute node have multiple ports for increased bandwidth and resiliency.

The hardware also isolates the management of the system from user workloads. The management nodes, the PureSystems Manager and the Virtualization System Manager or PureFlex System Manager, are hosted in their own compute nodes. This isolates them from user workloads so that system management functions run on dedicated hardware, and it removes most of the management overhead from the standard compute nodes, leaving their resources available for user workloads. Each type of management node is deployed as a pair, with one node active and the other on standby, so that the pair can respond to a failure.

System components

Let's explore the hardware components in more detail.

W1500 Compute Node

A compute node, often referred to as an integrated technology element (ITE) and sometimes as a "blade" (strictly, for nodes that are not dedicated as management nodes), is a very compact computer. The W1500 system contains Intel compute nodes, specifically IBM Flex System x240 compute nodes, each of which contains the following components:

- CPU: An Intel compute node contains a two-processor, 16-core chipset: two 8-core 2.6 GHz Intel Xeon E5-2670 115 W processors, for a total of 16 physical cores, which the hypervisor uses as 32 logical cores (that is, the hypervisor can run 32 concurrent threads on these cores). Of these 32 logical cores, 28 are available for user workloads (see the sketch after this list).
- Memory: An Intel compute node contains 256 GB of RAM (16 × 16 GB, 1333 MHz, DDR3, LP RDIMMs, 1.35 V).
- Storage: The compute node's SAN interface card is an IBM Flex System FC3172 2-port 8 Gb FC Adapter Fibre Channel adapter. The node also contains two 2.5-inch hard drives, which the system generally does not use.
- Network: The compute node's network interface card is a 4-port IBM Flex System CN4054 10 Gb Virtual Fabric Adapter Ethernet adapter.
- Enclosure: The Intel compute node's enclosure is half-width, which means each compute node fits in a single chassis bay and two compute nodes sit side by side in adjacent bays (see Figure 3 and Figure 5).
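As a quick check on the CPU figures above, the sketch below works through the per-node arithmetic: two 8-core sockets give 16 physical cores, hyper-threading presents them as 32 logical cores, and 28 of those are left for user workloads. The rack-level total at the end is illustrative arithmetic only.

```python
# Per-node core arithmetic for the W1500 Intel compute node, from the figures above.
SOCKETS, CORES_PER_SOCKET, THREADS_PER_CORE = 2, 8, 2

physical_cores = SOCKETS * CORES_PER_SOCKET        # 16 physical cores
logical_cores = physical_cores * THREADS_PER_CORE  # 32 logical cores via hyper-threading
user_cores = logical_cores - 4                     # 28 available for user workloads (per the text)

print(physical_cores, logical_cores, user_cores)   # 16 32 28

# Illustrative rack-level total: a fully populated W1500 large rack has 38 such nodes.
print(38 * user_cores)                             # 1064 logical cores for user workloads
```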

Figure 6 illustrates these components in the compute node.

Figure 6. Compute node components




W1700 Compute Node

The W1700 system contains POWER compute nodes, specifically IBM Flex System p460 compute nodes, each of which contains the following components:

- CPU: A POWER compute node contains a four-processor, 32-core chipset: four 8-core 3.61 GHz POWER7+ processors, for a total of 32 physical cores, which the hypervisor uses as 128 logical cores (that is, 128 concurrent threads). Of these 128 logical cores, 116 are available for user workloads.
- Memory: A POWER compute node contains 512 GB of RAM (32 × 16 GB, 1066 MHz, DDR3, LP RDIMMs, 1.35 V).
- Storage: The same type of adapter as in the Intel compute node, but there are two of them: the SAN interface cards are two IBM Flex System FC3172 2-port 8 Gb FC Adapter Fibre Channel adapters. The node also contains two 2.5-inch hard drives, which the system generally does not use.
- Network: Similar to the Intel compute node, but with two adapters: the network interface cards are two IBM Flex System EN4054 4-port 10 Gb Ethernet adapters.
- Enclosure: The POWER compute node's enclosure is full-width, twice the width of the Intel compute node, so each POWER compute node occupies a pair of horizontally adjacent chassis bays (see Figure 4).

Compared to the W1500 compute node, the W1700 compute node contains twice the cores and twice the memory. Because it is also twice the size, a rack holds half as many of them.

Each compute node has access to the system resources shared by all compute nodes: storage and network.

Shared resources: Storage

The PureApplication System large rack provides 6.4 TB of solid-state drive (SSD) storage and 48.0 TB of hard disk drive (HDD) storage, of which 4.8 TB and 43.2 TB, respectively, are usable:

- The storage is housed in a cluster of two IBM Storwize V7000 storage units.
- Each unit contains a controller node (also called an enclosure) paired with an expansion node.
- Each node contains two node canisters, configured as an active/standby pair. The active canister controls access to the node's storage.
- The disks in the four federated nodes are 2.5-inch SSDs and 2.5-inch HDDs.
- The controller includes IBM System Storage Easy Tier storage management software.
- The storage is organized into redundant arrays of independent disks (RAID), using RAID 5 arrays for redundancy. Each storage unit contains 40 HDDs and 8 SSDs. Of the 40 HDDs, one is reserved as a hot spare, and the remaining 39 are organized into three 13-disk arrays, each striped as 12 data segments plus one parity segment. The 8 SSDs are one hot spare plus a 7-disk array striped as 6 data segments plus one parity segment (the sketch after this list works through the resulting capacities).
- The compute nodes access the storage as a SAN through their 2-port 8 Gb Fibre Channel adapters.
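The usable capacities quoted above (4.8 TB of SSD and 43.2 TB of HDD) follow from this RAID 5 layout. The sketch below works through the arithmetic, inferring per-drive sizes from the raw capacities in Table 1.

```python
# Usable storage in the large rack, derived from the RAID 5 layout described above.
# Per-drive sizes are inferred from the Table 1 raw capacities: 6.4 TB across 16 SSDs
# and 48.0 TB across 80 HDDs in the two clustered V7000 units.

ssd_size = 6.4 / 16      # TB per SSD
hdd_size = 48.0 / 80     # TB per HDD

# Per storage unit: 8 SSDs = 1 hot spare + one 7-disk array (6 data + 1 parity);
# 40 HDDs = 1 hot spare + three 13-disk arrays (12 data + 1 parity each).
ssd_usable_per_unit = 6 * ssd_size
hdd_usable_per_unit = 3 * 12 * hdd_size

units = 2                # two clustered Storwize V7000 units in the large rack
print(round(units * ssd_usable_per_unit, 1))   # 4.8  TB usable SSD
print(round(units * hdd_usable_per_unit, 1))   # 43.2 TB usable HDD
```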

Shared resources: Network

PureApplication System's internal physical network is accessed through the two top-of-rack (ToR) switches, two IBM System Networking RackSwitch G8264 64-port Ethernet switches, which provide the bandwidth between the system and the external network. Their configuration is displayed on the Customer Network Configuration page (go to System Console > System > Customer Network Configuration). The ports are used as follows:

- Ports 41-56 (16 ports in all) on each switch are used to connect to the data center network. Each port is 10/1 Gb Ethernet. The built-in connector type is copper, but each port can also take a fibre-optic connector or a direct attach cable (DAC). Each pair of corresponding ports on the two switches should be link-aggregated for high availability.
- Port 63 (on either switch) connects to the service laptop; IBM uses this port to bring up and service the system.
- Port 64 (link-aggregated) is the management LAN port.
- The other ports on the system switches provide the Ethernet connections between the network switches in the three chassis for the application and management networks, and connect the two ToR switches to each other.

Management Nodes

The PureSystems Manager (PSM) hosts not only the Workload Deployer, which deploys workloads, but also the system's management services. These services are accessible through three interfaces:

- Console: The integrated console, a web GUI.
- REST API: A Representational State Transfer application programming interface.
- CLI: A command-line interface.

These interfaces reach the PSM through its IP address, which is the floating management IP address displayed on the Customer Network Configuration page (System Console > System > Customer Network Configuration). Opening a web browser at this IP address opens the system's integrated console.
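As an illustration of the REST interface, the sketch below fetches a resource from the PSM over HTTPS at the floating management IP address. The endpoint path, credentials, and response handling are hypothetical placeholders, not the documented PureApplication REST API; consult the product documentation for the actual resource URIs.

```python
# Hypothetical sketch of calling the PSM REST interface at the floating management
# IP address. The resource path and credentials below are illustrative placeholders.
import requests

PSM_IP = "198.51.100.10"                 # floating management IP (example value)
BASE = f"https://{PSM_IP}"

session = requests.Session()
session.auth = ("admin", "password")     # placeholder credentials
session.verify = False                   # illustration only; the appliance may use a self-signed certificate

# Placeholder resource path; the real URI comes from the product documentation.
resp = session.get(f"{BASE}/resources/computeNodes", timeout=30)
resp.raise_for_status()

for node in resp.json():                 # assumes the endpoint returns a JSON list
    print(node)
```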

The virtualization management nodes, the Virtualization System Manager (VSM, on the W1500) and the PureFlex System Manager (FSM, on the W1700), manage the hypervisors. The hypervisor plays the same role on the W1500 and the W1700, but it works somewhat differently on the two models: because the models use different chipsets, Intel and POWER, they run different hypervisor software, VMware ESXi and PowerVM. The VSM and FSM run on the same hardware, but they run different hypervisor management software, VMware vCenter and PowerVM VMControl respectively.

Table 2 summarizes the differences between the two types of virtualization management nodes.

Table 2. Virtualization Management Node Comparison

|                                    | W1500 (Intel)                        | W1700 (POWER)                   |
|------------------------------------|--------------------------------------|---------------------------------|
| Processor                          | Intel Xeon                           | POWER7+                         |
| Hypervisor software                | VMware vSphere Hypervisor (ESXi)     | IBM PowerVM                     |
| Virtualization management node     | Virtualization System Manager (VSM)  | PureFlex System Manager (FSM)   |
| Hypervisor management software     | VMware vCenter Server                | PowerVM VMControl               |

Despite these differences, the PureSystems Manager (PSM) uses the hypervisor management software in the same way on both models.

Management LAN Port

The management LAN port enables administrators to connect to the PureSystems Manager (PSM), including the integrated console. The Customer Network Configuration page specifies the management port, which is always port 64 (link-aggregated) on the top-of-rack switches. This management port is a member of the customer management network listed in Table 3. The top-of-rack switches are configured with the customer management network's VLAN ID in the VLAN domain of the aggregated port 64 configuration.

Table 3. Customer Management Network

| Name                | Network name | VLAN ID                   |
|---------------------|--------------|---------------------------|
| Customer management | CUSTMGMT     | Specified by the customer |

The customer management network can use any available VLAN ID chosen by the network administrator. The VLAN needs to be defined on the external network to support administrator access to the PSM.

Management Networks

The system requires three VLANs to manage its components internally. The Customer Network Configuration page lists them as internal network VLANs. Each management VLAN is also listed, along with the other VLANs, on the Virtual Networks page (System Console > Hardware > Virtual Networks). Table 4 lists these management VLANs.

Table 4. Management networks

| Name     | Network name | VLAN ID |
|----------|--------------|---------|
| Mobility | VMOTION      | 1358    |
| Console  | CONSOLE      | 3201    |
| Admin    | MERION       | 4091    |

To ensure that these three VLAN IDs are unique on the network, they should be reserved so that no other VLANs use them. At a minimum, external VLANs with these IDs will not be able to connect to the system, because the top-of-rack switches block their traffic.

In addition to these system-wide management networks, each cloud group (a PureApplication System virtualization feature) also requires its own management VLAN. These VLAN IDs should also be reserved, or they will at least be blocked by the top-of-rack switches.
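One simple way to apply this reservation rule is to check any proposed data-center VLAN ID against the system's reserved VLANs before using it. A minimal sketch follows, using the IDs from Table 4 plus illustrative cloud-group VLAN IDs.

```python
# Check a proposed data-center VLAN ID against the VLANs the system reserves:
# the three management VLANs from Table 4 plus any cloud-group management VLANs
# you have defined (the cloud-group IDs below are illustrative placeholders).

SYSTEM_MANAGEMENT_VLANS = {1358, 3201, 4091}   # Mobility, Console, Admin (Table 4)
cloud_group_vlans = {2001, 2002}               # example values; use your own

def vlan_is_free(vlan_id: int) -> bool:
    """True if the VLAN ID does not collide with a reserved management VLAN."""
    return vlan_id not in SYSTEM_MANAGEMENT_VLANS | cloud_group_vlans

print(vlan_is_free(3201))   # False: reserved for the console network
print(vlan_is_free(1500))   # True (assuming it is not otherwise in use)
```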

Application Network

The Customer Network Configuration page also enables administrators to define the VLANs that client workloads use to communicate with each other, and to associate each application VLAN with a port or link aggregation on the top-of-rack switches. When a configured VLAN is added to or removed from a switch, the system takes a few minutes to reconfigure its top-of-rack and chassis network switches, after which it recognizes the VLAN changes.

These application VLANs must be defined on the external network to support communication between applications that do not run on the system (for example, client GUIs and enterprise databases) and applications that run as workloads on the system.

Conclusion

This article reviewed the hardware contained in PureApplication System. It introduced the main hardware components, described the details of and relationships among the individual components, and showed how to find them in the integrated console. With this information, you now have a better understanding of the hardware in PureApplication System.
