IBM Storwize V7000 Introduction (1)


First, the definition of the Storwize V7000 from the Redbooks: the IBM Storwize V7000 is a clustered, scalable, midrange storage system that can also act as an external virtualization device.

The biggest selling point of the V7000 is virtualization. Storage virtualization means gathering storage from different vendors into a single pool for unified management. For example, a Storwize V7000 (or SVC) can sit in front of devices from IBM (DS8000, XIV), NetApp, EMC, and other manufacturers, place all of their capacity into one large pool, and provide copy services and data migration across devices. Operations on the back-end storage are transparent to the front-end host, because the host sees only the Storwize V7000 (or SVC).
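
The idea can be sketched in a few lines. This is a hypothetical illustration, not V7000 code: back-end LUNs from any mix of vendors become MDisks in one pool, and the host-facing volume draws capacity from that pool without knowing which vendor provides it.

```python
class MDisk:
    """A back-end LUN presented by some vendor's array."""
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.capacity_gb = capacity_gb

class StoragePool:
    """Aggregates MDisks from any mix of vendors into one capacity pool."""
    def __init__(self):
        self.mdisks = []

    def add(self, mdisk):
        self.mdisks.append(mdisk)

    def total_capacity_gb(self):
        return sum(m.capacity_gb for m in self.mdisks)

pool = StoragePool()
pool.add(MDisk("IBM DS8000", 2000))
pool.add(MDisk("NetApp", 1000))
pool.add(MDisk("EMC", 500))

# The host only ever sees a volume carved from the pool, never the vendors.
print(pool.total_capacity_gb())  # 3500
```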

[Image: http://s3.51cto.com/wyfs02/M00/58/2B/wKioL1SrWYKRqhoNAAIBgTK0B-c284.jpg]

When it comes to virtualization, IBM offers not only the Storwize V7000 but also the SVC (SAN Volume Controller). Both run the same code; the difference is that the SVC is packaged as a server, while the Storwize V7000 is packaged as a storage system. Shown above are the components of a Storwize V7000 system; like the DS3k/4k/5k family, it has a control enclosure and expansion enclosures.

[Image: http://s3.51cto.com/wyfs02/M01/58/2B/wKioL1SrWYPh34tcAAIlfRNqGyc799.jpg]

Looking at the Storwize V7000 hardware, front and rear views: it looks like a DS3512, but the Storwize V7000's controllers are called node canisters, whereas the DS3512's are called controllers. Each Storwize V7000 has one or two (optional) control enclosures; each control enclosure contains two node canisters, disks, and two power supplies.

[Image: http://s3.51cto.com/wyfs02/M02/58/2B/wKioL1SrWYSj_AIOAADfvcA6bjg528.jpg]

[Image: http://s3.51cto.com/wyfs02/M00/58/2B/wKioL1SrWYXRSrqcAAHFbEHTzsM718.jpg]

[Image: http://s3.51cto.com/wyfs02/M01/58/2B/wKioL1SrWYejUPeWAAOPZmCtXVU807.jpg]


Important Terminology:

Control Enclosure

A hardware unit that includes the chassis with a midplane for connection of node canisters, drives, and power supplies with batteries.

Node canister

(Comparable to a DS5k controller.)

A hardware unit that includes the node electronics, fabric and service interfaces, serial-attached SCSI (SAS) expansion ports, and direct connections to the internal drives in the enclosure.

Expansion Enclosure

A hardware unit that includes the chassis with a midplane for connection of expansion canisters, drives, and power supplies without batteries.

Expansion canister

(Comparable to the ESM of a DS5k expansion enclosure.)

A hardware unit that includes the electronics that provide serial-attached SCSI (SAS) connections to the internal drives in the enclosure, plus SAS expansion ports for attaching additional expansion enclosures.

Cluster

Both node canisters in a control enclosure.

Managed Disk (MDisk)

A SCSI logical unit (LUN) built from an internal or external RAID array.

Storage Pool

A collection of MDisks that provides real capacity for volumes. (Called a Managed Disk Group, or MDG, on SVC.)

Volume

What the host operating system sees as a SCSI disk drive. (Called a Virtual Disk, or VDisk, on SVC.)


Cluster
A cluster consists of 2 to 8 node canisters. All configuration, monitoring, and service processes are performed at the cluster level, and the configuration is replicated to all node canisters in the cluster. The cluster is assigned only one IP address rather than one IP per node. One node is selected as the configuration node canister, the only node that activates the cluster IP. If this node fails, a new configuration node is elected, and it takes over the IP address.
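
The failover of the configuration node can be sketched as follows. This is a hypothetical model, not V7000 firmware, and the election rule (pick the lowest-named survivor) is an assumption made just for illustration:

```python
class Cluster:
    def __init__(self, nodes, cluster_ip):
        self.nodes = set(nodes)              # e.g. {"node1", "node2"}
        self.cluster_ip = cluster_ip         # single IP for the whole cluster
        self.config_node = sorted(nodes)[0]  # assumed rule: lowest name wins

    def node_failed(self, node):
        self.nodes.discard(node)
        if node == self.config_node and self.nodes:
            # Re-elect: a surviving node activates the cluster IP.
            self.config_node = sorted(self.nodes)[0]

c = Cluster(["node1", "node2"], "10.0.0.10")
print(c.config_node)   # node1 holds the cluster IP

c.node_failed("node1")
print(c.config_node)   # node2 takes over the same cluster IP
```

The key point the sketch shows: the IP address never changes, only which node answers on it.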

I/O Groups: a pair of node canisters is called an I/O group.

Normally, I/O for a given volume is handled by the same node in the I/O group. As with the DS3k/4k/5k, a volume has only one preferred owner (one of the two node canisters, node1 or node2), and the two nodes work in failover mode: when one node goes down, the other node continues working with no impact on the host.

[Image: http://s3.51cto.com/wyfs02/M01/58/2E/Wkiom1srwmqws6bxaahvsw-fjca901.jpg]

A node is an SVC (SAN Volume Controller) node: an IBM System x server running Linux that provides virtualization and copy services. Nodes are combined in pairs to form a cluster; a cluster can have 1 to 4 node pairs, and each pair is an I/O group. The I/O groups are defined when the cluster is configured. Each node can belong to only one I/O group.

Managed Disk (MDisk)

The host side does not see managed disks; what it sees are logical disks, also known as virtual disks. Managed disks are grouped into managed disk groups. The managed disks that make up a virtual disk must come from the same managed disk group. Each managed disk is divided into multiple extents (16 MB by default), numbered starting at 0 and running to the end of the managed disk.
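
The extent mechanism above is what lets a virtual disk span several managed disks. Here is a minimal sketch, with hypothetical names and a round-robin placement policy assumed for illustration, of carving a volume's extents out of one managed disk group:

```python
EXTENT_MB = 16  # default extent size mentioned above

def allocate_volume(mdisks, volume_mb):
    """Return a list of (mdisk_name, extent_index) pairs, one per volume
    extent, assigned round-robin across the MDisks of a single pool."""
    n_extents = -(-volume_mb // EXTENT_MB)  # ceiling division
    next_free = {m: 0 for m in mdisks}      # next free extent on each MDisk
    layout = []
    for i in range(n_extents):
        m = mdisks[i % len(mdisks)]         # round-robin choice of MDisk
        layout.append((m, next_free[m]))
        next_free[m] += 1
    return layout

layout = allocate_volume(["mdisk0", "mdisk1", "mdisk2"], volume_mb=80)
# 80 MB / 16 MB = 5 extents spread over the three MDisks:
print(layout)
# [('mdisk0', 0), ('mdisk1', 0), ('mdisk2', 0), ('mdisk0', 1), ('mdisk1', 1)]
```

Note that all the MDisks come from one pool, matching the rule that a virtual disk's managed disks must belong to the same managed disk group.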

A quorum disk can be an MDisk or an internal drive: a reserved area used for cluster management that determines which half of the cluster continues to read and write data. The cluster automatically creates quorum disks, occupying very little space on the MDisks. For redundancy, quorum disks exist on three different MDisks, but only one quorum disk is active at a time. On the V7000, internal drives can also be quorum candidates. If there is more than one storage system in the environment, the quorum disks should be spread across multiple systems, so that losing a single system does not lose the quorum disk.

The quorum disk decides which nodes continue to work when part of the cluster goes offline. In this tie-break situation, the first group of nodes to access the quorum disk marks its ownership of the quorum disk and, as a result, continues to operate as the cluster. If the other group of nodes cannot access the quorum disk, or finds it owned by another group, it stops operating as the cluster and does not handle I/O requests.
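
The tie-break can be sketched as a first-claimer-wins lock. This is a hypothetical model of the behavior described above, not V7000 firmware:

```python
class QuorumDisk:
    def __init__(self):
        self.owner = None

    def try_claim(self, group):
        """First group to claim the quorum disk wins; a later group finds
        it already owned and must stop operating as the cluster."""
        if self.owner is None:
            self.owner = group
            return True
        return self.owner == group

quorum = QuorumDisk()

# Suppose the cluster splits and group_a reaches the quorum disk first.
a_survives = quorum.try_claim("group_a")
b_survives = quorum.try_claim("group_b")

print(a_survives)  # True  -> continues to operate as the cluster
print(b_survives)  # False -> stops handling I/O requests
```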

Next, look at how a write request is handled on the V7000:

[Image: http://s3.51cto.com/wyfs02/M02/58/2E/wKiom1SrWMuhSYmAAAGymg3uHoc561.jpg]

The preferred node is defined when the VDisk is created. In this example it is node1, so all of V1's read and write operations are handled by node1.

The host sends a write request for VDisk V1 to node1 (1); node1 copies the write data to node2's cache, then returns write completion to the host (2). Node1 later destages the cached data to disk.



Summary: on a write, the node handling the I/O copies the data to the other node in the I/O group before returning write completion to the host. On a read, as with other storage systems, the preferred node first checks whether the data is in cache; if not, it reads from disk. If one node in the I/O group fails, the other node takes over immediately; because write data is mirrored between the two nodes, a single node failure causes no data loss. The surviving node destages the data in its cache to disk and enters write-through mode, in which all write data is written directly to disk without being cached.
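
The whole write path above can be sketched in one small model. This is a hypothetical illustration of the described behavior, not V7000 code: writes are mirrored to the partner node's cache before completion is acknowledged, and a surviving node destages and drops into write-through mode after its partner fails.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.cache = []              # pending (not yet destaged) writes
        self.partner = None
        self.write_through = False

    def write(self, data, disk):
        if self.write_through:
            disk.append(data)                 # no partner: straight to disk
        else:
            self.cache.append(data)           # cache locally...
            self.partner.cache.append(data)   # ...and mirror to the partner
        return "write complete"               # only now acknowledge the host

    def partner_failed(self, disk):
        # Destage everything in cache, then stop caching new writes.
        disk.extend(self.cache)
        self.cache.clear()
        self.write_through = True

disk = []
n1, n2 = Node("node1"), Node("node2")
n1.partner, n2.partner = n2, n1

n1.write("blockA", disk)   # mirrored: in both caches, not yet on disk
n1.partner_failed(disk)    # node2 dies: destage, enter write-through mode
n1.write("blockB", disk)   # goes straight to disk

print(disk)  # ['blockA', 'blockB']
```

Because "blockA" was in both caches at acknowledgment time, the partner's death loses no data, which is exactly the guarantee the mirrored cache provides.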

This article is from the "star&storage" blog; please keep this source: http://taotao1240.blog.51cto.com/731446/1599615
