High Availability with the Distributed Replicated Block Device

The 2.6.33 Linux kernel introduced a useful new service called the Distributed Replicated Block Device (DRBD). This service mirrors an entire block device to another networked host at run time, permitting the development of high-availability clusters for block data. This article explores the ideas behind DRBD and its implementation in the Linux kernel.

The Distributed Replicated Block Device (DRBD) provides a networked version of data mirroring, classified under the Redundant Array of Independent Disks (RAID) taxonomy as RAID-1. Let's get started with a quick introduction to high availability (HA) and RAID, and then explore the architecture and use of DRBD.

Introducing high availability

High availability is a system design principle for increased availability. Availability, or the measure of a system's operational continuity, is commonly defined as a percentage of uptime within the span of a year. For example, if a given system is available 99% of the time, then its downtime for a year is 3.65 days. The value 99% is usually called two nines. Compare this to five nines (99.999%), and the maximum downtime falls to 5.26 minutes per year. That's quite a difference and requires careful design and high quality to achieve.
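As a quick sanity check on those numbers, downtime per year is simply (1 - availability) multiplied by the length of a year. The one-liner below is a minimal sketch (using awk with the availability figures from the paragraph above) that reproduces them.

$ awk 'BEGIN {
    printf "two nines  (99%%):     %.2f days of downtime per year\n", (1 - 0.99) * 365
    printf "five nines (99.999%%): %.2f minutes of downtime per year\n", (1 - 0.99999) * 365 * 24 * 60
  }'
two nines  (99%):     3.65 days of downtime per year
five nines (99.999%): 5.26 minutes of downtime per year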

One of the most common implementations of HA is redundancy with failover. In this model, for example, you can define multiple paths to a given resource, with the available path being used and the redundant path taking over upon failure. Enterprise-class disk drives illustrate this concept, as they provide two ports of access (compared to one access port for consumer-grade drives).

As I write this, I'm sitting on a Boeing 757. Each wing includes its own jet engine. Although the engines are extremely reliable, one could fail, and the plane could continue to fly safely on the remaining single engine. That's HA (via redundancy), and the same principle applies to many other applications and scenarios.

My first job was for a large defense company building geosynchronous communications satellites. At the core of these satellites was a radiation-hardened computing system responsible for command and telemetry (a satellite's user interface), power and thermal management, and pointing (otherwise known as keeping telephone conversations and television content flowing). For availability, this computing system was a redundant design, with two sets of processors and buses and the ability to switch between a master and a slave if the master was found to be unresponsive. To make a long story short, redundancy in systems design is a common technique to increase availability at the cost of additional hardware (and software).

Linux kernel inclusion

The process of inclusion into the Linux kernel for DRBD started back in July 2007. At that point, DRBD was at version 8.0. Two and a half years later, in December 2009, DRBD entered the mainline 2.6.33 kernel (DRBD version 8.3.7). Today, the 8.3.8 DRBD release is in the current 2.6.35 Linux kernel.

Redundancy in storage

Not surprisingly, using redundancy in storage systems is also common, particularly in enterprise-class designs. It's so common that a standard approach, RAID, exists with a variety of underlying algorithms, each with different capabilities and characteristics.

RAID was first defined in 1987 at the University of California, Berkeley. Traditional RAID levels include RAID-0, which implements striping across disks for performance (but not redundancy), and RAID-1, which implements mirroring across two disks so that two copies of the information exist. With RAID-1, a disk can fail, and the information can still be retrieved from the other copy. Other RAID levels include RAID-5, which implements block-level striping with distributed parity across disks, and RAID-6, which implements block-level striping with double distributed parity. Although RAID-5 can survive the failure of a single drive, RAID-6 can survive two drive failures (though more capacity is consumed by parity information). RAID-1 is simple, but it's wasteful in terms of capacity utilization. RAID-5 and RAID-6 are more frugal with respect to storage capacity, but they typically require additional hardware processing to avoid burdening the processor with the parity calculations. As usual, trade-offs abound. Figure 1 provides a graphical summary of the RAID-0 and RAID-1 schemes.

Figure 1. Graphical summary of RAID schemes for levels 0 and 1
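For a concrete feel for these two levels on Linux, the commands below sketch creating a striped (RAID-0) and a mirrored (RAID-1) array with the mdadm software-RAID tool. The device names (/dev/sdb1, /dev/sdc1, and so on) are placeholders, and mdadm itself is separate from DRBD; it is shown here only as an illustration of the RAID levels just described.

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1   (RAID-0: striping, no redundancy)
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1   (RAID-1: mirroring, survives one disk failure)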

RAID technologies continue to evolve, with a number of so-called nonstandard techniques coming into play. These techniques include Oracle's RAID-Z scheme (which solves RAID-5's write-hole problem); NetApp's RAID-DP (diagonal parity), which extends RAID-6; and IBM's RAID 1E (enhanced), which implements both striping (RAID-0) and mirroring (RAID-1) over an odd number of disks. Numerous other traditional and nontraditional RAID schemes exist: see the links in Resources for details.

DRBD operation

Now, let's look at the basic operation of DRBD before digging into the architecture. Figure 2 provides an overview of DRBD in the context of two independent servers that provide independent storage resources. One of the servers is commonly defined as the primary and the other as the secondary (typically as part of a clustering solution). Users access the DRBD block devices as a traditional local block device or as part of a storage area network or network-attached storage solution. The DRBD software provides synchronization between the primary and secondary servers for user-initiated read and write operations as well as other synchronization operations.

Figure 2. Basic DRBD model of operation

In the active/passive model, the primary node is used for read and write operations by all users. The secondary node is promoted to primary if the clustering solution detects that the primary node is down. Write operations occur through the primary node and are committed both to its local storage and to the secondary node's storage simultaneously (see Figure 3). DRBD supports two modes for write operations, called fully synchronous and asynchronous.

In fully synchronous mode, write operations must be safely stored on both nodes' storage before the write transaction is acknowledged to the writer. In asynchronous mode, the write transaction is acknowledged after the write data is stored on the local node's storage; the replication of the data to the peer node occurs in the background. Asynchronous mode is less safe, because a window exists in which a failure can occur before the data is replicated, but it is faster than fully synchronous mode, which is the safest mode for data protection. Although fully synchronous mode is recommended, asynchronous mode is useful in situations where replication occurs over longer distances (such as over a wide area network for geographic disaster-recovery scenarios). Read operations are performed using local storage (unless the local disk has failed, at which point the secondary storage is accessed through the secondary node).

Figure 3. Read/write operations with DRBD
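In DRBD's configuration, these write modes are selected with the protocol keyword: protocol C is the fully synchronous mode, and protocol A is the asynchronous one. The fragment below is a minimal sketch of how this looks in drbd.conf (the exact placement of the keyword varies slightly between DRBD versions).

resource r0 {
  protocol C;    # fully synchronous: acknowledge only after both nodes have the data on disk
  # protocol A;  # asynchronous: acknowledge after the local write; replicate in the background
  ...
}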

DRBD can also support the active/active model, such that read and write operations can occur at both servers simultaneously, in what's called shared-disk mode. This mode relies on a shared-disk file system, such as the Global File System (GFS) or the Oracle Cluster File System version 2 (OCFS2), which includes distributed lock-management capabilities.
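In DRBD 8.x, this dual-primary setup is typically enabled with the allow-two-primaries option in the resource's net section, sketched below (exact syntax is version-dependent); a cluster file system such as GFS or OCFS2 is still required on top of it.

resource r0 {
  net {
    allow-two-primaries;   # permit both nodes to be primary at once (shared-disk mode)
  }
  ...
}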

DRBD architecture

DRBD is split into two independent pieces: a kernel module that implements the DRBD behaviors and a set of user-space administration applications used to manage DRBD disks (see Figure 4). The kernel module implements a driver for a virtual block device (which is replicated between a local disk and a remote disk across the network). As a virtual disk, DRBD provides a flexible model that a variety of applications can use (from file systems to other applications that can rely on a raw disk, such as a database). The DRBD module implements an interface not only to the underlying block driver (as defined by the disk configuration item in drbd.conf) but also to the networking stack (whose endpoint is defined by an IP address and port number, also in drbd.conf).

Figure 4. DRBD in the Linux architecture
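To make those configuration items concrete, here is a minimal two-node resource definition of the kind that typically lives in /etc/drbd.conf. The host names, backing devices, and addresses are placeholders, and the exact option set depends on your DRBD version.

resource r0 {
  protocol C;
  on node-a {
    device    /dev/drbd0;         # the virtual block device exported by DRBD
    disk      /dev/sdb1;          # the local backing block device
    address   192.168.1.10:7789;  # the network endpoint used for replication
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.11:7789;
    meta-disk internal;
  }
}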

In user space, DRBD provides a set of utilities for managing replicated disks. You use the drbdsetup utility to configure the DRBD module in the Linux kernel and drbdmeta to manage DRBD's metadata structures. A wrapper utility that uses both of these tools is drbdadm, a high-level administration tool that grabs its details from the DRBD configuration file in /etc/drbd.conf. As a front end to the lower-level utilities, drbdadm is the tool most commonly used to manage DRBD.
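As a rough illustration of the drbdadm workflow (the exact syntax for forcing the initial synchronization differs between DRBD 8.3 and 8.4), bringing a resource such as r0 online typically looks something like this:

# drbdadm create-md r0        (initialize DRBD's metadata on the backing device, on both nodes)
# drbdadm up r0               (attach the disk and connect to the peer, on both nodes)
# drbdadm primary --force r0  (on one node only: promote it and start the initial sync)
# cat /proc/drbd              (watch the connection state and synchronization progress)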

Using the disk model, DRBD exports a special device (/dev/drbdX) that you can use just like a regular disk. Listing 1 demonstrates building a file system and mounting the DRBD device for use by the host (though it omits other necessary configuration steps, which are referenced in the Resources section).

Listing 1. Building and mounting a file system on a primary DRBD disk

# mkfs.ext3 /dev/drbd0
# mkdir /mnt/drbd
# mount -t ext3 /dev/drbd0 /mnt/drbd

You can use the virtual disk that DRBD provides like any other disk, with the replication occurring transparently underneath. Now, take a look at some of the major features of DRBD, including its ability to self-heal.

DRBD major features

Although the idea of a replicated disk is conceptually simple (and its development relatively straightforward), there are inherent complexities in a robust implementation. For example, replicating blocks to a networked drive is fairly simple, but handling failures and transient outages (and the resulting synchronization of the drives) is where the real solution begins. This section describes the major features that DRBD provides, including the variety of failure models it supports.

Replication modes

Earlier, this article introduced the various methods for replicating data between nodes (two in particular: fully synchronous and asynchronous). DRBD supports a variation that provides a bit more data protection than asynchronous mode at a slight cost in performance. The memory (or semi-) synchronous mode is a blend of the synchronous and asynchronous behaviors. In this mode, the write operation is acknowledged after the data is stored on the local disk and mirrored to the peer node's memory. This mode provides more protection, because the data has been mirrored to another node, just to volatile memory instead of non-volatile disk. It is still possible to lose data (for example, if both nodes fail), but failure of the primary node alone will not cause data loss, because the data has already been replicated.
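In drbd.conf, this memory-synchronous variant corresponds to protocol B, completing the mapping sketched earlier (again, a minimal fragment; keyword placement depends on the DRBD version).

resource r0 {
  protocol B;   # memory synchronous: acknowledge once the peer holds the data in RAM
  ...
}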

Online device verification

DRBD permits the verification of local and peer devices in an online fashion (that is, while input/output is occurring). Verification means confirming that the local and remote disks are replicas of one another, which can be a time-consuming operation. Rather than move data between nodes to validate it, DRBD takes a much more efficient approach. To preserve bandwidth between the nodes (likely a constrained resource), DRBD moves cryptographic digests (hashes) of the data instead of the data itself. In this way, a node computes the hash of a block; transfers the much smaller signature to the peer node, which also calculates the hash; and then the two results are compared. If the hashes are the same, the block is properly replicated. If the hashes differ, the out-of-date block is marked as out of sync, and subsequent synchronization ensures that the block is brought back into sync.
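In practice, this feature is typically enabled by naming a digest algorithm in the resource's net section and then triggering a verification run with drbdadm. The sketch below assumes a resource named r0 and a kernel with SHA-1 support in the crypto API.

In drbd.conf:
  net { verify-alg sha1; }     (digest algorithm used for online verification)

Then, on one node:
# drbdadm verify r0            (compare hashes block by block; differences are logged and marked out of sync)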

Communication integrity

Communication between nodes has the potential to introduce errors into the replicated data (whether from a software or firmware bug or from any other error not detected by TCP/IP's checksum). To provide data integrity, DRBD calculates message integrity codes to accompany the data moving between nodes. This allows the receiving node to validate its incoming data and request retransmission when an error is found. DRBD uses the Linux crypto application programming interface and is therefore flexible about the integrity algorithm used.
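This check is likewise configured by naming an algorithm from the kernel's crypto API in the net section, for example (a hedged sketch; any digest the kernel provides should work):

net {
  data-integrity-alg sha1;   # checksum every data packet on the replication link
}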

Automatic recovery

DRBD can recover from a wide variety of errors, but one of the most insidious is the so-called split-brain situation. In this error scenario, the communication link between the nodes fails, and both nodes believe that they are the primary node. While primary, each node permits write operations without those operations being propagated to the peer node. This leads to inconsistent storage on the two nodes.

In most cases, split-brain recovery is performed manually, but DRBD provides several automatic methods for recovering from this situation. The recovery algorithm used depends on how the storage is actually used.

The simplest approach to synchronizing storage after split brain applies when one node saw no changes while the link was down. In this case, the node that had changes simply synchronizes with the unchanged peer. Another simple approach is to discard the changes from the node that had the smaller number of changes. This permits the node with the larger change set to continue but means that the changes made on the other host will be lost.

The other two approaches discard changes based on the temporal states of the nodes. In one approach, changes are discarded from the node that switched to primary last; in the other, changes are discarded from the oldest primary (the node that switched to primary first). You can select each of these policies in the DRBD configuration file, but their use ultimately depends on the application using the storage and on whether data can be discarded or manual recovery is necessary.
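These policies map onto the after-sb-* options in the resource's net section. The fragment below is a hedged sketch of one common combination for DRBD 8.x; the policy names shown (such as discard-zero-changes and discard-younger-primary) are standard values, but the right choice depends entirely on whether your application can tolerate discarded writes.

net {
  after-sb-0pri discard-zero-changes;   # neither node is primary: keep the side that actually changed
  after-sb-1pri discard-secondary;      # one node is primary: drop the secondary's changes
  after-sb-2pri disconnect;             # both are primary: give up and require manual recovery
  # other 0-primary policies include discard-least-changes,
  # discard-younger-primary, and discard-older-primary
}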

Optimizing synchronization

A key aspect of a replicated storage device is an efficient method for synchronizing data between nodes. Two of the schemes that DRBD uses are the activity log and the quick-sync bitmap. The activity log stores blocks that were recently written to and defines which blocks need to be synchronized after a failure is resolved. The quick-sync bitmap defines the blocks that are in sync (or out of sync) during a period of disconnection. When the nodes are reconnected, synchronization can use this bitmap to quickly bring the nodes back to being exact replicas of one another. Keeping this time short is important, because it represents the window during which the secondary disk is inconsistent.
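Both mechanisms are tunable. As a rough, version-dependent sketch (DRBD 8.3 places these in a syncer section; later releases move them into other sections), the background resynchronization rate and the size of the activity log can be set like this:

syncer {
  rate 30M;         # cap the bandwidth used for background resynchronization
  al-extents 257;   # number of activity-log extents: larger means fewer metadata updates
                    # during normal writes, but more data to resynchronize after a primary crash
}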

Conclusion

DRBD is a great asset if you're looking to increase the availability of your data, even on commodity hardware. It can be easily installed as a kernel module and configured using the available administration tools and wrappers. Even better, DRBD is open source, allowing you to tailor it to your needs (but check the DRBD road map first to see whether your need is already in the works). DRBD supports a large number of useful options, so you can optimize it to fit your application.

Resources

  • The DRBD website provides the latest information on DRBD, its current feature list, a road map, and a description of the technology. You can also find a list of DRBD papers and presentations. Although DRBD is part of the mainline kernel (since 2.6.33), you can grab the latest source tarball at LINBIT.
  • High availability is a system property that ensures a degree of operational continuity. This property typically involves redundancy as a way to avoid a single point of failure. Fault-tolerant system design is another important approach for increasing availability.
  • The concept of RAID was born at the University of California, Berkeley in 1987. RAID is defined by levels, which specify the storage architecture and the characteristics of the protection. You can learn more about the original RAID concept in the seminal paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)."
  • Ubuntu provides a useful page for installing and using DRBD. This page demonstrates configuration of DRBD on primary and secondary hosts as well as testing DRBD in a number of failure scenarios.
  • DRBD is most useful in conjunction with clustering applications. Luckily, you can learn more about these applications and others (such as Pacemaker, Heartbeat, the Logical Volume Manager, GFS, and OCFS2) and how they integrate with DRBD in the DRBD-enabled applications section of the DRBD manual.
  • This article referenced two shared-disk file systems, namely GFS and OCFS2. Both are cluster file systems that provide high performance and HA.