OpenStack Object Storage (Swift) is one of the sub-projects of the OpenStack open source cloud computing project. Known simply as object storage, it provides strong scalability, redundancy, and durability. This article describes Swift in terms of its architecture, principles, and practice. Swift is not a file system or a real-time data storage system; it is object storage, intended for long-term storage of permanent, static data that can be retrieved, adjusted, and updated when necessary. The data types best suited to this kind of storage include virtual machine images, pictures, mail, and archive backups. Because there is no central unit or master node, Swift provides greater scalability, redundancy, and durability. Swift was formerly Rackspace's Cloud Files project, contributed to OpenStack as part of the open source effort when Rackspace joined the OpenStack community in July 2010. As of the OpenStack Essex release, the current version of Swift is 1.5.1.
The Sina SAE team has nearly a year of research and operational experience with Swift. After an in-depth analysis of Swift's architecture and principles and a full reading of its source code, followed by a period of testing and operation, we decided to launch a Swift-based SAE storage service. Development is now complete, the service went online a month ago, and it has performed very well. We would therefore like to share some of our research and work on Swift.
Swift Features
The OpenStack website lists more than 20 features of Swift; the most interesting are the following.
Extremely high data durability
Data durability is often confused with system availability; durability can be understood as the reliability of data, i.e., the probability that data, once stored in the system, will eventually be lost. For example, Amazon S3's data durability is eleven 9s, meaning that if you store 10,000 (10^4) files in S3, then after 10 million (10^7) years you may have lost one of them. So how many 9s can Swift's SLA provide? Based on Swift's deployment in the Sina test environment, with 3 data replicas, 5 zones, and 5×10 storage nodes, we have theoretically calculated a data durability SLA of ten 9s.
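To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The failure model (independent disk failures, an assumed annualized failure rate, and a re-replication window) is our own simplifying assumption, not the exact model behind the figures above:

```python
# Back-of-the-envelope durability estimate for a 3-replica cluster.
# Assumptions (hypothetical, not from the article): independent disk
# failures, an annualized failure rate AFR per disk, and a repair
# window during which further failures could destroy all copies.

AFR = 0.04          # assumed annualized disk failure rate (4%)
REPAIR_DAYS = 1.0   # assumed time to re-replicate after a disk dies
REPLICAS = 3

# Probability a given disk fails during one repair window.
p_fail_in_window = AFR * REPAIR_DAYS / 365.0

# A replica set is lost only if, after one disk fails, the remaining
# REPLICAS - 1 disks holding the same partitions also fail before
# re-replication completes.
p_lose_set = AFR * p_fail_in_window ** (REPLICAS - 1)

durability = 1.0 - p_lose_set
print(f"annual loss probability ~ {p_lose_set:.2e}")
print(f"durability ~ {durability:.12f}")  # count the leading 9s to read off the SLA
```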
Fully symmetric system architecture
"Symmetry" means that each node in Swift can be fully equivalent and can significantly reduce system maintenance costs.
Unlimited scalability
Scalability has two aspects: storage capacity can be extended without limit, and Swift's performance (QPS, throughput, and so on) can be improved linearly. Because Swift has a fully symmetric architecture, expansion simply means adding new machines; the system automatically completes the data migration so that the storage nodes return to a balanced state.
No single point of failure
In large-scale Internet services, a single point of storage has always been a problem. For example, typical HA setups are master-slave, with, in general, only one master. In other open source storage systems, the storage of metadata has always been a headache: it is usually a single point that easily becomes a bottleneck, and once that point misbehaves it can often affect the whole cluster, HDFS being a typical example. Swift's metadata storage is completely, evenly, and randomly distributed, and, as with object files, metadata is stored in multiple copies. There is no single point anywhere in a Swift cluster, and the architecture and design guarantee that no single point of failure can exist.
Simple and reliable
The architecture is simple and the code is neat and easy to understand; Swift involves no sophisticated distributed storage theory, only very simple principles. Once you have tested and analyzed Swift, you can confidently use it for your most critical storage business without worry, because no matter what goes wrong, it can be resolved quickly by reading logs and source code.
Application Scenarios
Swift offers services similar to Amazon S3 and is therefore suitable for many of the same scenarios. The most typical application is as the storage engine behind a network-facing product, such as Amazon S3 serving as the backend of Dropbox. Within OpenStack, Swift can also be combined with the image service Glance to store its image files. In addition, because of its unlimited scalability, Swift is also well suited for storing log files and serving as a data backup repository.
Swift Architecture Overview
Swift has three main component families: proxy servers, storage servers, and consistency servers. As shown in Figure 1, the storage and consistency services both run on the storage nodes. The auth authentication service has now been stripped out of Swift and replaced by OpenStack's identity service Keystone, with the goal of unifying authentication management across OpenStack projects.
Figure 1 Swift Deployment architecture
Main components
Proxy Server
Proxy servers are the server processes that provide the Swift API and are responsible for communication among the rest of the Swift components. For each client request, the proxy looks up the location of the account, container, or object in the ring and forwards the request accordingly. The proxy exposes a RESTful API conforming to the standard HTTP protocol specification, which allows developers to quickly build custom clients that interact with Swift.
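As a sketch of what that RESTful interaction looks like, the following Python snippet uses Swift's well-known /v1/&lt;account&gt;/&lt;container&gt;/&lt;object&gt; path layout; the endpoint URL and token are placeholders, not values from the article:

```python
# Minimal sketch of talking to Swift's RESTful API with plain HTTP.
# The endpoint and token below are placeholders; in a real deployment
# they come from the auth service (e.g. Keystone).
import requests

STORAGE_URL = "http://swift.example.com:8080/v1/AUTH_demo"  # hypothetical
TOKEN = {"X-Auth-Token": "AUTH_tk_replace_me"}              # hypothetical

# Create a container (PUT /v1/<account>/<container>).
requests.put(f"{STORAGE_URL}/photos", headers=TOKEN)

# Upload an object (PUT /v1/<account>/<container>/<object>).
with open("cat.jpg", "rb") as f:
    requests.put(f"{STORAGE_URL}/photos/cat.jpg", headers=TOKEN, data=f)

# Download it back (GET), and list the container.
obj = requests.get(f"{STORAGE_URL}/photos/cat.jpg", headers=TOKEN)
listing = requests.get(f"{STORAGE_URL}/photos", headers=TOKEN)
print(obj.status_code, len(obj.content), listing.text)
```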
Storage Server
Storage servers provide storage services on the disk devices. Swift has three types of storage server: account, container, and object. The container server is responsible for handling listings of objects; it does not know where an object is stored, only which objects are stored in a given container. This object information is stored in the form of SQLite database files. The container server also keeps some tracking statistics, such as the total number of objects and the container's storage usage.
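To illustrate the idea, here is a deliberately simplified, hypothetical version of such a container database; Swift's real schema has more columns and tables:

```python
# Simplified illustration of a container database: a SQLite file that
# records which objects a container holds, not where their bytes live.
# The schema is a hypothetical reduction of Swift's real one.
import sqlite3

db = sqlite3.connect("container_photos.db")
db.execute("""CREATE TABLE IF NOT EXISTS object (
                  name TEXT PRIMARY KEY,
                  created_at TEXT,
                  size INTEGER,
                  content_type TEXT,
                  etag TEXT)""")
db.execute("INSERT OR REPLACE INTO object VALUES (?, ?, ?, ?, ?)",
           ("cat.jpg", "2012-07-01T12:00:00", 123456,
            "image/jpeg", "d41d8cd98f00b204e9800998ecf8427e"))
db.commit()

# Tracking statistics like those the container server reports.
count, used = db.execute(
    "SELECT COUNT(*), COALESCE(SUM(size), 0) FROM object").fetchone()
print(f"{count} objects, {used} bytes used")
```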
Consistency Servers
Storing data on disk and providing a RESTful API on top of it is not, by itself, a difficult problem; the main difficulty lies in failure handling. The purpose of Swift's consistency servers is to find and resolve errors caused by data corruption and hardware failure. There are three main types: auditors, updaters, and replicators. Auditors run in the background on every Swift server, continuously scanning the disks to verify the integrity of objects, containers, and accounts. When corruption is found, the auditor moves the file to a quarantine area, and the replicator is responsible for replacing it with a good copy; Figure 2 shows the processing flow for a quarantined object. Under high system load or during failures, data in a container or account may not be updated immediately. A failed update is queued on the local file system, and the updaters keep retrying it; the account updater and the container updater are responsible for updating the account and object listings, respectively. The replicator's job is to keep data in its correct location and to maintain a reasonable number of copies, which is designed to keep the system consistent through temporary failures such as network outages or drive failures.
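As an illustration of the auditor's job, here is a minimal Python sketch: it re-hashes object files and quarantines any whose checksum no longer matches the recorded ETag (in Swift an object's ETag is its MD5). The directory layout and the way expected checksums are passed in are simplified assumptions:

```python
# Sketch of what an object auditor does: walk the data disks, re-hash
# each object file, and quarantine anything whose checksum no longer
# matches the recorded ETag. Paths and metadata layout are simplified.
import hashlib
import os
import shutil

def audit(datadir: str, quarantine: str, expected_etags: dict) -> None:
    os.makedirs(quarantine, exist_ok=True)
    for root, _dirs, files in os.walk(datadir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                actual = hashlib.md5(f.read()).hexdigest()
            # Quarantine only if we have a recorded ETag and it differs.
            if expected_etags.get(name) not in (None, actual):
                # Corrupt: move aside; the replicator will later restore
                # a good copy from another node.
                shutil.move(path, os.path.join(quarantine, name))
                print(f"quarantined {path}")
```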
Figure 2 Processing flow graph for a quarantined object
Ring
The ring is the most important component of Swift: it records the mapping between stored entities and their physical locations. Any query about an account, container, or object requires the cluster's ring information. The ring maintains this mapping using zones, devices, partitions, and replicas. Each partition in the ring has 3 replicas in the cluster (by default), and the location of each partition is maintained in a map kept by the ring. Ring files are created at system initialization, and every time storage nodes are added or removed, the partitions in the ring file must be rebalanced, ensuring that the smallest possible number of files is migrated as a result of the change.
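The following Python sketch mirrors the idea behind the ring's lookup: hash the object's path, take the top bits as a partition number, and index a replica-to-partition-to-device table. The partition power and the toy device table are our own illustrative choices, not Swift's actual on-disk format:

```python
# Ring-style lookup sketch: hash the object path, take the top bits
# as a partition number, and look the partition up in a precomputed
# replica -> partition -> device table.
import hashlib
import struct

PART_POWER = 16               # 2**16 partitions (assumed for this sketch)
PART_SHIFT = 32 - PART_POWER

def get_partition(account, container, obj):
    path = f"/{account}/{container}/{obj}".encode()
    top32 = struct.unpack_from(">I", hashlib.md5(path).digest())[0]
    return top32 >> PART_SHIFT

# replica2part2dev[r][p] = device id holding replica r of partition p
# (a toy table for 3 replicas of 2**16 partitions across 6 devices).
DEVICES = 6
replica2part2dev = [
    [(p + r * 2) % DEVICES for p in range(2 ** PART_POWER)]
    for r in range(3)
]

part = get_partition("AUTH_demo", "photos", "cat.jpg")
nodes = [replica2part2dev[r][part] for r in range(3)]
print(f"partition {part} -> devices {nodes}")
```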
Principle
The algorithms and storage theory behind Swift are not complex; they boil down to a few key concepts.
Consistent hashing algorithm
Swift uses a consistent hashing algorithm to build a redundant, scalable distributed object storage cluster. The main goal of consistent hashing in Swift is to change the existing mapping between keys and nodes as little as possible when the number of nodes in the cluster changes. The idea of the algorithm can be divided into three steps. First, the hash value of each node is computed and assigned to a position on a ring covering the interval from 0 to 2^32. Next, the hash value of each stored object is computed by the same method and mapped onto the ring. Then, starting from the position the object maps to and moving clockwise, the object is saved to the first node found; if no node is found before passing 2^32, the search wraps around and the object is saved to the first node on the ring. Suppose there are 4 nodes in this ring hash space and a Node5 is added that maps between Node3 and Node4: then only the objects between Node3 and Node5 (traversing counterclockwise from Node5), i.e., those that now map to Node5, are affected. Their distribution is shown in Figure 3.
Figure 3 Consistent hashing ring structure
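Here is a minimal Python implementation of those three steps; it is a sketch only, since Swift's real ring adds partitions, replicas, zones, and weights on top of this idea:

```python
# A minimal consistent-hash ring following the three steps above:
# place nodes on a 0..2**32 ring by hash, hash each object onto the
# ring, and walk clockwise to the first node (wrapping past 2**32).
import bisect
import hashlib

RING_SIZE = 2 ** 32

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self, nodes):
        self._points = sorted((ring_hash(n), n) for n in nodes)

    def add_node(self, node):
        bisect.insort(self._points, (ring_hash(node), node))

    def get_node(self, key: str) -> str:
        h = ring_hash(key)
        hashes = [p for p, _ in self._points]
        i = bisect.bisect_right(hashes, h) % len(self._points)  # wrap around
        return self._points[i][1]

ring = ConsistentHashRing(["node1", "node2", "node3", "node4"])
before = {k: ring.get_node(k) for k in ("a.jpg", "b.jpg", "c.jpg")}
ring.add_node("node5")   # only keys that fall between node3 and node5 move
after = {k: ring.get_node(k) for k in before}
print(before, after, sep="\n")
```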
Replica
If the cluster keeps only one copy of each piece of data on a single node, data can be permanently lost when that node fails. Redundant replicas are therefore needed to keep data safe. Swift introduces the concept of the replica, with a default count of 3, a choice derived mainly from the NWR strategy (also known as the Quorum protocol). NWR is a strategy for controlling consistency levels in distributed storage systems; Amazon's Dynamo cloud storage system uses NWR to control consistency. Here N is the number of replicas of the same piece of data, W is the number of replicas that must acknowledge a write for an update to succeed, and R is the number of replicas that must be read. The condition W + R > N ensures that a piece of data cannot be read and written by two different transactions at the same time, and W > N/2 ensures that two transactions cannot write the same data concurrently. In a distributed system, a single point of data must not exist: having only 1 healthy replica of some data online is very dangerous, because if that replica also fails, the data is permanently lost. If N is set to 2, then a single storage node failure leaves a single point, so N must be greater than 2. But the higher N is, the higher the system's maintenance and overall cost, so the industry typically sets N to 3. For comparison, a MySQL master-slave setup has N = 2, W = 1, R = 1, which does not satisfy the NWR strategy, whereas Swift's N = 3, W = 2, R = 2 fully complies with it, making the Swift system reliable and free of single points of failure.
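The two NWR conditions translate directly into code; a small sketch:

```python
# The NWR conditions from the paragraph above, as executable checks:
# W + R > N prevents a read from missing the latest write, and
# W > N / 2 prevents two concurrent writes from both succeeding.
def nwr_ok(n: int, w: int, r: int) -> bool:
    return (w + r > n) and (w > n / 2)

# MySQL master-slave as described above: N=2, W=1, R=1 -> fails.
print("mysql :", nwr_ok(2, 1, 1))   # False
# Swift's defaults: N=3, W=2, R=2 -> satisfies the strategy.
print("swift :", nwr_ok(3, 2, 2))   # True
```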
Zone
If all of the nodes are in one rack or one machine room, a power outage or network failure can make the service inaccessible to users. A mechanism is therefore needed to isolate machines by physical location, providing partition tolerance (the P in the CAP theorem). For this purpose the ring introduces the concept of the zone, assigning cluster nodes to zones. Replicas of the same partition cannot be placed on the same node or in the same zone. Note that zone size can be customized to business requirements and hardware conditions: a zone can be a disk, a storage server, a rack, or even an entire IDC.
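A minimal sketch of the zone constraint, with illustrative node and zone names:

```python
# Sketch of zone-aware replica placement: never put two replicas of
# the same partition on the same node or in the same zone.
def place_replicas(nodes_by_zone, replicas=3):
    """nodes_by_zone: {zone: [node, ...]}; returns one node per zone."""
    placement, used_zones = [], set()
    for zone, nodes in sorted(nodes_by_zone.items()):
        if len(placement) == replicas:
            break
        if zone not in used_zones and nodes:
            placement.append((zone, nodes[0]))
            used_zones.add(zone)
    if len(placement) < replicas:
        raise ValueError("not enough zones to isolate every replica")
    return placement

cluster = {"z1": ["s1"], "z2": ["s2"], "z3": ["s3"], "z4": ["s4"], "z5": ["s5"]}
print(place_replicas(cluster))   # three replicas in three distinct zones
```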
Weight
The ring introduces weight so that nodes with larger storage capacity, added in the future, can be allocated more partitions. For example, a 2 TB node should hold twice as many partitions as a 1 TB node, so the 2 TB node's weight can be set to 200 and the 1 TB node's to 100.
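A small sketch of weight-proportional allocation; the partition count and device names are illustrative:

```python
# Weight-proportional partition allocation: each device gets a share
# of the partition count proportional to its weight, so a 2 TB disk
# (weight 200) holds twice as many partitions as a 1 TB one (weight 100).
def desired_partitions(weights, total_partitions, replicas=3):
    total_weight = sum(weights.values())
    return {dev: round(total_partitions * replicas * w / total_weight)
            for dev, w in weights.items()}

weights = {"disk_2tb": 200, "disk_1tb_a": 100, "disk_1tb_b": 100}
print(desired_partitions(weights, total_partitions=2 ** 16))
# -> the 2 TB disk is assigned twice the partitions of each 1 TB disk
```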
Figure 4 A Swift deployment cluster
Example analysis
Figure 4 shows the Swift cluster deployed in Sina SAE's test environment. The cluster is divided into 5 zones, each zone being one storage server, and each server holding 12 2 TB SATA disks. Only the operating-system disk needs RAID; the other disks serve as storage nodes and need no RAID. As mentioned earlier, Swift uses a fully symmetric system architecture, which this deployment case illustrates well: every storage server in Figure 4 plays exactly the same role and has exactly the same system configuration, with all the Swift service packages installed, such as the proxy server, container server, and account server. The load balancer at the top is not a Swift software package; for security and performance, a load-balancing device is typically placed in front of the service. You could remove this layer and let the proxy servers receive user requests directly, but that may not be suitable for a production environment. Figure 4 traces the data flow of an upload (PUT) and a download (GET) request operating on the same object. On upload, the PUT request is routed by the load balancer to a randomly chosen proxy server, which queries its local ring file, selects 3 backend storage nodes in different zones, and sends the file to all three simultaneously. This process must satisfy the NWR strategy (Quorum protocol): with 3 replicas to store, the number of successful writes must be greater than 3/2, i.e., at least 2 replicas must be written successfully before a success response is returned to the client. On download, the GET request is likewise routed by the load balancer to a random proxy server, which looks up the three nodes storing the file, queries the backends to confirm that at least 2 of the storage nodes can provide the file, and then selects one node to serve the download.
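A sketch of the proxy-side PUT quorum described above; store_on_node is a hypothetical stand-in for the real HTTP call to an object server:

```python
# Sketch of the proxy's PUT flow: stream the object to 3 storage nodes
# in parallel and report success once a write quorum (more than 3/2,
# i.e. at least 2) of them acknowledge.
from concurrent.futures import ThreadPoolExecutor

REPLICAS = 3
QUORUM = REPLICAS // 2 + 1   # 2 of 3

def store_on_node(node, name, data):
    # Placeholder for an HTTP PUT to the object server on `node`.
    print(f"PUT {name} -> {node}")
    return True

def proxy_put(nodes, name, data):
    with ThreadPoolExecutor(max_workers=REPLICAS) as pool:
        results = list(pool.map(lambda n: store_on_node(n, name, data), nodes))
    if sum(results) >= QUORUM:
        return "201 Created"          # enough replicas written
    return "503 Service Unavailable"  # quorum not reached

print(proxy_put(["z1/s1", "z2/s2", "z3/s3"], "cat.jpg", b"..."))
```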
Summary
Swift's simple, redundant, and scalable architecture ensures that it can underpin basic IaaS services. Two years of running the Rackspace Cloud Files service have matured Swift's code, and it is now deployed in public and private cloud services around the world. As OpenStack continues to improve and develop, Swift will be used ever more widely.