The emergence of cloud storage and cloud computing was driven by the demand for mass information storage and processing, so whether a system is a real cloud depends first on how it solves storage and computation.
One: Cloud storage
In the practical move toward commercial applications, the currently popular systems adopt the key/value model and the schema-free table model as their abstract data models.
A PC can run Linux, Windows, or macOS, but the latter two are ill-suited to serving as low-cost cluster nodes and fit better as user terminals;
On Linux, if you want to build cloud storage, you can indeed find plenty of building blocks: distributed file systems, distributed databases, distributed search engines, distributed web servers, distributed caches.
However, whether these many distributed components can be combined effectively to support cloud storage is something only concrete practice can demonstrate.
Google's design was disruptive: instead of starting from these off-the-shelf distributed components, it built a storage engine designed from scratch around its own business requirements:
- GFS, the distributed file system
- BigTable, which provides structured and semi-structured views of data
- Megastore, built on BigTable, which takes a more database-like approach with transaction support and provides ACID guarantees for data within an entity group
- Chubby, the coordinator for GFS and BigTable: primary server election, metadata storage, and a coarse-grained lock service
- SSTable, the file format in which BigTable data is stored
Each of the core modules above involves many technical details; to keep things easy to follow, the analysis here stays as simple as possible.
One: GFS, the distributed file system. GFS was not the first of its kind; many similar distributed file systems exist.
Issues a distributed file system must address:
A: Abstracting an application-facing layer on top of the operating system's file system
B: A reasonable plan for mass storage and replication (load balancing across multiple replicas, asynchronous replication)
C: How files are stored, and how they are quickly retrieved and accessed
D: Diversity of storage structures: key/value format, directory format, multidimensional format (how the storage nodes are designed)
E: How the client interacts with multiple storage nodes (how the tracker is designed)
F: How to balance load between multiple clients and multiple storage nodes (load balancing, task scheduling)
G: Providing a transparent, consistent access interface for clients
H: How to manage failover across nodes so the system stays available
Summary: a distributed file system needs well-designed storage nodes and a control node (the "tracker"), along with the best possible management tools and access interfaces;
With these questions in mind, let's see how Google's GFS solves them.
GFS: a storage platform for massive amounts of unstructured information, providing redundant data backup, automatic load balancing across tens of thousands of servers, and detection of failed servers.
GFS Design principles:
- Ordinary PCs serve as storage nodes and control nodes; data is redundantly backed up, machine health is detected automatically so service continues, and failed machines recover automatically.
- Stored files are large, between 100 MB and 10 GB, so reads and writes of large files must be optimized.
- The workload contains a large volume of append writes; written content rarely changes and random writes are rare, so efficient appends must be supported.
- Most reads are sequential, with a small number of random reads.
These design principles remind me of some principles from an enterprise-class document engine I designed earlier:
- Provide efficient storage of unstructured data, with support for flexible hierarchies and directories
- Upload, backup, and download of large (GB-scale) data files
- Fast retrieval and positioning of content within large data files
- Operations on large files are time-consuming and must not affect the normal running of other business functions
With these design principles in mind, let's look at GFS's overall architecture.
It consists of three parts:
- A single master server, responsible for management
- Chunk servers, responsible for data storage and for responding to read and write requests from GFS clients
- The GFS client, responsible for interacting with the data storage nodes and masking the details of access, so that specific application operations look the same as local operations
GFS file system:
A file system similar to Linux/Windows, with directories and a tree of files stored under them; the whole tree can be understood as a namespace.
/data/bizdomain/module/timestamp/datafile
GFS provides the usual operational interfaces (APIs) for file creation, deletion, reading, and writing.
Chunk: the physical unit in which data is stored. A large file is ultimately split into fixed-size blocks ("chunks") for storage; each chunk is 64 MB, and a file usually consists of multiple chunks.
Different files have different chunks; a file's chunks can be stored on different chunk servers, and each chunk server can hold chunks from different files.
Inside a chunk server, each chunk is further divided into blocks, and a block is the basic unit of storage operations.
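Since chunks have a fixed size, a client can convert a file offset into chunk coordinates with simple arithmetic. Here is a minimal sketch of that conversion; `locate` is a hypothetical helper, not a GFS interface, and only the 64 MB chunk size comes from the text:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the chunk size described above

def locate(offset, length):
    """Return (chunk_index, offset_in_chunk, bytes_in_chunk) triples
    covering the byte range [offset, offset + length) of a file."""
    spans = []
    remaining = length
    while remaining > 0:
        index = offset // CHUNK_SIZE          # which chunk this byte falls in
        inner = offset % CHUNK_SIZE           # position inside that chunk
        take = min(remaining, CHUNK_SIZE - inner)  # bytes readable before the chunk ends
        spans.append((index, inner, take))
        offset += take
        remaining -= take
    return spans

# A 100 MB read from the start of a file touches chunk 0 (64 MB) and chunk 1 (36 MB):
print(locate(0, 100 * 1024 * 1024))
```

A read that straddles a chunk boundary naturally splits into two spans, which is why the client may contact more than one chunk server for a single request.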
The overall architecture of GFS is described below:
- Master server: maintains the GFS namespace and the chunk namespace.
To distinguish chunks, each chunk has a unique identifier; together these identifiers make up the chunk namespace. The GFS master records which chunk server stores each chunk;
- The mapping between files and their chunks must also be maintained inside the GFS system
- Chunk servers: hold the actual chunk storage and respond to GFS clients' read and write requests for the chunks they are responsible for
- GFS client: responsible for reading and updating data
Example: a client requests to read a file, starting at position p, for length L.
The application sends (filename, start_position, read_length) to the GFS client. The client internally converts the position into a chunk index and sends the request to the GFS master server.
The master looks up its metadata to find the chunk server holding that chunk and returns the location to the GFS client.
The GFS client then connects to that chunk server and sends the chunk identifier and the byte range to read; the chunk server returns the corresponding data after receiving the request.
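The three-hop read path above can be simulated in a few lines. All class, field, and file names here are illustrative, not actual GFS interfaces:

```python
CHUNK_SIZE = 64 * 1024 * 1024

class Master:
    """Toy master: knows file-to-chunk mapping and chunk locations."""
    def __init__(self):
        self.file_chunks = {}      # filename -> ordered list of chunk handles
        self.chunk_location = {}   # chunk handle -> chunk server id

    def lookup(self, filename, chunk_index):
        handle = self.file_chunks[filename][chunk_index]
        return handle, self.chunk_location[handle]

class ChunkServer:
    """Toy chunk server: serves byte ranges out of the chunks it holds."""
    def __init__(self):
        self.chunks = {}           # chunk handle -> bytes

    def read(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]

def gfs_read(master, servers, filename, position, length):
    chunk_index = position // CHUNK_SIZE                      # client-side conversion
    handle, server_id = master.lookup(filename, chunk_index)  # hop 1: ask the master
    offset_in_chunk = position % CHUNK_SIZE
    return servers[server_id].read(handle, offset_in_chunk, length)  # hop 2: read data

# Wire up one master and one chunk server holding chunk 0 of a file.
master = Master()
cs = ChunkServer()
master.file_chunks["/data/app/log"] = ["chunk-0001"]
master.chunk_location["chunk-0001"] = "cs-a"
cs.chunks["chunk-0001"] = b"hello, gfs!"
print(gfs_read(master, {"cs-a": cs}, "/data/app/log", 0, 5))  # b'hello'
```

Note that file bytes never pass through the master; it only hands out locations, which keeps it off the data path.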
The implementation above resembles the document engine's implementation:
- Clients upload and download files (the client only needs the file, or the file number) without caring where the file is stored
- On receiving a file number, the master first looks up the corresponding storage node location and the path on that node, and returns this information to the client
- The client then connects to the storage node and downloads the specified file
GFS Master server
It uses a master-slave architecture: a single master server and multiple storage servers.
The GFS master manages the following metadata:
- The GFS namespace and the chunk namespace
- The mapping between each file and the chunks that belong to it
- Which chunk server stores each chunk (to guard against failures, each chunk is replicated multiple times)
GFS records the namespaces and the file-to-chunk mapping in the system's operation log, and the log is stored on multiple machines.
For the third kind of metadata, the master asks every chunk server at startup, and keeps polling periodically afterward, to hold the latest information.
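The split between logged metadata and polled metadata can be sketched as follows. The class and method names are hypothetical, but the behavior mirrors the description above:

```python
class MasterMetadata:
    """Toy master metadata: the first two kinds are written through an
    operation log; chunk locations are rebuilt by polling chunk servers."""
    def __init__(self):
        self.namespace = set()        # logged: file/chunk namespaces
        self.file_to_chunks = {}      # logged: filename -> chunk handles
        self.chunk_locations = {}     # NOT logged: handle -> server ids

    def create_file(self, filename, handles, oplog):
        # Namespace and mapping changes go through the operation log first;
        # per the text, the log itself is stored on multiple machines.
        oplog.append(("create", filename, list(handles)))
        self.namespace.add(filename)
        self.file_to_chunks[filename] = list(handles)

    def poll_chunk_servers(self, reports):
        # reports: {server_id: [chunk handles it holds]}, gathered at
        # startup and periodically afterward.
        self.chunk_locations = {}
        for server_id, handles in reports.items():
            for h in handles:
                self.chunk_locations.setdefault(h, []).append(server_id)

oplog = []
md = MasterMetadata()
md.create_file("/data/a", ["c1", "c2"], oplog)
md.poll_chunk_servers({"cs-a": ["c1"], "cs-b": ["c1", "c2"]})
print(md.chunk_locations["c1"])  # ['cs-a', 'cs-b']
```

Keeping chunk locations out of the log is a deliberate simplification: chunk servers are the authority on what they hold, so the master can always reconstruct that map by asking them.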
Other functions of the master server:
Creating new chunks and their backups, load balancing between chunk servers, generating new replicas when a chunk becomes unavailable, and garbage collection.
Data backup and migration must consider:
- Keeping chunk data available: when a chunk replica is lost, create a new backup promptly
- Reducing network transfer pressure as much as possible
Write path and replica interaction:
A chunk has multiple replicas, and when an update occurs, the replicas must be kept in sync.
GFS's approach:
Select one primary replica and two secondary replicas. When an update is required, issue the data to all three replicas; once every replica acknowledges receipt, notify the primary to begin writing.
After the primary's write completes and returns to the client, the client triggers the secondary writes; when the secondaries finish, the write returns as complete overall;
if a problem occurs anywhere along the way, the write is undone and an error is returned.
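The push-then-commit flow with undo on failure can be sketched as a toy two-phase write. `Replica`, `push`, and `commit` are illustrative names, not GFS APIs, and the undo logic is deliberately simplistic:

```python
class Replica:
    def __init__(self):
        self.staged = None
        self.data = b""

    def push(self, payload):
        # Step 1: receive and stage the data (the "backup request").
        self.staged = payload
        return True

    def commit(self):
        # Step 2: apply the staged data as an append write.
        if self.staged is None:
            return False
        self.data += self.staged
        self.staged = None
        return True

    def undo(self, payload):
        # Roll back: drop staged data and trim a committed append.
        if payload and self.data.endswith(payload):
            self.data = self.data[:-len(payload)]
        self.staged = None

def replicated_write(primary, secondaries, payload):
    replicas = [primary] + secondaries
    # Push the data to all three replicas; proceed only if all acknowledge.
    if not all(r.push(payload) for r in replicas):
        for r in replicas:
            r.undo(payload)
        return False
    # Primary writes first; then the secondaries write.
    if primary.commit() and all(s.commit() for s in secondaries):
        return True
    # On any failure, undo everywhere so no partial write survives.
    for r in replicas:
        r.undo(payload)
    return False

p, s1, s2 = Replica(), Replica(), Replica()
print(replicated_write(p, [s1, s2], b"rec1"))  # True
print(p.data, s1.data, s2.data)
```

The key property the sketch preserves is that a write only reports success after all three replicas have applied it, and any mid-flight failure leaves no partially applied write behind.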