Source (English): http://www.aosabook.org/en/index.html (Chapter 2)
Chinese translation: http://www.oschina.net/translate/scalable-web-architecture-and-distributed-systems
Open-source software has become a basic building block of some of the largest websites. As those websites have grown, best practices and guiding principles have emerged around their architectures. This chapter seeks to cover some of the key issues to consider when designing large websites, as well as some of the building blocks used to achieve these goals.
This chapter focuses on web systems, although some of the material also applies to other distributed systems.
1.1 Web Distributed System Design Principles
What exactly does it mean to build and operate a scalable web site or application? At its core, such a system just connects users with remote resources via the Internet; what makes it scalable is that the resources, or access to those resources, are distributed across multiple servers.
Like most things in life, taking the time to plan ahead when building a web service can help in the long run. Understanding the considerations and trade-offs behind big websites can result in smarter decisions when creating smaller ones. Below are some of the key principles that influence the design of large-scale web systems:
* Availability: The uptime of a website is absolutely critical to the reputation and functionality of many companies. For some of the larger online retail sites, being unavailable for even a few minutes can result in thousands or millions of dollars in lost revenue. Designing systems to be continuously available and to recover quickly from failure is therefore both a fundamental business and technology requirement. High availability in distributed systems requires careful consideration of redundancy for key components, rapid recovery from partial system failures, and graceful degradation when problems occur.
* Performance: Website performance has become an important consideration for most sites. The speed of a website affects usage, user satisfaction, and search engine rankings, all of which correlate directly with revenue and retention. As a result, creating a system that is optimized for fast responses and low latency is key.
* Reliability: A system needs to be reliable, such that a request for data will consistently return the same data. If the data changes or is updated, then that same request should return the new data. Users need to know that if something is written to the system, or stored, it will persist and can be relied on to be in place for future retrieval.
* Scalability: For any large distributed system, size is just one aspect of scale that needs to be considered. Just as important is the effort required to increase capacity to handle greater amounts of load, commonly referred to as the scalability of the system. Scalability can refer to many different parameters of the system: how much additional traffic can it handle, how easy is it to add more storage capacity, and how many more transactions can be processed?
* Manageability: Designing a system that is easy to operate is another important consideration. The manageability of the system equates to the scalability of operations: maintenance and updates. Things to consider for manageability are the ease of diagnosing and understanding problems when they occur, the ease of making updates or modifications, and how simple the system is to operate (e.g., does routine operation proceed without failures or exceptions?).
* Cost: Cost is an important factor. This obviously includes hardware and software costs, but it is also important to consider other facets needed to deploy and maintain the system: the amount of developer time required to build it, the amount of operational effort required to run it, and even the training required. Cost is the total cost of ownership.
Each of these principles provides a basis for decisions when designing a distributed web architecture. However, they can also be at odds with one another, such that achieving one objective comes at the cost of another. A basic example: addressing capacity by simply adding more servers (scalability) comes at the price of manageability (you now have to operate an additional server) and cost (the price of the server).
When designing any sort of web application, it is important to consider these key principles, even if only to acknowledge that a design may sacrifice one or more of them.
1.2 Basic Concepts
When it comes to system architecture, there are a few things to consider: what are the right pieces, how do these pieces fit together, and what are the right trade-offs. Investing in scaling before it is needed is generally not a smart business proposition; however, some forethought in the design can save substantial time and resources in the future.
This section focuses on some of the core factors that are central to almost all large web applications: services, redundancy, partitions, and handling failure. Each of these factors involves choices and compromises, particularly in the context of the principles described in the previous section. To explain these in detail, it is best to start with an example.
Example: image hosting application
At some point you have probably posted an image online. For big sites that host and deliver massive numbers of images, there are many challenges in building an architecture that is cost-effective, highly available, and has low latency (fast retrieval).
Imagine a system where users can upload their images to a central server, and the images can be requested via a web link or API, just like Flickr or Picasa. For the sake of simplicity, let's assume that this application has two key parts: the ability to upload (write) an image to the server, and the ability to query for an image. We certainly want uploads to be efficient, but we care most that when someone requests an image (for example, for a web page or another application), the system can deliver it quickly. This is very similar to the functionality a web server or Content Delivery Network (CDN) edge server might provide (CDNs store content in many locations so that it is geographically/physically closer to users, making delivery faster).
Other important aspects of the system include:
* There is no limit on the number of images to be stored. Therefore, storage scalability needs to be considered.
* The image download/request latency is low.
* If a user uploads an image, the image should always be there (reliability of the image data).
* The system should be easy to maintain (manageability).
* Since image hosting has thin profit margins, the system needs to be cost-effective.
Figure 1.1 is a simplified functional diagram of the system.
Figure 1.1: Simplified architecture of an image hosting application
In this image hosting example, the system must be perceptibly fast, its data stored reliably, and all of these attributes highly scalable. Building a small version of this application would be trivial and could easily be hosted on a single server; however, that would not be interesting for this chapter. Let's assume that we want to build something that could grow as big as Flickr.
When considering scalable system design, it helps to decouple functionality and think about each part of the system as its own service with a clearly defined interface. In practice, systems designed in this way are said to have a Service-Oriented Architecture (SOA). In these systems, each service has its own distinct functional context, and interaction with anything outside of that context takes place through an abstract interface, typically the public-facing API of another service.
Deconstructing a system into a set of complementary services decouples the operation of those pieces from one another. This abstraction helps establish clear relationships between the service, its underlying environment, and the consumers of that service. Such clear delineations help isolate problems and allow each piece to scale independently of the others. This sort of service-oriented design for systems is very similar to object-oriented design for programming.
In our example, all requests to upload and retrieve images are processed by the same server; however, as the system needs to scale, it makes sense to break these two functions out into their own services.
Now let's assume the service is heavily used. In such a case, it is easy to see how write operations will impact the time it takes to read images, since those two functions will compete for shared resources. Depending on the architecture, this effect can be substantial. Even if the upload and download speeds were identical (which is not true of most IP networks, since most are designed for at least a 3:1 download:upload ratio), read files will typically be served from cache, while writes will ultimately have to go to disk (and may be written several times in eventually consistent situations). Even when everything is in memory or read from disks (such as SSDs), database writes are almost always slower than reads (see Pole Position, an open-source tool for benchmarking databases).
Another potential problem with this design is that web servers like Apache or lighttpd typically have an upper limit on the number of simultaneous connections they can maintain (defaults are around 500, but it can go much higher), and under high traffic, writes can quickly consume all of them. Since reads can be asynchronous, or take advantage of other performance optimizations like gzip compression or chunked transfer encoding, the web server can switch between serving reads faster and swap between clients quickly, serving many more requests per second than the maximum number of connections (with Apache and a maximum of 500 connections, it is not uncommon to serve several thousand read requests per second). Writes, on the other hand, tend to hold the connection open for the duration of the upload: uploading a 1 MB file could take more than a second on most home networks, so such a web server could only handle 500 such simultaneous writes.
Figure 1.2: Split read/write operations
Planning for this sort of bottleneck makes a good case for splitting out image reads and writes into independent services, as shown in Figure 1.2. This lets us scale each of them independently (since it is likely we will always do more reading than writing), and also helps clarify what is happening at each point. Finally, it separates future concerns, making it easier to troubleshoot and scale problems like slow reads.
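A minimal sketch of this split is simply routing uploads and reads to separate server pools. The pool names and sizes here are made up for illustration; a real deployment would use service discovery and a load balancer rather than a hard-coded list.

```python
# Sketch: route image requests to separate read and write pools so the
# two workloads can scale independently. Pool contents are illustrative.
READ_POOL = ["read-1", "read-2", "read-3"]   # many read replicas
WRITE_POOL = ["write-1"]                      # fewer upload servers

def route(method: str, image_id: str) -> str:
    """Pick a backend server for a request."""
    pool = WRITE_POOL if method in ("PUT", "POST") else READ_POOL
    # Hash the image id so repeated reads of the same image land on the
    # same server (which improves that server's local cache hit rate).
    return pool[hash(image_id) % len(pool)]
```

Because read capacity is just `len(READ_POOL)`, handling more read traffic is a matter of appending servers to that list, without touching the write path.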
The advantage of this approach is that we are able to solve problems independently of one another—we don't have to worry about writing and retrieving new images in the same context. Both of these services still leverage the global corpus of images, but they are free to optimize their own performance with service-appropriate methods (for example, queuing up requests, or caching popular images—more on this below). From a maintenance and cost perspective, each service can scale independently as needed, which is great, because if they were combined and intermingled, one could inadvertently impact the performance of the other, as in the scenario discussed above.
Of course, the above example can work well when you have two different endpoints (in fact, this is very similar to several cloud storage providers' implementations and to Content Delivery Networks). There are lots of ways to address these types of bottlenecks, though, and each has different trade-offs.
For example, Flickr solves this read/write problem by distributing users across different shards, such that each shard only handles a set number of users, and as users increase, more shards are added to the cluster (see the presentation on Flickr's scaling work: http://mysqldba.blogspot.com/2008/04/mysql-uc-2007-presentation-file.html). In the first example (separate read and write services), it is easier to scale hardware based on actual usage (the number of reads and writes across the whole system), whereas Flickr scales with its user base (though this forces the assumption of equal usage across users, so there can be extra capacity). In the former, an outage or problem with one of the services degrades functionality across the whole system (no one can write files, for example), whereas an outage with one of Flickr's shards only affects the users on that shard. In the first example, it is easier to perform operations across the whole dataset—for example, updating the write service to include new metadata, or searching across all image metadata—whereas with the Flickr architecture, each shard would need to be updated or searched (or a search service would need to be created to collate the metadata—which is in fact what they do).
When it comes to these systems there is no right answer, but it helps to go back to the principles stated at the beginning of this chapter: determine the system's needs (heavy reads or writes or both, level of concurrency, queries across the dataset, ranges, sorts, and so on), benchmark different alternatives, understand how the system will fail, and have a solid plan in place for when failure happens.
In order to handle failure gracefully, a web architecture must have redundancy of its services and data. For example, if there is only one copy of a file stored on a single server, losing that server means losing that file. Losing data is seldom a good thing, and a common way of handling it is to create multiple, or redundant, copies of the data.
The same principle applies to services. If there is a core piece of functionality for an application, ensuring that multiple copies or versions run simultaneously can protect against the failure of a single node.
Creating redundancy in a system can remove single points of failure and provide a backup or spare to fall back on in a crisis. For example, if there are two instances of the same service running in production and one fails or degrades, the system can fail over to the healthy copy. Failover can happen automatically or require manual intervention.
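The failover logic can be sketched in a few lines. This is a toy model, not a real health-checking system: the `healthy` dict stands in for whatever liveness signal (heartbeats, probes) a production system would use, and the replica names are invented.

```python
# Sketch: fail over to a healthy replica when the preferred instance is
# down. Health status is simulated with a dict; real systems would use
# heartbeats or health-check probes instead.
healthy = {"service-a": False, "service-b": True}  # service-a has failed

def call_with_failover(replicas, request):
    """Try replicas in order of preference; use the first healthy one."""
    for replica in replicas:
        if healthy[replica]:
            return f"{replica} handled {request}"
    raise RuntimeError("all replicas down")  # total outage: surface it
```

The key property is that the caller never needs to know which instance failed; it only sees a successful response, or an error once every copy is gone.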
Another key part of service redundancy is creating a shared-nothing architecture. With this architecture, each node can operate independently of the others, and there is no central "brain" managing state or coordinating activity for the other nodes. This helps a lot with scalability, since new nodes can be added without special conditions or knowledge. Most importantly, however, such a system has no single point of failure, making it much more resilient to failure.
For example, in our image server application, all images would have redundant copies stored on separate hardware somewhere else (ideally in a different geographic location, in case of a disaster such as an earthquake or a data center fire), and the services providing access to the images would also be redundant, each potentially able to serve requests (see Figure 1.3). (Load balancers are a great way to make this possible; they are covered in more detail below.)
Figure 1.3: redundant image hosting applications
There may be very large datasets that a single server simply cannot hold. It may also be the case that an operation requires so many computing resources that performance degrades and additional compute capacity is needed. In either case, you have two choices: scale vertically or scale horizontally.
Scaling vertically means adding more resources to an individual server. So for a very large dataset, this might mean adding more (or bigger) hard disks so that a single server can contain the entire dataset. For a compute operation, it could mean moving the computation to a server with a faster CPU or more memory. In each case, vertical scaling is accomplished by making a single server capable of handling more on its own.
Scaling horizontally, on the other hand, means adding more nodes. For the large dataset, this might mean storing parts of the dataset on a second server; for the computing resource, it would mean splitting the operation or load across additional nodes. To take full advantage of horizontal scaling, it should be included as an intrinsic design principle of the system architecture; otherwise, it can be quite cumbersome to later modify and separate out the contexts needed for horizontal scaling.
When it comes to horizontal scaling, one of the more common techniques is to break services up into partitions, or shards. Partitions can be distributed such that each logical set of functionality is separate; this could be done by geographic boundaries, or by another criterion such as non-paying versus paying users. These schemes provide increased capacity for the service or the data store.
In our image server example, it is possible to move the images stored on a single file server onto multiple file servers, each with its own unique set of images (see Figure 1.4). Such an architecture allows the system to fill each file server with images, adding additional servers as the disks become full, much as one would add hard drives. The design does require a naming scheme that ties an image's filename to the server containing it. An image's name could be formed from a consistent hashing scheme mapped across the servers; or, alternatively, each image could be assigned an incremental ID, and when a client requests an image, the image retrieval service only needs to maintain the range of IDs mapped to each server (like an index).
Figure 1.4: Image Storage Service using redundancy and partitioning
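Both naming schemes just described can be sketched briefly. The server names, the modulo-based hash placement (a simplification of true consistent hashing), and the ID ranges below are all invented for illustration.

```python
import bisect

# Sketch of the two naming schemes from the text. Server names and
# ID ranges are made up for illustration.
SERVERS = ["fs-1", "fs-2", "fs-3", "fs-4"]

def server_by_hash(filename: str) -> str:
    """Scheme 1: hash the file name across all storage servers."""
    return SERVERS[hash(filename) % len(SERVERS)]

# Scheme 2: incremental image IDs, with a small sorted range index
# recording the first ID stored on each server.
RANGE_STARTS = [0, 1_000_000, 2_000_000, 3_000_000]

def server_by_id(image_id: int) -> str:
    """Look up which server owns the range containing this ID."""
    idx = bisect.bisect_right(RANGE_STARTS, image_id) - 1
    return SERVERS[idx]
```

The trade-off mirrors the text: hashing spreads load evenly but makes range operations awkward, while the ID-range index keeps a tiny lookup table and grows naturally as new servers (and ranges) are appended.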
Of course, distributing data or functionality across multiple servers is challenging. One of the key issues is data locality; in distributed systems, the closer the data is to the operation or point of computation, the better the performance of the system. Therefore, it is potentially problematic to have data stored across multiple servers, since any time it is needed it may not be local, forcing the servers to perform a costly fetch of the required information across the network.
Another potential issue comes in the form of inconsistency. When different services are reading and writing from a shared resource, there is the chance for race conditions—where some data is supposed to be updated, but the read happens prior to the update—and in those cases the data is inconsistent. For example, in the image hosting scenario, a race condition could occur if one client sent a request to rename an image titled "dog" to "little guy" while another client was simultaneously reading the image. In that circumstance, it is unclear which title, "dog" or "little guy", the second client would receive.
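The rename race can be made concrete by replaying the two possible orderings explicitly. This sketch uses a scripted interleaving rather than real threads (whose outcome would be nondeterministic); the image ID and titles follow the example in the text.

```python
# Sketch: the "dog" / "little guy" race, simulated by replaying each
# possible interleaving of the read and the rename.
def run(interleaving):
    """Replay a sequence of operations against a fresh store and return
    the title the reading client observed."""
    titles = {"img42": "dog"}
    observed = None
    for op in interleaving:
        if op == "rename":
            titles["img42"] = "little guy"
        elif op == "read":
            observed = titles["img42"]
    return observed

# The same pair of requests yields different results depending on order:
read_first = run(["read", "rename"])    # reader wins the race
rename_first = run(["rename", "read"])  # rename commits first
```

Since the client cannot control which interleaving occurs, either answer is possible; that is exactly the inconsistency the text describes.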
Of course, there are obstacles to partitioning, but partitioning allows each problem—data, load, usage patterns, and so on—to be cut into manageable chunks. This can greatly improve scalability and manageability, though not without risk. There are many ways to mitigate risk and handle failures; however, in the interest of brevity they are not covered in detail here. If you are interested, see this article for more on fault tolerance and detection.
1.3 Building an Efficient and Scalable Data Access Layer
Having covered some of the core considerations in designing distributed systems, let's now talk about the hard part: scaling access to the data.
Most simple web applications—LAMP-stack applications, for example—look something like Figure 1.5.
Figure 1.5: Simple Web Application
As they grow, there are two main changes: scaling the application server and scaling the database. In a highly scalable application, the application server is typically minimized and often embodies a shared-nothing architecture (a distributed computing architecture in which there is no centralized state and no resource contention across the system; it is highly scalable and widely used in web applications). This makes the application-server layer of the system horizontally scalable. As a result of this design, the heavy lifting is pushed down the stack to the database server and supporting services; it is at this layer that the real scaling and performance challenges come into play.
The remaining sections focus on some of the more common strategies and methods for making these types of services fast by providing quick access to data.
Figure 1.6: Simplest Web Application
Most systems can be simplified to Figure 1.6, and it is a great place to start. If you have a lot of data, you want fast and easy access to it, like a stash of candies in the top drawer of your desk. Though greatly simplified, that picture hints at two hard problems: scalability of storage and fast access to data.
For this section, let's assume you have many terabytes (TB) of data and you want to let users randomly access small portions of it (see Figure 1.7). This is similar to locating an image file somewhere on the file server in the image application example.
Figure 1.7: access specific data
This is particularly challenging because loading terabytes of data into memory is very costly and directly translates into disk I/O. Reading from disk is many times slower than reading from memory: memory access is as fast as Chuck Norris, whereas disk access is as slow as a heavy truck. This speed difference really adds up for large datasets; in real numbers, memory access can be as little as 6 times faster than disk for sequential reads, and 100,000 times faster for random reads (see "The Pathologies of Big Data", http://queue.acm.org/detail.cfm?id=1563874). Moreover, even with unique IDs, working out the storage location of a small piece of data can be an arduous task. It is like trying to pull the last Jolly Rancher from your candy stash without looking.
Thankfully, there are many options to make such operations easier; four of the more important ones are caches, proxies, indexes, and load balancers. The rest of this chapter discusses how each of these concepts can be used to make data access faster.
Caches take advantage of the locality-of-reference principle: recently requested data is likely to be requested again. They are used at almost every layer of computing: hardware, operating systems, web browsers, web applications, and more. A cache is like short-term memory: it has limited space, but is typically faster than the original data source and contains the most recently accessed items. Caches can exist at all levels in an architecture, but are often found at the level nearest the front end, where they can return data quickly without taxing downstream levels.
In our API example, how would we use a cache to make data access faster? In this case, there are a couple of places you can insert a cache. One option is to insert a cache on your request-layer node, as in Figure 1.8.
Figure 1.8: Insert a cache at the request layer node
It is possible to configure a cache directly on a request-layer node, enabling local storage of response data. Each time a request is made to the service, the node will quickly return locally cached data if it exists. If it is not in the cache, the requesting node will fetch the data from disk. The cache on a request-layer node could be located both in memory and on the node's local disk (faster than going to network storage).
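A request-node cache can be sketched as a small LRU (least-recently-used) store in front of the slow path. The capacity and the `fetch_from_disk` callback here are placeholders; real deployments size the cache to available memory.

```python
from collections import OrderedDict

# Sketch: a tiny LRU cache on a request-layer node. fetch_from_disk is
# a stand-in for the slow path (local disk or network storage).
class NodeCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order tracks recency

    def get(self, key, fetch_from_disk):
        if key in self.data:
            self.data.move_to_end(key)        # hit: mark recently used
            return self.data[key]
        value = fetch_from_disk(key)          # miss: take the slow path
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict least recently used
        return value
```

Repeated requests for the same popular image hit the fast path; only the first request (or a request after eviction) pays the cost of a disk read.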
Figure 1.9: Multiple caches
What happens when you expand this to many nodes? As Figure 1.9 shows, if the request layer is expanded to multiple nodes, each host can still have its own cache. However, if your load balancer randomly distributes requests across the nodes, the same request will go to different nodes, increasing cache misses. Two choices for overcoming this hurdle are global caches and distributed caches.
Global cache: all nodes use the same single cache space. This involves adding a server, or a file store of some sort, that is faster than the original store and accessible by all the request-layer nodes. Each request node queries the cache in the same way it would a local one. This kind of caching scheme can get a bit complicated, because it is easy to overwhelm a single cache as the number of clients and requests increases; but it is very effective in some architectures (particularly ones with specialized hardware that makes the global cache very fast, or with a fixed dataset that needs to be cached).
There are two common forms of global caches, shown in the figures. In Figure 1.10, when a cached response is not found, the cache itself retrieves the missing data from the underlying store. In Figure 1.11, it is the responsibility of the request node to retrieve any data that is not found in the cache.
Figure 1.10: Global cache (cache is responsible for searching data)
Figure 1.11: Global cache (the request node is responsible for searching data)
The majority of applications leveraging global caches tend to use the first type, where the cache itself manages reading data, protecting the underlying store from a flood of client requests for the same data. However, there are some cases where the second implementation makes more sense. For example, if the cache is being used for very large files, a low cache-hit rate would cause the cache buffer to be overwhelmed by misses; in this situation, it helps to have a large percentage of the total (hot) dataset in the cache. Another example is an architecture where the files stored in the cache are static and shouldn't be evicted. (This could be because of application requirements around data latency—certain pieces of data might need to be very fast for large datasets—where the application logic understands the eviction strategy or hot spots better than the cache would.)
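The two styles can be contrasted in a small sketch. The `store` dict and its contents are stand-ins for the underlying storage layer; real systems would be talking to disk or a storage service.

```python
# Sketch contrasting the two global-cache styles from Figures 1.10 and
# 1.11. `store` stands in for the underlying storage layer.
store = {"img1": b"...bytes..."}

class ReadThroughCache:
    """Figure 1.10 style: the cache itself fetches on a miss."""
    def __init__(self, store):
        self.store, self.data = store, {}
    def get(self, key):
        if key not in self.data:
            self.data[key] = self.store[key]   # cache does the lookup
        return self.data[key]

class CacheAside:
    """Figure 1.11 style: the request node fetches on a miss."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)              # None signals a miss
    def put(self, key, value):
        self.data[key] = value

# With cache-aside, the request node handles the miss itself:
aside = CacheAside()
value = aside.get("img1")
if value is None:
    value = store["img1"]                      # node reads the origin
    aside.put("img1", value)                   # and populates the cache
```

In the first style, the miss logic lives in one place (the cache); in the second, every request node carries it, which is what lets application logic control eviction and hot spots.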
In a distributed cache (Figure 1.12), each node owns part of the cached data. If you think of a grocery store's refrigerator as a cache, then a distributed cache is like putting your food in several locations—your fridge, your cupboard, and your lunch box—convenient places to grab snacks from without a trip to the store. Typically, the cache is divided up using a consistent hashing function, so that when a request node looks for a certain piece of data, it can quickly know where to look within the distributed cache to determine whether that data is available. In this case, each node holds a small piece of the cache, and will send a request to another node for the data before going to the origin store. Therefore, one of the advantages of a distributed cache is that you can increase cache space simply by adding new nodes to the request pool.
A disadvantage of distributed caching is remedying a missing node. Some distributed caches get around this by storing multiple copies of the data on different nodes; however, you can imagine how this logic gets complicated quickly, especially when you add or remove nodes from the request layer. Even if a node disappears and part of the cache is lost, the requests will just pull from the origin store—so it isn't necessarily catastrophic!
Figure 1.12: distributed cache
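The consistent-hashing scheme mentioned above can be sketched as a minimal hash ring. This is a bare-bones illustration: real implementations add virtual nodes to smooth the key distribution, and the cache node names here are invented.

```python
import bisect
import hashlib

# Sketch: a minimal consistent-hash ring for a distributed cache. Keys
# and nodes are hashed onto the same ring; a key belongs to the first
# node clockwise from it, so adding a node remaps only nearby keys.
class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key):
        # md5 used only for a stable, well-spread placement hash.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

Every request node running this same function agrees on which cache node owns a given key, with no central coordinator—the shared-nothing property the text emphasizes.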
The great thing about caches is that they usually make things much faster (implemented correctly, of course!). The method you choose just has to be faster for more requests. However, all this caching comes at the cost of maintaining additional storage space, typically in the form of expensive memory; nothing is free. Caches are wonderful for making things generally faster, and moreover provide system functionality under high load conditions, when otherwise the service would degrade completely.
One popular open-source cache project is Memcached (http://memcached.org/), which can work both as a local cache and as a distributed cache; there are also many other options (including some language- and framework-specific packages).
Memcached is used on many large web sites, and even though it can be very powerful, it is simply an in-memory key-value store, optimized for arbitrary data storage and fast retrieval (O(1)).
Facebook uses several different types of caching to achieve its site performance (see "Facebook Caching and Performance"). At the language level (using PHP's built-in function calls), it uses $GLOBALS and APC caching, which helps make intermediate function calls and results return much faster. (Most languages have these kinds of libraries to improve web-page performance.) Facebook then uses a global cache distributed across many servers (see "Scaling Memcached at Facebook"), such that one function call accessing the cache can make many requests in parallel for data stored on different Memcached servers. This gives much higher performance and throughput for user data, and provides one central place to update data (which is important, because cache invalidation and maintaining consistency are a big challenge when you are running thousands of servers).
Now let's move on to what to do when the data isn't in the cache...
At a basic level, a proxy server is an intermediate piece of hardware or software that receives requests from clients and relays them to the backend servers. Proxies are typically used to filter requests, log requests, or sometimes transform requests (by adding or removing headers, encrypting/decrypting, compressing, and so on).
Figure 1.13 Proxy Server
Proxies are also immensely helpful when coordinating requests from multiple servers, providing opportunities to optimize request traffic from a system-wide perspective. One way a proxy can speed up data access is collapsed forwarding: multiple identical or similar requests are collapsed into a single request, and the single result is returned to each of the requesting clients.
Imagine several nodes requesting the same piece of data, and that data is not in the cache. If those requests are routed through the proxy, it can merge them into one via collapsed forwarding, so the data only needs to be read from disk once (see Figure 1.14). There is some cost to this technique: each request can see slightly higher latency, since some requests wait in order to be merged with others. In any case, it will improve performance in high-load situations, particularly when the same data is requested over and over. Collapsed forwarding is somewhat like caching, except that instead of storing the data, the proxy acts as an agent for the clients, optimizing their requests collectively.
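The core of collapsed forwarding is deduplicating requests before they hit the backend. A real proxy does this with in-flight asynchronous requests; this synchronous sketch just processes one batch of queued requests, and the client IDs and keys are illustrative.

```python
# Sketch: collapsed forwarding over one batch of queued requests.
# Each distinct key is read from the backend only once, and the
# single result is fanned out to every client that asked for it.
def collapse_and_serve(requests, backend_read):
    """requests: list of (client_id, key) pairs.
    Returns {client_id: data}, with one backend read per distinct key."""
    fetched = {}
    for _, key in requests:
        if key not in fetched:
            fetched[key] = backend_read(key)   # one disk read per key
    return {client: fetched[key] for client, key in requests}
```

If three clients request keys "a", "a", and "b", the backend sees only two reads; the more the same data is requested, the bigger the savings.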
In a LAN proxy, for example, the clients do not need their own IP addresses to connect to the Internet, and the proxy will collapse requests for the same content. It is easy to get confused here, because many proxies are also caches (a proxy is a very logical place to put a cache), but not all caches act as proxies.
Figure 1.14 merge requests using a proxy
Another way to use a proxy is not just to collapse requests for the same data, but also to collapse requests for data that is spatially close together in the origin store (consecutive on disk). Employing such a strategy maximizes data locality for the requests, which can reduce request latency. For example, say a group of nodes request parts of B: partB1, partB2, and so on. We can set up the proxy to recognize the spatial locality of the individual requests, collapse them into a single request, and return only bigB, greatly minimizing reads from the data origin (see Figure 1.15). This can make a really big difference in request time when you are randomly accessing terabytes of data! Proxies are especially helpful under high load, or when caching is limited, since they can essentially batch several requests into one.
Figure 1.15: Using a proxy to collapse requests for data that is spatially close together
It is worth noting that proxies and caches can be used together, but generally it is better to put the cache in front of the proxy, for the same reason that it is best to let the faster runners start first in a crowded marathon. Because the cache serves data from memory, and very quickly, it doesn't mind multiple requests for the same result. But if the cache were located on the other side of the proxy server, there would be additional latency with every request before it reached the cache, which could hinder performance.
If you are considering adding a proxy to your system, there are many options to weigh; Squid and Varnish have both been road tested and are widely used in many production web sites. These proxy solutions offer many optimizations for most client-server communication. Installing one of them as a reverse proxy (explained in the load balancer section below) can improve web server performance considerably, reducing the amount of work required to handle incoming client requests.
Indexes

Using an index to access your data quickly is a well-known strategy for optimizing data access performance, probably best known in the context of databases. An index trades increased storage overhead and slower writes (since you must both write the data and update the index) for faster reads.
You can apply this concept to larger data sets just as you would to a traditional relational data store. The trick with indexes is that you must carefully consider how users will access your data. In the case of data sets that are many terabytes in size but have very small payloads (say, 1 KB each), indexes are a necessity for optimizing data access. Finding a small payload in such a large data set is a real challenge, since you cannot possibly iterate over that much data in any reasonable time. Even more likely, a data set that large is spread over several (or many!) physical devices, which means you need some way to find the correct physical location of the desired data. Indexes are the best way to do this.
Figure 1.16: Index
An index can act like a table of contents that directs you to the location where your data lives. For example, say you are looking for a piece of data, part 2 of B: how will you know where to find it? If you have an index sorted by data type (data A, B, C), it will tell you the location where data B starts. Then you just have to seek to that location and read the part 2 of B you want. (See Figure 1.16.)
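The table-of-contents lookup described above can be sketched with a sorted list of (key, offset) pairs and a binary search. The keys and byte offsets below are made-up illustrations, not real storage layout.

```python
import bisect

# Toy index: sorted (key, offset) pairs, like a table of contents that
# records the byte offset at which each data block starts.
index = [("A", 0), ("B", 4096), ("C", 9216)]
keys = [k for k, _ in index]

def locate(key):
    """Return the starting offset of the block for `key`."""
    i = bisect.bisect_left(keys, key)   # binary search over sorted keys
    if i < len(keys) and keys[i] == key:
        return index[i][1]
    raise KeyError(key)

print(locate("B"))  # → 4096: seek here, then read part 2 of B
```

Because the index is sorted, the lookup is O(log n) instead of a scan over the whole data set.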
These indexes are often stored in memory, or somewhere very local to the incoming client request. Berkeley DBs (BDBs) and tree-like data structures, which store data in sorted order, are ideal for storing indexes.
Often there are many layers of indexes that serve as a map, moving you from one location to the next, and so forth, until you get the specific piece of data you want. (See Figure 1.17.)
Figure 1.17: Many layers of indexes
Indexes can also be used to create several different views of the same data. For large data sets, this is a great way to define different filters and sorts without creating many additional copies of the data.
For example, imagine that the image hosting system is actually hosting images of book pages, and the service allows clients to query the text in those images, searching all the book content about a topic, the same way search engines allow you to search HTML content. In this case, all those book images take many, many servers to store, and finding one page to render to the user can be a bit involved. First, the inverted indexes used to query for arbitrary words and word tuples need to be easily accessible; then there is the challenge of navigating to the exact page and location within that book and retrieving the right image for the result. So in this case the inverted index would map to a location (such as book B), and then B might contain an index with all the words, their locations, and the number of occurrences in each part.
An inverted index, call it Index1, might look like the following: each word or tuple of words maps to the books that contain it.

    Word(s)          Book(s)
    being awesome    Book B, Book C, Book D
    …                Book C, Book F
    …                Book B
The intermediate index would look similar, but would contain just the words, locations, and occurrence information for book B. This nested index architecture allows each index to take up less space than if all of that information had to be stored in one big inverted index.
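The inverted-index idea above can be sketched in a few lines: map each word to the set of books containing it. The book contents here are made up purely for illustration.

```python
from collections import defaultdict

# Toy corpus: book name -> full text (invented for this example).
books = {
    "Book B": "being awesome is believing",
    "Book C": "always being awesome",
}

# Build the inverted index: word -> set of books containing that word.
inverted = defaultdict(set)
for book, text in books.items():
    for word in text.split():
        inverted[word].add(book)

print(sorted(inverted["being"]))  # → ['Book B', 'Book C']
```

A production index would also record per-book positions and occurrence counts, exactly the information the nested intermediate indexes above push down a level to keep the top-level index small.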
And this is key in large-scale systems, because even compressed, these indexes can get quite big and expensive to store. Let's assume we have a lot of the books in the world in this system, 100,000,000 of them (see the Inside Google Books blog post), and that each book is only 10 pages long (to make the math easier), with 250 words per page: that gives us 250 billion words. If we assume an average of 5 characters per word, and each character takes 8 bits (or 1 byte, even though some characters take 2 bytes), so 5 bytes per word, then an index containing each word only once takes over a terabyte of storage. So you can see how quickly indexes that also contain other information, like tuples of words, data locations, and occurrence counts, can grow.
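The back-of-the-envelope estimate above can be checked directly:

```python
# Reproduce the index-size estimate from the text.
books = 100_000_000        # assumed number of books
pages_per_book = 10        # simplified page count
words_per_page = 250
bytes_per_word = 5         # 5 characters x 1 byte each

total_words = books * pages_per_book * words_per_page
index_bytes = total_words * bytes_per_word

print(total_words)          # 250 billion words
print(index_bytes / 1e12)   # ~1.25, i.e. over a terabyte
```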
Creating these intermediate indexes and representing the data in smaller sections makes big data problems tractable: the data can be spread across many servers and still be accessed quickly. Indexes are a cornerstone of information retrieval and the basis of today's modern search engines. Of course, this section only scratched the surface, and there is a lot of research on how to make indexes smaller and faster, how to include more information (like relevancy), and how to update them seamlessly. (There are some manageability challenges with race conditions, and with the sheer number of updates required to add new data or change existing data, particularly in the event that relevancy or scoring is involved.)
Being able to find your data quickly and easily is important, and indexes are an effective and simple tool to achieve this.
Load Balancers
Finally, another critical piece of any distributed system: load balancers. Load balancers are a principal part of any architecture, as their role is to distribute load across a set of nodes responsible for servicing requests. This allows multiple nodes to transparently serve the same function in a system (see Figure 1.18). Their main purpose is to handle a lot of simultaneous connections and route those connections to one of the request nodes, allowing the system to scale to service more requests by just adding nodes.
Figure 1.18: Load balancer
There are many different algorithms that can be used to service requests, including picking a random node, round robin, or even selecting the node based on certain criteria such as memory or CPU utilization. Load balancers can be implemented as software or hardware appliances. One open-source software load balancer that has received wide adoption is HAProxy.
In a distributed system, load balancers are often found at the very front of the system, such that all incoming requests are routed accordingly. In a complex distributed system, it is not uncommon for a request to be routed through multiple load balancers, as shown in Figure 1.19.
Figure 1.19: Multiple load balancers
One of the challenges with load balancers is managing user-session-specific data. On an e-commerce site, when you only have one client it is easy to let users put things in their shopping cart and persist those contents between visits (which is important, because a user is much more likely to buy a product if it is still in their cart when they return). However, if a user is routed to one node for a session, and then to a different node on their next visit, there can be inconsistencies, since the new node may not have that user's cart contents. (Wouldn't you be upset if you put a six-pack of Nongfu Spring in your cart, came back later, and found the cart empty?)
One way around this is to make sessions sticky, so that a given user is always routed to the same node, but then it is very hard to take advantage of reliability features like automatic failover. In this case, the user's shopping cart would always have its contents, but if their sticky node became unavailable there would be a special case, and the assumption that the cart contents would persist would no longer be true (though hopefully that assumption wouldn't be built into the application). Of course, this problem can also be solved using some of the other strategies and tools described in this chapter, such as services, and many not covered here (like browser caches, cookies, and URL rewriting).
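One common way to implement sticky sessions is to hash a session identifier onto a node, so the same user always lands on the same machine. A minimal sketch, with hypothetical node names:

```python
import hashlib

nodes = ["node1", "node2", "node3"]   # hypothetical back-end pool

def node_for_session(session_id):
    """Deterministically map a session id to one node in the pool."""
    digest = hashlib.sha256(session_id.encode()).digest()
    # Same session id -> same hash -> same node, on every request.
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]
```

This makes the failover problem from the text concrete: if `node2` dies and is removed from the list, the modulo changes and most sessions get remapped, losing their sticky state. (Consistent hashing is the usual refinement that limits how many sessions move.)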
If a system only has a couple of nodes, schemes like round robin DNS may make more sense, since load balancers can be expensive and add an unneeded layer of complexity. Of course, in larger systems there are all sorts of different scheduling and load-balancing algorithms, from simple ones like random choice or round robin, to more sophisticated mechanisms that take things like utilization and capacity into account. All of these algorithms distribute traffic and requests, and can provide helpful reliability tools like automatic failover or automatic removal of a bad node (such as when it becomes unresponsive). However, these advanced features can make problem diagnosis cumbersome. For example, in high-load situations, a load balancer will remove nodes that are slow or timing out (because of too many requests), but that only exacerbates the situation for the remaining nodes. Lots of monitoring becomes important in these cases, because overall system traffic and throughput may look like they are decreasing (since the nodes are serving fewer requests) while the individual nodes are becoming maxed out.
Load balancers are an easy way to expand system capacity, and like the other techniques in this article, they play an essential role in distributed system architecture. Load balancers also provide the critical function of testing the health of a node, such that if a node is unresponsive or over-loaded, it can be removed from the pool handling requests, taking advantage of the redundancy of different nodes in your system.
Queues

So far we have covered a lot of ways to read data quickly, but another important part of scaling the data layer is effective management of writes. When systems are simple, with minimal processing loads and small databases, writes can be predictably fast; however, in more complex systems writes can take an almost non-deterministically long time. For example, data may have to be written to several places on different servers or indexes, or the system could simply be under high load. In cases where writes, or any task for that matter, may take a long time, achieving performance and availability requires building asynchrony into the system; a common way to do that is with queues.
Figure 1.20: Synchronous request
Imagine a system where each client is requesting a task to be remotely serviced. Each of these clients sends their request to the server, where the server completes the tasks as quickly as possible and returns the results to their respective clients. In small systems where one server (or logical service) can service incoming clients just as fast as they come, this sort of situation should work just fine. However, when the server receives more requests than it can handle, then each client is forced to wait for the other clients' requests to complete before a response can be generated. This is an example of a synchronous request, depicted in Figure 1.20.
This kind of synchronous behavior can severely degrade client performance; the client is forced to wait, effectively performing zero work, until its request is answered. Adding additional servers to absorb system load does not solve the problem either; even with effective load balancing in place, it is extremely difficult to ensure the equal and fair distribution of work required to maximize client performance. Further, if the server handling the request is unavailable or fails, then the clients upstream will also fail. Solving this problem effectively requires abstraction between the client's request and the actual work performed to service it.
Figure 1.21: Using queues to manage requests
Enter queues. A queue is as simple as it sounds: a task comes in, is added to the queue, and then workers pick up the next task as they have the capacity to process it. (See Figure 1.21.) These tasks could represent simple writes to a database, or something as complex as generating a thumbnail preview image for a document. When a client submits task requests to a queue, they are no longer forced to wait for the results; instead, they need only an acknowledgment that the request was properly received. This acknowledgment can later serve as a reference to the results of the work when the client requires it.
Queues enable clients to work in an asynchronous manner, providing a strategic abstraction of a client's request and its response. In a synchronous system, on the other hand, there is no differentiation between request and reply, so they cannot be managed separately. In an asynchronous system the client requests a task, the service responds with a message acknowledging the task was received, and then the client can periodically check the status of the task, requesting the result once it has completed. While the client is waiting for an asynchronous request to be completed, it is free to perform other work, even making asynchronous requests of other services. The latter is an example of how queues and messages are leveraged in distributed systems.
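The enqueue/acknowledge/worker flow described above can be sketched with Python's standard-library `queue` module. The task id and payload are invented for illustration; in practice the queue would be a separate service like one of those named below.

```python
import queue
import threading

tasks = queue.Queue()
results = {}   # where finished work lands, keyed by task id

def worker():
    # Pull tasks off the queue as fast as we have capacity to process them.
    while True:
        task_id, payload = tasks.get()
        if task_id is None:                  # sentinel: shut down cleanly
            break
        results[task_id] = payload.upper()   # stand-in for the real work
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

# The client just enqueues and treats the task id as its acknowledgment;
# it is free to do other work instead of blocking on the result.
tasks.put(("job-1", "thumbnail"))
tasks.put((None, None))                      # tell the worker to stop
t.join()

print(results["job-1"])  # the client checks back later for the result
```

The separation is the point: the client's interaction ends at `put()`, and the result is retrieved independently, which is exactly what lets the system absorb slow or bursty writes.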
Queues also provide some protection from service outages and failures. For instance, it is quite easy to create a highly robust queue that can retry service requests that have failed due to transient server failures. It is preferable to use a queue to enforce quality-of-service guarantees than to expose clients directly to intermittent service outages, which would require complicated and often inconsistent client-side error handling.
Queues are fundamental in managing distributed communication between different parts of any large-scale distributed system, and there are lots of ways to implement them. There are quite a few open-source queues like RabbitMQ, ActiveMQ, and BeanstalkD, but some also use services like Zookeeper, or even data stores like Redis.
Designing efficient systems with fast access to lots of data is exciting, and there are lots of great tools that enable all kinds of new applications. This chapter covered just a few examples, barely scratching the surface, but there are many more, and there will only continue to be more innovation in this space.