Principles and introduction of CDN (Content Delivery Network)


CDN stands for Content Delivery Network. The purpose of a CDN is to add a new layer of network architecture on top of the existing Internet and publish website content to the "edge" closest to users, so that users can obtain the content they want from a nearby node. This relieves Internet congestion and improves the response speed of websites. Technically, it addresses the slow responses caused by limited network bandwidth, heavy user traffic, and the uneven geographic distribution of network nodes.
A content delivery network is an effective way to address poor Internet performance. The basic idea is to avoid the bottlenecks and weak links on the Internet that can affect the speed and stability of data transmission, so that content is delivered faster and more reliably. By placing node servers throughout the network, a CDN forms a layer of intelligent virtual network on top of the existing Internet. The CDN system redirects each user request, in real time, to the nearest service node, based on network traffic and on each node's connection status, load, distance to the user, and response time.
In essence, a content delivery network is a new way of building networks: an overlay network on top of traditional IP networks that is specifically optimized for publishing broadband rich media. In a broader sense, CDN represents a network service model based on quality and order. Put simply, a CDN is a strategically deployed end-to-end system covering four requirements: distributed storage, load balancing, redirection of network requests, and content management. Content management and global traffic management are the core of a CDN. Based on the user's proximity and on server load, a CDN ensures that content serves user requests in an extremely efficient manner. In general, content service is based on cache servers, also known as proxy caches (surrogates). A surrogate sits at the edge of the network, only "one hop" away from the user, and at the same time acts as a transparent mirror of the content provider's origin server (which is usually located in the data center of the CDN service provider). This architecture allows a CDN provider to give end users the best possible experience on behalf of its customers, the content providers, whose users will not tolerate any delay in response time. According to statistics, CDN technology can handle 70% to 95% of a website's content access traffic, reducing the load on the origin server and improving the performance and scalability of the site.
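To make the surrogate idea concrete, here is a minimal sketch in Python of an edge proxy cache: it serves a response from its local store when a fresh copy exists and only contacts the origin on a miss. The fetch_from_origin function, the TTL value, and the paths are illustrative assumptions, not part of any particular CDN product.

```python
import time

# Hypothetical origin fetch; in a real surrogate this would be an HTTP
# request to the content provider's source server.
def fetch_from_origin(path: str) -> str:
    return f"<content of {path} from origin>"

class EdgeCache:
    """A toy proxy cache (surrogate): serve locally if fresh, else go to origin."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (content, expiry timestamp)

    def get(self, path: str) -> str:
        entry = self.store.get(path)
        if entry and entry[1] > time.time():
            return entry[0]                      # cache hit: served "one hop" from the user
        content = fetch_from_origin(path)        # cache miss: fall back to the origin
        self.store[path] = (content, time.time() + self.ttl)
        return content

if __name__ == "__main__":
    cache = EdgeCache(ttl_seconds=60)
    print(cache.get("/index.html"))   # first request: fetched from origin
    print(cache.get("/index.html"))   # second request: served from the edge cache
```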
Compared with the existing content publishing model, a CDN emphasizes the role of the network in content publishing. By introducing active content management and global load balancing, a CDN is fundamentally different from the traditional model. In the traditional model, content publishing is handled entirely by the ICP's application servers, while the network is only a transparent data transmission channel; this transparency means that the network's quality guarantees stop at the packet level and cannot distinguish service quality by content object. In addition, because of the "best effort" nature of IP networks, quality is assured only by provisioning end-to-end bandwidth between the user and the application server far in excess of actual needs. Under this model, not only is a large amount of valuable backbone bandwidth consumed, but the load on the ICP's application servers also becomes very heavy and unpredictable. When hot events cause traffic to surge, local hot spots form, overloading the application servers and knocking them out of service. Another drawback of this centralized, application-server-based publishing model is the lack of personalized services and the distortion of the broadband service value chain: content providers end up undertaking publishing tasks that they should not, or cannot do well.
Across the value chain of broadband services, content providers and users sit at the two ends, with network service providers connecting them in the middle. As the Internet industry matures and business models evolve, the roles in this value chain are becoming increasingly subdivided: content and application operators, hosting service providers, backbone network service providers, access service providers, and so on. Each role must do its share of the work and perform its own duties to serve customers well, resulting in a win-win situation. From the perspective of how content and network combine, content publishing has gone through two stages: the content (application) server stage and the IDC stage. The IDC boom also gave rise to hosting service providers. However, IDC does not solve the problem of publishing content effectively: content kept at the center of the network does nothing to reduce the consumption of backbone bandwidth or to bring order to traffic on the IP network. Pushing content to the edge of the network, serving users from nearby nodes, and thereby ensuring service quality and orderly access across the whole network therefore becomes the obvious choice. This is the CDN service model. The establishment of CDN resolves the "centralize or decentralize" dilemma for content operators; it is valuable for building a healthy Internet value chain and is an indispensable website acceleration service.
Currently, large websites in China with high traffic, such as Sina and NetEase, all use CDN acceleration. Although these sites receive enormous traffic, access feels fast from wherever users connect. Without a CDN, if a website's server is hosted on the China Netcom network, access by China Telecom users is very slow; if the server is hosted on China Telecom, access by Netcom users is very slow.
A CDN adopts a distributed caching architecture (the internationally popular Web cache technology). By adding a new layer of network architecture to the existing Internet, it publishes website content to the cache servers closest to users. Using DNS load balancing, it determines where a request originates and directs the user to the nearest cache server to obtain the required content, relieving network congestion and speeding up access to the site. It is as if accelerators were placed in every region, providing fast and redundant acceleration for multiple websites at once.
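As a rough illustration of the DNS load-balancing step described above, the sketch below maps a requesting client's region to a nearby cache node and returns that node's address, the way an authoritative DNS server operated by a CDN might answer. The region table and node hostnames are made-up assumptions for illustration.

```python
# Toy model of DNS-based request routing: the CDN's DNS server answers with
# the address of the cache node "closest" to the requesting client.
# The region-to-node table below is invented for illustration.
CACHE_NODES = {
    "north": "cache-bj.example-cdn.net",
    "east":  "cache-sh.example-cdn.net",
    "south": "cache-gz.example-cdn.net",
}
DEFAULT_NODE = "cache-bj.example-cdn.net"

def resolve(client_region: str) -> str:
    """Return the cache node a DNS query from this region would be directed to."""
    return CACHE_NODES.get(client_region, DEFAULT_NODE)

print(resolve("east"))   # -> cache-sh.example-cdn.net
print(resolve("west"))   # unknown region falls back to the default node
```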

CDN features

1. Local cache acceleration: improves the access speed of enterprise websites (especially those with large numbers of images and static pages) and greatly improves their stability.
2. Mirror service: eliminates the interconnection bottleneck between different carriers, achieves acceleration across carriers, and ensures good access quality for users on different networks.
3. Remote acceleration: remote users are automatically directed, via DNS load balancing, to the fastest cache server, accelerating remote access.
4. Bandwidth optimization: remote mirror cache servers are generated automatically for the origin server, so remote users read data from the cache, reducing the bandwidth used for remote access, overall network traffic, and the load on the original web server (a rough numeric sketch follows this list).
5. Cluster-based attack resistance: widely distributed CDN nodes with intelligent redundancy between them effectively prevent hacker intrusion and reduce the impact of DDoS attacks on the website, while maintaining good service quality.
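To put item 4 into numbers, the short calculation below estimates how much traffic the origin still has to serve under an assumed cache hit ratio; the 90% hit ratio and 10 Gbps total are assumptions chosen to fall inside the 70%-95% range cited earlier, not measured values.

```python
# Rough estimate of origin offload from edge caching (illustrative numbers only).
total_traffic_gbps = 10.0   # assumed total traffic toward the site
cache_hit_ratio = 0.90      # assumed share of requests served at the edge

origin_traffic_gbps = total_traffic_gbps * (1 - cache_hit_ratio)
print(f"Origin serves only {origin_traffic_gbps:.1f} Gbps of {total_traffic_gbps:.1f} Gbps total")
# -> Origin serves only 1.0 Gbps of 10.0 Gbps total
```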

Key Technologies in CDN

(1) Content publishing: with the help of indexing, caching, stream splitting, and multicast technologies, content is published or delivered to the point of presence (POP) closest to the user.
(2) Content routing: a network-wide load-balancing technology. Through the redirection (DNS) mechanism in the content router, user requests are balanced across multiple remote POPs so that each request is answered from the nearest available content source (a toy routing sketch follows this list).
(3) Content switching: on the POP cache servers, application-layer switching, stream splitting, and redirection (ICP and WCCP) are used to balance load intelligently, based on content availability, server availability, and the user's context.
(4) Performance management: internal and external monitoring systems collect status information about network components and measure the end-to-end performance of content delivery (such as packet loss, latency, average bandwidth, startup time, and frame rate) to keep the network in its optimal operating state.
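As a minimal sketch of the content-routing decision in item (2), the code below picks a POP by combining each candidate's measured latency with its current load. The POP list, the weights, and the scoring rule are illustrative assumptions, not a description of any real content router.

```python
# Toy content-routing decision: choose the POP with the best combined score of
# latency (ms) and load (0..1). All data and weights are illustrative assumptions.
pops = [
    {"name": "pop-beijing",  "latency_ms": 12, "load": 0.80},
    {"name": "pop-shanghai", "latency_ms": 25, "load": 0.30},
    {"name": "pop-shenzhen", "latency_ms": 40, "load": 0.10},
]

def score(pop, latency_weight: float = 1.0, load_weight: float = 50.0) -> float:
    """Lower is better: latency in ms plus a penalty proportional to current load."""
    return latency_weight * pop["latency_ms"] + load_weight * pop["load"]

best = min(pops, key=score)
print(best["name"])   # -> pop-shanghai with these illustrative numbers
```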
