Linux operations interview questions you must know (1)

Source: Internet
Author: User
Tags: domain name server, dedicated IP, domain server, node server, varnish, haproxy, nginx, load balancing

One, Varnish, Nginx, and Squid: the advantages and disadvantages of each as a cache

For a dedicated cache service, the professional choices are Squid and Varnish.

Varnish

Advantages:

  • Multi-core support.

Disadvantages:

  1. No automatic fault tolerance or recovery; cached data is lost after a restart.
  2. Online capacity expansion is difficult.
  3. The cache file size on a single machine is limited to 2 GB.
  4. Clustering is not supported.

Typical scenario: small and medium systems and applications with modest concurrency requirements.

Nginx

Advantages:

  • Cross-platform.
  • Non-blocking, high-concurrency connections (thanks to the epoll model).
  • Event-driven: a worker process loops over multiple ready events, giving high concurrency with a light footprint.
  • Master/worker process structure.
  • Small memory footprint.
  • Built-in health-check function.
  • Bandwidth saving (gzip compression is supported).
  • High stability.

Disadvantages:

  1. Dynamic links with query parameters are not supported for caching.
  2. Nginx has no built-in cache-expiration or cleanup mechanism; cached files are stored on disk permanently, so caching a lot of content can fill the entire disk.
  3. Only responses with status code 200 are cached, so 301/302/404 and other backend status codes are not. If a very large number of pseudo-static links are deleted, requests keep penetrating through to the backend and put it under pressure.
  4. Nginx does not automatically choose memory or disk as the storage medium; everything is decided by configuration. (Modern operating systems have their own file-cache layer, so IO performance under heavy concurrent reads is usually not a concern.)

Typical scenario: businesses serving large volumes of web traffic, such as social networks, news, e-commerce, and virtual hosting.

Squid

Advantages:

  1. Provides cache acceleration and application-layer filtering and control.
  2. Complete and extensive caching documentation.
  3. Widely used in production environments.

Disadvantages:

  1. Complex configuration.

Typical scenario: large, long-established CDNs.

Summary: for cache acceleration (static acceleration, bandwidth saving, edge push): Varnish > Squid > Nginx.

For reverse proxying (acceleration of dynamic content, hiding the master node): Nginx > Varnish > Squid.
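Nginx's lack of built-in cache expiration (point 2 of its disadvantages above) is exactly the mechanism a purpose-built cache like Varnish or Squid provides. A minimal TTL-cache sketch in Python illustrates the idea; this is a toy model, not any product's actual implementation:

```python
import time

class TTLCache:
    """Toy cache with per-entry expiry, illustrating the automatic
    expiration that nginx's proxy cache lacks out of the box."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None          # miss: the request would go to the backend
        value, expires = entry
        if time.time() > expires:
            del self.store[key]  # expired: evict and treat as a miss
            return None
        return value

cache = TTLCache(ttl_seconds=0.1)
cache.set("/index.html", "<html>hello</html>")
print(cache.get("/index.html"))  # fresh hit
time.sleep(0.2)
print(cache.get("/index.html"))  # expired, evicted -> None
```

Without the expiry check in `get`, entries would accumulate forever, which is the disk-filling behavior described for Nginx above.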

Two, how a CDN works internally

A: A CDN directs a user's request, based on the user's geographic location, bandwidth, current network load, and the content requested, to the cache node closest to that user. This relieves congestion on the Internet and improves response speed when users visit the site.
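The node-selection idea can be sketched as picking the candidate cache node with the lowest cost for a given user. The scoring below is purely illustrative (region, RTT, and load fields and their weights are made up for the example; real CDNs use DNS-based or anycast scheduling with far richer metrics):

```python
def pick_cdn_node(nodes, user_region):
    """Pick the 'closest' cache node for a user.
    nodes: list of dicts with hypothetical 'region', 'rtt_ms', 'load' fields.
    Scoring: prefer same-region nodes, then low RTT and low load."""
    def score(node):
        region_penalty = 0 if node["region"] == user_region else 100
        return region_penalty + node["rtt_ms"] + node["load"] * 10
    return min(nodes, key=score)

# Example node table (values invented for illustration)
nodes = [
    {"name": "bj-01", "region": "north", "rtt_ms": 8,  "load": 3},
    {"name": "sh-01", "region": "east",  "rtt_ms": 25, "load": 1},
    {"name": "gz-01", "region": "south", "rtt_ms": 40, "load": 0},
]
print(pick_cdn_node(nodes, "east")["name"])   # -> sh-01
print(pick_cdn_node(nodes, "north")["name"])  # -> bj-01
```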

Three, the detailed process of a DNS query

1. When www.qq.com is entered in the browser, the operating system first checks whether the local hosts file contains a mapping for this name. If so, that IP address mapping is used and resolution is complete.

2. If the hosts file has no mapping for the domain, the local DNS resolver cache is checked; if a mapping exists there, it is returned directly and resolution is complete.

3. If neither the hosts file nor the local resolver cache has a mapping, the query goes to the preferred DNS server (the local DNS server) configured in the TCP/IP settings. If that server receives the query and the domain is contained in its locally configured zone resources, the result is returned to the client and resolution is complete; this answer is authoritative.

4. If the domain is not resolved by the local DNS server's zone but the server has cached the mapping, that cached IP address is returned; this answer is non-authoritative.

5. If both the local DNS server's zone files and its cache fail to resolve the name, the query proceeds according to the local DNS server's settings (whether forwarders are configured). If forwarding is not used, the local DNS server sends the request to one of the 13 root DNS servers. The root server determines who is authorized to manage the top-level domain (.com) and returns the IP of a server responsible for that TLD. After receiving that IP, the local DNS server contacts the server responsible for the .com domain; if that server cannot resolve the name itself, it returns the address of the next-level DNS server managing qq.com. The local DNS server then queries the qq.com server, repeating this process until the www.qq.com host record is found.

6. If forwarding mode is used, the DNS server forwards the request to the next-level DNS server, which performs the resolution; if that server cannot resolve the name either, it in turn either queries the root servers or forwards the request further up, and so on. Whether resolution happens locally, via a forwarder, or via root hints, the result is always returned to the local DNS server first, which then returns it to the client.

Note: the query from the client to the local DNS server is recursive, while the queries between DNS servers are iterative.
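The client-side lookup order in steps 1–4 (hosts file, then resolver cache, then the local DNS server) can be sketched as a chain of lookups that fall through to the next source. This is an illustrative model only, not a real resolver, and the sample IP values are invented:

```python
def resolve(name, hosts, resolver_cache, local_dns):
    """Illustrative client-side resolution order:
    hosts file -> local resolver cache -> local DNS server.
    Each argument is a plain dict mapping names to IPs."""
    for source, table in (("hosts", hosts),
                          ("cache", resolver_cache),
                          ("local-dns", local_dns)):
        ip = table.get(name)
        if ip is not None:
            return source, ip
    # At this point a real local DNS server would iterate
    # from the root servers (step 5) or use a forwarder (step 6).
    return None, None

hosts = {"dev.local": "127.0.0.1"}          # example entries, values invented
resolver_cache = {"www.qq.com": "203.0.113.10"}
local_dns = {"example.com": "93.184.216.34"}

print(resolve("dev.local", hosts, resolver_cache, local_dns))   # from hosts
print(resolve("www.qq.com", hosts, resolver_cache, local_dns))  # from cache
```

Note how a hosts-file entry shadows everything else, which is why editing hosts is a common way to override DNS for testing.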

Four, the three working modes of LVS

A cluster's load-balancing scheduling can distribute traffic based on IP, port, content, and so on, and IP-based load scheduling is the most efficient. Within IP-based load balancing there are three working modes: network address translation, IP tunneling, and direct routing.

[Figure: Lvs.png — the three LVS working modes]

1. Network address translation (NAT mode): similar to a firewalled private-network layout, the load scheduler acts as the gateway for all server nodes, serving both as the clients' entry point and as each node's exit for replies to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; this mode is more secure than the other two.

2. IP tunneling (TUN mode): an open network structure in which the load scheduler serves only as the clients' entry point; each node responds to clients directly over its own Internet connection rather than back through the scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.

3. Direct routing (DR mode): a semi-open structure similar to TUN mode, except that the nodes are not scattered but sit on the same physical network as the scheduler. The load scheduler connects to each node server over the local network, and no dedicated IP tunnels are required.

Summary: of the three working modes, NAT needs only one public IP address, which makes it the easiest to use and also quite secure; many hardware load-balancing devices work this way. By comparison, DR and TUN mode offer stronger load capacity and a wider range of applications, but the security of the nodes is slightly weaker.
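NAT mode's two-way rewriting can be illustrated with a toy scheduler: inbound packets have their destination rewritten from the virtual IP to a chosen real server, and replies have their source rewritten back to the VIP. This is a simulation of the idea only, not real LVS/ipvsadm behavior, and all addresses are example values:

```python
import itertools

VIP = "10.0.0.1"                                  # public virtual IP on the scheduler
REAL_SERVERS = ["192.168.1.11", "192.168.1.12"]   # private node addresses
_rr = itertools.cycle(REAL_SERVERS)               # simple round-robin scheduling

def inbound(packet):
    """Client -> scheduler: rewrite the destination VIP to a real server (DNAT)."""
    assert packet["dst"] == VIP
    return {**packet, "dst": next(_rr)}

def outbound(packet):
    """Real server -> scheduler -> client: rewrite the source back to the VIP (SNAT).
    This is why, in NAT mode, all reply traffic must pass back through the scheduler."""
    return {**packet, "src": VIP}

req = {"src": "203.0.113.5", "dst": VIP, "payload": "GET /"}
fwd = inbound(req)
reply = outbound({"src": fwd["dst"], "dst": req["src"], "payload": "200 OK"})
print(fwd["dst"], reply["src"])  # chosen real-server IP, then the VIP
```

In DR and TUN mode the reply skips the `outbound` step entirely and goes straight to the client, which is where their higher load capacity comes from.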

Five, the advantages and disadvantages of LVS, Nginx, and Haproxy





Nginx

Advantages:

1. Nginx works at layer 7 of the network stack, so it can apply traffic-splitting policies to HTTP applications, for example by domain name or directory structure. Its regular-expression rules are more powerful and flexible than Haproxy's, which is one of the main reasons for its widespread popularity and why Nginx is chosen over LVS in such cases.

2. Nginx depends very little on network stability; in theory, if you can ping a host, load balancing will work. This is one of its advantages. LVS, by contrast, depends considerably more on network stability, as I have experienced first-hand.

3. Nginx is relatively simple to install and configure, and easy to test, since errors are generally printed to its logs. Configuring and testing LVS takes considerably longer.

4. Nginx can sustain high load with good stability; on ordinary hardware it can typically support tens of thousands of concurrent connections, although its load capacity is smaller than that of LVS.

5. Nginx can detect internal server faults through the port, for example via the status code or timeout of a page returned by the server, and will resubmit a failed request to another node. Its drawback is that detection by URL is not supported. For example, if a user is uploading a file and the node handling the upload fails mid-upload, Nginx will switch the upload to another server for reprocessing, whereas LVS would simply drop the connection; with a large or important file, users may be unhappy about that.

6. Nginx is not only a good load balancer and reverse proxy, it is also a powerful web application server. LNMP has been a very popular web architecture in recent years and shows good stability in high-traffic environments.

7. Nginx is increasingly mature as a web reverse-proxy cache and is faster than the traditional Squid server; it is worth considering as a reverse-proxy accelerator.

8. As a mid-tier reverse proxy, Nginx has essentially no rival; the only comparable option is lighttpd, which does not yet match Nginx's full feature set, has configuration that is less clear and readable, and has a far less active community.

9. Nginx also serves static pages and images extremely well; in this role its performance is unmatched. The Nginx community is very active, with many third-party modules.

LVS

Advantages:

1. Strong load resistance: LVS works at layer 4 and only distributes traffic, generating none of its own. This also makes it the highest-performing of the load-balancing software options, with low memory and CPU consumption.

2. Minimal configuration. This is both a disadvantage and an advantage: because there is not much to configure, little hands-on attention is needed, which greatly reduces the chance of human error.

3. Stable operation, thanks to its strong load resistance and its complete active-standby (HA) solutions, such as LVS+Keepalived; in actual projects the most common deployment is LVS/DR+Keepalived.

4. No traffic through the balancer: LVS only distributes requests, and response traffic does not flow back out through it, which guarantees that the balancer's IO performance is not affected by heavy traffic.

5. Wide range of applications: because LVS works at layer 4, it can load-balance almost any application, including HTTP, databases, online chat rooms, and so on.

Haproxy

Advantages:

1. Haproxy also supports virtual hosts.

2. Haproxy's strengths make up for some of Nginx's shortcomings, such as support for session persistence and cookie-based routing, and support for checking backend server health by fetching a specified URL.

3. Haproxy, like LVS, is purely load-balancing software; purely in terms of efficiency, Haproxy beats Nginx in load-balancing speed and in concurrent processing.

4. Haproxy supports load-balanced forwarding of the TCP protocol, so it can balance MySQL reads, performing detection and load balancing against backend MySQL nodes. (For MySQL master-slave load balancing, LVS+Keepalived can be used.)

5. Haproxy offers a great many load-balancing strategies; it currently has the following eight algorithms:

① roundrobin: simple round-robin; essentially every load balancer has this.

② static-rr: distribution by weight; worth noting.

③ leastconn: send to the server handling the fewest connections; worth noting.

④ source: schedule by the request's source IP, similar to Nginx's ip_hash mechanism; it can be used as one way to solve the session problem and is worth noting.

⑤ uri: schedule according to the request URI.

⑥ url_param: schedule according to a URL parameter of the request; 'balance url_param' requires a URL parameter name.

⑦ hdr(name): lock each HTTP request according to the given HTTP request header.

⑧ rdp-cookie(name): lock and hash each TCP request according to the given cookie name.
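Three of these strategies, roundrobin, leastconn, and source, are simple enough to sketch in a few lines. These are illustrative re-implementations of the scheduling ideas, not HAProxy's actual code, and the server names and connection counts are invented:

```python
import itertools
import zlib

servers = ["s1", "s2", "s3"]

# roundrobin: hand out servers in turn
_rr = itertools.cycle(servers)
def roundrobin():
    return next(_rr)

# leastconn: pick the server with the fewest active connections
connections = {"s1": 5, "s2": 2, "s3": 7}  # example counts
def leastconn():
    return min(servers, key=lambda s: connections[s])

# source: hash the client IP so the same client always lands on the same
# server (similar to nginx's ip_hash; a naive form of session persistence)
def source(client_ip):
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

print(leastconn())           # -> s2 (fewest active connections)
print(source("203.0.113.7")) # stable for a given client IP
```

The source/ip_hash approach keeps sessions sticky without shared session storage, at the cost of uneven distribution when many clients sit behind one NAT address.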





Disadvantages:

Nginx:

1. Nginx supports only the HTTP, HTTPS, and email protocols, so its range of application is narrower; this is its drawback.

2. Health checks of backend servers are supported only by port, not by URL. Direct session persistence is not supported, though this can be worked around with ip_hash.

LVS:

1. The software itself cannot process regular expressions and cannot separate dynamic from static content; many sites today have strong needs in this area, which is where Nginx/Haproxy+Keepalived has the advantage.

2. If the site is a relatively large application, an LVS/DR+Keepalived deployment is fairly complex, especially when Windows Server machines sit behind it; implementation, configuration, and maintenance are all more involved. Nginx/Haproxy+Keepalived is much simpler by comparison.
Haproxy:

1. Poor extensibility.


This article is from the "Hello Sunshine" blog; please keep this source: http://hexiaoshuai.blog.51cto.com/12156333/1916464
