Server Load Balancer Principles and Practices, Part 3: Basic Concepts of Server Load Balancer - Network Basics










Series Articles:

Server Load Balancer, Article 1: Requirements of Server Load Balancer

Server Load Balancer, Article 2: Basic Concepts of Server Load Balancer - Network Basics

Server Load Balancer, Article 3: Basic Concepts of Server Load Balancer - Server Group Using Server Load Balancer

Server Load Balancer, Article 4: Basic Concepts of Server Load Balancer - Data Packet Flow During Server Load Balancing

Server Load Balancer, Article 5: Basic Concepts of Server Load Balancer - Health Check

Server Load Balancer, Article 6: Basic Concepts of Server Load Balancer - Network Address Translation (NAT)

Server Load Balancer, Article 7: Basic Concepts of Server Load Balancer - Direct Server Return

Server Load Balancer, Article 8: Server Load Balancer Advanced Technology - Session Persistence (Part 1)

Server Load Balancer, Article 9: Server Load Balancer Advanced Technology - Session Persistence (Part 2)

Server Load Balancer, Article 10: Server Load Balancer Advanced Technology - Session Persistence (Part 3)



Load balancing is not a new concept in the server world. Many cluster technologies have been invented for joint computing, but they were applied only on a few proprietary systems. Even so, server load balancing has become a powerful solution for mainstream applications, addressing the scalability, high availability, security, and manageability of server clusters.



 



First, load balancing significantly improves the scalability of applications and server clusters by distributing the load among multiple servers.



 



Second, load balancing improves the availability of the application system, because traffic can be directed to a backup server when a server or application fails.



 



Third, load balancing improves manageability in several ways. It allows network or system administrators to easily migrate an application program from one server to another, or to add more servers to run the application. Finally, and importantly, load balancing improves the security of applications and server systems by protecting server clusters from various types of DoS attacks.



 



The advent of the Internet has brought many new applications and services: Web, DNS, FTP, SMTP, and so on. Fortunately, Internet traffic is very easy to differentiate and process. Because the Internet has a large number of clients requesting specific services, and each client can be identified by its IP address, it is feasible to distribute the load across multiple servers that run the same software and provide the same service.



 



This article introduces the basic concepts of load balancing and covers the background knowledge that helps readers understand how a load balancer works. Load balancing can be applied to many different application systems, but load balancer products are most commonly used to manage Web server clusters, so we take Web servers as the example in this discussion. All of these concepts apply equally to other applications.



 



Network Basics



 



First, let's review the basics of layer-2/layer-3 switching, TCP, and Web servers. Before introducing the load balancer, let's look at how a page request reaches a Web server and how the response is returned.



 



1. Introduction to Switching Technology



 



Now we will briefly introduce the working principles of layer-2 and layer-3 switching to provide the background needed to understand how load balancing works.



 



A MAC (Media Access Control) address uniquely identifies a piece of network hardware on an Ethernet. An IP (Internet Protocol) address uniquely identifies a host on an internetwork. The port on which a switch receives a data packet is called the ingress port, and the port on which it sends the packet out is called the egress port. A switch receives a packet on an ingress port, selects an egress port, and forwards the packet. Switches differ in what information they use to select the egress port, and some switches modify certain fields of the packet before sending it out.

A layer-2 switch selects the egress port based on layer-2 header information, such as the destination MAC address, and forwards the packet. A layer-3 switch, in contrast, makes its decision based on layer-3 header information, such as the destination IP address. Before forwarding a packet, a layer-3 switch changes the destination MAC address to the MAC address of the next hop or of the destination itself. A layer-3 switch is also known as a router, and layer-3 switching is also known as routing.

A load balancer examines layer-4, and even layer-5 to layer-7, information in the packet to make its switching decisions, so it is called a layer-4 to layer-7 switch. Because a load balancer also handles layer-2 and layer-3 switching as part of its job, it is also called a layer-2 to layer-7 switch.
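
To make the layer-2 forwarding idea concrete, here is a toy sketch in Python (not from the original article) of how a switch could learn which port each MAC address lives on and then pick an egress port; the class name, port numbers, and MAC addresses are purely illustrative.

    # A toy model of layer-2 forwarding: learn the source MAC's port,
    # forward to the known port for the destination MAC, otherwise flood.
    class ToyL2Switch:
        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.mac_table = {}                 # MAC address -> port number

        def receive(self, in_port, src_mac, dst_mac):
            """Return the list of ports the frame is sent out of."""
            self.mac_table[src_mac] = in_port   # learn where the source is reachable
            if dst_mac in self.mac_table:       # known destination: single egress port
                return [self.mac_table[dst_mac]]
            # unknown destination: flood to every port except the ingress port
            return [p for p in range(self.num_ports) if p != in_port]

    switch = ToyL2Switch(num_ports=4)
    print(switch.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # flooded: [1, 2, 3]
    print(switch.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [0]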



 



To make networks easier to manage, a network is subdivided into many subnets. A subnet usually covers the computers in one building or on one floor, or a group of servers in a data center. All communication within a subnet can be handled by layer-2 switching. The Address Resolution Protocol (ARP), defined in RFC 826, is a very important layer-2 protocol. All Ethernet devices use ARP to learn the mapping between MAC addresses and IP addresses. A network device can announce its MAC address and IP address through ARP to let other devices in the same subnet know of its existence. Because such a message is broadcast to every device in the subnet, the subnet is also called a broadcast domain. With ARP, every device can learn about the other devices in its subnet. For communication between subnets, a gateway device, such as a layer-3 switch or router, is required: a computer must be connected to a subnet and configured with a default gateway in order to communicate with computers in other subnets.
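
As a hedged illustration of ARP in practice (not part of the original article), the sketch below broadcasts a who-has request on the local subnet and prints the IP-to-MAC mapping from any reply. It assumes the third-party scapy package is installed, that the script runs with root privileges, and a hypothetical target address of 192.168.1.1.

    # Broadcast an ARP who-has request and print the answering device's MAC address.
    from scapy.all import ARP, Ether, srp

    target_ip = "192.168.1.1"    # hypothetical address of a device on the same subnet

    # An Ethernet broadcast frame carrying the ARP request
    frame = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip)
    answered, _ = srp(frame, timeout=2, verbose=False)

    for _, reply in answered:
        print(reply.psrc, "is at", reply.hwsrc)   # the IP-to-MAC mapping learned via ARP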



 



2. TCP Overview



 



The Transmission Control Protocol (TCP), defined in RFC 793, is currently the most common protocol for reliable data exchange between two hosts. TCP is a stateful protocol: a connection is established, data is exchanged, and the connection is then terminated. TCP uses sequence numbers and checksums to ensure that data arrives in order and is received completely, so higher-level applications running on top of TCP do not need to worry about data integrity. As the OSI model in 1.1 shows, TCP is a layer-4 protocol.



 



Here is how TCP works. A three-way handshake is required to establish a TCP connection. In this example, the client wants to exchange data with the server, so the client sends a SYN packet to the server. The important information in the SYN packet includes the source IP address, source port, destination IP address, and destination port. The source IP address is the client's IP address, and the source port is randomly generated by the client. The destination IP address is the server's IP address, and the destination port is the port on which the application listens on the server. Standard applications such as Web servers and File Transfer Protocol (FTP) servers use ports 80 and 21 respectively. Other applications may use other ports, but a client must know the application's port number in order to access it. The SYN packet also contains a starting sequence number, which the client increases for each new connection to the server. When the server receives the SYN packet, it responds with a SYN ACK packet containing the server's own starting sequence number. The client then responds with an ACK packet, and the connection is established. The client and server can now exchange data over the connection.

Each TCP connection is uniquely identified by four values: source IP address, source port number, destination IP address, and destination port number. These four values are the same in every packet of a given TCP connection. Note that the source IP address and source port of a packet flowing from client to server become the destination IP address and destination port of a packet flowing from server to client; "source" always refers to the host that sends the packet.

Once the client and server have finished exchanging data, the client sends a FIN packet and the server replies with a FIN ACK packet, ending the TCP connection. While a session is in progress, either the client or the server can also send a TCP RESET to the other party, terminating the connection; if more data still needs to be exchanged, a new connection must be established.
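
The following minimal sketch (not part of the original article) uses Python's standard socket module to open a TCP connection to a local test server; the operating system performs the SYN / SYN ACK / ACK handshake, and getsockname() / getpeername() expose the four values that uniquely identify the connection. Port 8080 is an arbitrary choice for this demo.

    import socket
    import threading

    def run_server(server_sock):
        conn, addr = server_sock.accept()      # completes the three-way handshake
        print("server sees peer (client IP, client port):", addr)
        conn.sendall(b"hello")                 # exchange some data over the connection
        conn.close()                           # triggers the FIN / FIN ACK teardown

    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 8080))      # the application's listening port
    server_sock.listen(1)
    threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

    client_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client_sock.connect(("127.0.0.1", 8080))   # client sends SYN; the OS picks a random source port
    print("client side (source IP, source port):", client_sock.getsockname())
    print("server side (destination IP, destination port):", client_sock.getpeername())
    print("data received:", client_sock.recv(1024))
    client_sock.close()                        # sends FIN to end the connection
    server_sock.close()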






UDP is another popular layer-4 protocol, often used by applications such as streaming media. Unlike TCP, UDP is a stateless protocol: no session needs to be established or torn down to exchange data. UDP does not provide the reliable transmission guarantees that TCP does, so applications running over UDP must themselves ensure reliable data delivery if they need it. We can still regard the data exchanged between two hosts over UDP as a session, although there is no explicit beginning or end to that session. Like a TCP connection, a UDP session can be uniquely identified by the source IP address, source port number, destination IP address, and destination port number.
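
For contrast, here is a minimal UDP sketch (again not from the original article) using Python's socket module: no handshake takes place, and the sender simply transmits a datagram to the receiver's IP address and port. Port 9090 is an arbitrary choice.

    import socket

    # No connection is established; each datagram just carries the four identifying
    # values (source IP/port, destination IP/port) with no connection state behind them.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9090))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"ping", ("127.0.0.1", 9090))   # just send; no SYN, no session setup

    data, addr = receiver.recvfrom(1024)          # addr is the sender's (IP, port)
    print("received", data, "from", addr)

    sender.close()
    receiver.close()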

