How to configure cross-segment access in NLB multicast mode

Source: Internet
Author: User


Some time ago, heavy access to a large set of shared files called for two servers to improve concurrent throughput, configured with Windows NLB. There are plenty of configuration guides online. After setup in multicast mode, the cluster could be reached locally but not from other network segments. Further research suggested that unicast mode is in some cases better suited to cross-segment access, so the NLB working mode was switched to unicast. The result: some hosts could reach the cluster and some could not.

From the working principle of unicast mode, the NIC's physical address is replaced with a cluster MAC address beginning with 02-BF. Inspecting the MAC forwarding table on the layer-3 switch showed this 02-BF address learned on only one physical port; the port of the other cluster host had no entry for it. Because the switch cannot bind one MAC address to two ports, the entry was deleted from the forwarding table so that the MAC was no longer tied to a specific port. After that, all computers could reach the cluster and the immediate problem was solved.

However, this introduced port flooding. When a computer sends a large volume of data to one server in the cluster, the destination MAC, which is shared by every host added to the cluster, is absent from the switch's address forwarding table, so every frame addressed to it is flooded to all ports in the current VLAN.

(Diagram of port flooding)
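The inspection and cleanup described above can be sketched with H3C Comware-style commands. This is a hypothetical sketch only: the MAC address and VLAN number are placeholders, and exact syntax varies between Comware versions.

```
display mac-address | include 02bf       # find the port(s) where the 02-BF cluster MAC was learned
undo mac-address 02bf-c0a8-0164 vlan 10  # delete the entry so the MAC is no longer bound to one port
```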

From this we can see that traffic flowing from port 18 to port 19 also appeared on other ports. With a large data volume the flooded traffic exceeded 100 Mbps, putting real pressure on network access. One workaround is to hang a layer-2 switch below the current layer-3 switch so that all cluster traffic is funneled through the single port where the layer-2 switch connects; but that port then becomes the access bottleneck, and NLB loses its point. Another is to configure the switch to allow the virtual MAC address to appear on the NLB server ports, but that requires very recent switch software; the H3C S5500-28-EI used here does not support it, and a software upgrade was not an option. It seemed another method was needed.

Back to NLB multicast mode: enable IGMP multicast support during NLB configuration and configure igmp-snooping on the switch, so that the multicast traffic appears only on the ports where the NLB hosts sit. But the cluster is still reachable only within the same network segment. How can it be reached across segments?

A computer that wants to reach the NLB cluster IP sends an ARP broadcast. The cluster answers with the MAC address corresponding to that IP, which is a multicast address beginning with 01-00. Within the local segment the requesting computer receives this reply directly, which is why the cluster is reachable there but not across segments: the layer-3 switch will not accept an ARP reply that maps a unicast IP to a multicast MAC.

The fix is to make computers on other segments aware of the cluster address. One suggested solution is to bind the cluster IP to the multicast MAC on the switch, but attempting that reported the MAC address as invalid, and the binding failed. The switch manual, however, mentions an option to skip ARP entry checking. After disabling the ARP check on the switch, the cluster IP could be bound to its multicast MAC, and a computer on another network segment could then reach the cluster.

A few computers still could not. A careful look at the network showed they were not attached to the current layer-3 switch but to a second one. Logging on to that layer-3 switch and pinging the cluster IP directly also failed, so it too was enforcing the ARP check. After disabling ARP checking there as well (no static binding even turned out to be necessary), the ping succeeded, and the computers behind it could reach the cluster. All problems were solved.
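The working configuration above can be sketched in H3C Comware-style commands. This is a sketch only: the VLAN number, cluster IP, and MAC address are placeholders, and syntax differs between Comware versions.

```
system-view
 igmp-snooping                            # enable IGMP snooping globally
 quit
 vlan 10
  igmp-snooping enable                    # snoop in the VLAN that carries the cluster
  quit
 undo arp check enable                    # accept ARP entries whose MAC is a multicast address
 arp static 192.168.1.100 0100-5e7f-0164  # bind the cluster IP to its multicast MAC
```

With IGMP multicast, NLB derives the MAC 01-00-5E-7F-yy-zz from the last two octets of the cluster IP, so the placeholder IP 192.168.1.100 maps to 0100-5e7f-0164 here.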

Afterward, the cluster runs in multicast mode. The figure shows that the port flooding is gone and network traffic is back to normal.

PS: the command used on the H3C switches:

undo arp check enable # skip ARP entry checking; after this statement runs on both layer-3 switches, the cluster IP can be bound to (or learned with) its multicast MAC.
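To confirm the result, the switch's ARP table can be checked afterward. This is a hedged sketch with a placeholder cluster IP; output format varies by Comware version.

```
display arp | include 192.168.1.100   # the entry should now carry the 0100-5e7f multicast MAC
```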

Summary:

NLB in unicast mode is simple to configure and reachable across network segments, but it can cause port flooding, so this mode is generally avoided.

NLB in multicast mode requires switch configuration; it is best to enable IGMP support, so that the multicast traffic is delivered only to the ports of the cluster members and the network load stays low.
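The modes compared above each derive the shared cluster MAC from the cluster IP in a documented way, which explains both the 02-BF address seen in unicast mode and the 01-00 multicast address seen in the ARP reply. A small sketch of the conventions (the cluster IP is a placeholder):

```python
def nlb_cluster_mac(cluster_ip: str, mode: str = "unicast") -> str:
    """Return the cluster MAC that Windows NLB derives from the cluster IP.

    unicast:        02-BF + all four IP octets (shared by every cluster host,
                    which is why a switch cannot pin it to one port)
    multicast:      03-BF + all four IP octets
    igmp-multicast: 01-00-5E-7F + the last two IP octets
    """
    o = [int(x) for x in cluster_ip.split(".")]
    mac_bytes = {
        "unicast": [0x02, 0xBF] + o,
        "multicast": [0x03, 0xBF] + o,
        "igmp-multicast": [0x01, 0x00, 0x5E, 0x7F, o[2], o[3]],
    }[mode]
    return "-".join(f"{b:02x}" for b in mac_bytes)

print(nlb_cluster_mac("192.168.1.100", "unicast"))         # 02-bf-c0-a8-01-64
print(nlb_cluster_mac("192.168.1.100", "igmp-multicast"))  # 01-00-5e-7f-01-64
```

The IGMP-multicast form is what the static ARP binding on the switch must point at.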

This exercise meant studying Microsoft NLB in depth, consulting many sources, and finally finding the answer in the switch manual. When troubleshooting, identify the crux of the problem rather than trying fixes at random to see what sticks: understand the working principle and let theory guide practice, and the problem can be solved.

