A proper understanding of the SAN Inter-Switch Link (ISL) protocol helps users avoid the performance impact of unreasonable configurations. ISLs appear in several forms in a SAN topology, and deploying them correctly is critical to the SAN application.
First, it is important to distinguish the physical data links in the SAN from the data streams that flow over them. Fibre Channel is a full-duplex protocol: each connection can be viewed as a single link, or as a transmit fiber on which data is written and a receive fiber from which data is read. For an ISL, however, one switch's transmit fiber is the other switch's receive fiber, so whether traffic is being sent or received, the same data streams travel over the two physical links between the switches. For example, one physical link (link A) carries data from switch X to switch Y, and the other physical link (link B) carries data from switch Y to switch X.
A server can read data from a storage device or write data to it. Both reads and writes are data streams, and we usually assume that the read stream occupies one physical link while the write stream travels over the other. In fact, either stream can be carried on either physical link.
Standard and network ISL
There are two forms of ISL. The first is called a standard ISL. This form of ISL completely isolates different data streams onto separate links. For example, when a server writes data to a storage device, the write stream occupies only link A and never link B; conversely, when a server reads data from a storage device, the read stream occupies only link B. This holds no matter how many servers or storage devices are connected to the SAN: read and write streams never occupy the same physical link. Figure 1 shows a standard ISL.
Note that if there are 50 servers, of which 49 are writing data to storage and the 50th is reading, the reading server will not be interfered with. The link shared by the 49 writers may become a bottleneck, but the link used by the 50th server still provides it with full bandwidth.
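The 50-server scenario can be sketched numerically. This is a minimal model, not a simulation of real Fibre Channel behavior: server counts and the per-server demand of 10% of link bandwidth are illustrative assumptions.

```python
# Sketch: with a standard ISL, write traffic and read traffic are pinned to
# separate physical links, so a reader is isolated from write congestion.
# The 10%-per-server demand figure is a hypothetical illustration.

LINK_BW = 1.0  # normalized bandwidth of one physical link

writers = 49 * [0.1]  # 49 servers, each writing at 10% of link bandwidth
readers = 1 * [0.1]   # the 50th server, reading

write_link_load = sum(writers)  # offered load on link A (writes only)
read_link_load = sum(readers)   # offered load on link B (reads only)

print(write_link_load > LINK_BW)   # -> True  (write link is oversubscribed)
print(read_link_load <= LINK_BW)   # -> True  (reader keeps full bandwidth)
```

Even though the write link is offered nearly five times its capacity, the read link's load is unchanged, which is exactly the isolation property of a standard ISL.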
The second form of ISL is called a network ISL. In this form, read and write data streams travel over the same physical link (Figure 2).
If server 1 writes data to storage B while server 2 reads data from storage A, the two streams pass through the same physical link in ISL2. ISL1 and ISL3 are standard ISLs; ISL2 is a network ISL. Now if 49 hosts connected to switch A all write data to storage B, the physical link between switch B and switch C becomes congested, affecting server 2's reads from storage A.
With a network ISL, read and write streams can share the same physical link. As a consequence, they can also significantly degrade each other's performance, and when a physical link saturates it is difficult to determine why. In Figure 2, suppose servers 1 and 2 each need 60% of the link bandwidth. ISL1 and ISL3 easily meet that requirement, but ISL2 becomes congested. If you check port utilization on the HBAs and storage devices, you will find that no single port is busy enough to affect performance. Unless you know a great deal about how data flows through your SAN, it is hard to find where the problem lies.
Network ISLs also make it harder to predict when a physical link will become congested, because users must consider both directions of data flow rather than a single stream on a single physical link. With standard ISLs, users can measure the peak of the write stream and provision enough ISLs to prevent saturation. With network ISLs, users must also account for the read stream: what matters is not just the peak write time, but the peak of the combined read and write load over the whole day. Suppose the write stream peaks at 80% of the bandwidth at 9 a.m., when reads occupy less than 10%; there is no problem. But if at 2 p.m. writes consume 50% of the bandwidth while reads occupy 60%, there is a problem.
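The combined-peak check described above can be sketched as a small utility. The hourly utilization samples below are hypothetical, chosen to match the 9 a.m. and 2 p.m. examples in the text (values are fractions of link bandwidth).

```python
# Sketch: detect the hours at which a network ISL saturates, i.e. when the
# sum of read and write utilization on the shared link exceeds 100%.

def saturated_hours(samples):
    """Return the hours whose combined read + write load exceeds the link."""
    return [hour for hour, (read, write) in samples.items()
            if read + write > 1.0]

# Hypothetical hourly (read, write) utilization on the shared link:
samples = {
    "09:00": (0.10, 0.80),  # write peak: 80% write + 10% read -> fine
    "14:00": (0.60, 0.50),  # 50% write + 60% read -> 110%, congested
}

print(saturated_hours(samples))  # -> ['14:00']
```

Note that neither hour would look problematic if reads and writes were tracked separately; only the sum reveals the 2 p.m. congestion.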
The impact of ISL on SAN topologies
There are three basic SAN topologies: flat, mesh, and core-edge. A flat topology has no ISLs: it consists of one or more switches connecting servers and storage devices, and a server can access only the storage devices attached to the same switch. There is no any-to-any connectivity.
A mesh topology connects the switches of a flat topology to one another to provide any-to-any connectivity. This is useful when the storage attached to one switch is full but storage attached to another switch is still available: the ISLs provide any-to-any connections, so no new storage devices are required.
A mesh topology performs best when each server is connected to the same switch as the storage it accesses most often. In that case traffic between switches is reduced, and with it the ISL latency. ISLs are used only to provide any-to-any connectivity when necessary; they can also be used to share resources such as tape drives.
The latency added by a cascaded switch hop is small, less than 2 microseconds, which is no problem for 99.9% of applications. For applications that cannot tolerate even this delay, a flat topology is best.
Core-edge is a hierarchical topology in which a core switch connects all the storage resources. It can also connect tape drives, tape media servers, and first-tier servers, meaning servers that are sensitive to ISL latency. For performance and availability reasons, the core switches in a core-edge topology are typically director-class switches.
The edge switches that connect to servers can be director-class or ordinary switches; several factors help users decide which to use at the edge. Edge switches connect to the core switch through ISLs and are not connected to each other. The advantage of the core-edge topology is that it uses the standard ISL pattern and is very easy to extend: storage devices, servers, and switches can all be added easily. It also uses fewer ISLs, and is therefore less expensive, than a mesh topology.
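The "fewer ISLs" claim follows from simple counting, sketched below. The switch counts and the one-ISL-per-connection assumption are illustrative, not taken from the article.

```python
# Sketch: compare ISL counts for a full-mesh topology versus a core-edge
# topology with a single core, for the same total number of switches.

def mesh_isls(n_switches):
    # Full mesh: every pair of switches is directly linked.
    return n_switches * (n_switches - 1) // 2

def core_edge_isls(n_edge_switches, isls_per_edge=1):
    # Core-edge with one core: each edge switch links only to the core.
    return n_edge_switches * isls_per_edge

for n in (4, 8, 12):
    # With n switches total, core-edge uses one core and n - 1 edges.
    print(n, mesh_isls(n), core_edge_isls(n - 1))
```

The mesh ISL count grows quadratically with switch count while the core-edge count grows linearly, which is why the cost gap widens as the fabric scales.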
All servers connected to the edge switches can access all storage devices through the core switch. A new storage array can simply be connected to the core switch, avoiding the question of which specific switch to attach it to, and a new server can be connected to an edge switch, so all ISL usage remains in the standard ISL pattern. If all ports on an edge switch are in use, the administrator can connect a new edge switch to the core switch without disrupting service. And by the time the core switch's ports run out, SAN history suggests that vendors will have released a director-class switch with more ports.
Replacing the current core switch with a higher-port-count switch is a simple process, and the old core switch can be migrated to the edge. If your demand for ports outpaces the release of new products, or you do not want to migrate the original core switch, you can easily add a second core switch to the existing architecture.
When servers and storage are deployed on the same switch and the ISLs are not actually used, a mesh topology's only advantage is the absence of ISL latency. As demand grows, that advantage disappears: once you start using the ISLs, you no longer benefit from zero ISL latency. And with network ISLs you cannot attach devices to ports carelessly; you must design carefully how devices connect to which ports. By contrast, an appropriate core-edge design keeps ISL latency and congestion to a minimum, and scales without interruption to meet growth in host count and storage capacity.