Not long ago, a PhD student studying SDN complained to the blogger: "Open-source SDN controller performance is terrible. At just 2K new flows per second it reports too many packet-ins and stops working." The blogger asked how he defined a flow; he said by TCP 5-tuple. The blogger asked how he generated such dense packet-ins; he said he used a server to send packet-in messages directly to the SDN controller. The blogger then asked about the hardware: the server had 8 cores and the SDN controller had 4. Rather than asking further questions, the blogger wondered: why would anyone design an experiment like this? And even if the data from this experiment is correct, what does it actually tell us?
To this day, many people hold two misconceptions about SDN and OpenFlow. First, that an OpenFlow-based SDN controller updates the flow table per flow. Second, that each flow has its own life cycle, that the controller updates the flow table only for active flows, and that the remaining entries are deleted. These are misconceptions because no version of the OpenFlow standard says updates happen per flow, nor that the flow table is updated only for active flows. In fact, OpenFlow only defines an interface between the controller and the switch; it says nothing about how those interfaces should be used.
A very interesting question is: why do people hold these misconceptions about SDN and OpenFlow? First, the name OpenFlow is unfortunate: it seems to imply that the protocol operates per flow, but that is not the case at all. Given a chance to rename the protocol, OpenTable would be more appropriate, because what it essentially does is open up the various flow tables in the switch so that the SDN controller can edit them.
In addition, the OpenFlow protocol requires the switch to forward by match+action, rather than by the traditional L2 + L3 + TCAM pipeline. This change in forwarding model is one of the biggest reasons for the misunderstandings. Early OpenFlow practice found that wildcard match+action forwarding is hard to implement on existing hardware ASICs. The most direct way to deploy SDN on hardware is to use only the TCAM table on the ASIC, since that table supports wildcard match and various actions such as drop, forward, broadcast, and copy-to-CPU; there is a direct correspondence between the TCAM table and OpenFlow's forwarding model. The earliest OpenFlow switches on the market worked exactly this way: they spoke the OpenFlow protocol and turned each OpenFlow flow-mod message into a corresponding TCAM entry. This had a very positive impact on the popularity of SDN and OpenFlow, but it had a problem: TCAM is expensive. Even today, the most mature ASICs support only 3K-4K TCAM entries; a table that small is fine for demos, but deployment at scale is a fantasy. The ingenious SDN pioneers came up with a way to work around the tiny TCAM: write it reactively. In layman's terms, only the entries for active flows are kept, and entries for flows that are no longer active are deleted by timeout. Concretely, when the first packet of a new flow arrives, the switch does not know what to do with it, so it sends a packet-in to the controller; the controller computes a path and tells each switch how to forward the new flow via flow-mod. If the entries associated with a flow see no new packets for a period of time, the flow is considered finished and the related entries are purged from the TCAM.
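The reactive loop described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names, not code for any real controller framework; real flow-mod messages additionally carry priorities, cookies, and hard timeouts.

```python
# Sketch of the reactive model: the first packet of a flow triggers a
# packet-in, and the controller installs a per-flow 5-tuple entry with
# an idle timeout at every hop, so inactive entries age out of the
# small TCAM. All class and function names here are hypothetical.

IDLE_TIMEOUT = 10  # seconds of inactivity before an entry is evicted

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match -> (action, idle_timeout)

    def install_flow(self, match, action, idle_timeout):
        # Stands in for handling an OpenFlow flow-mod message
        self.flow_table[match] = (action, idle_timeout)

class ReactiveController:
    """Computes a path on each packet-in and programs every hop."""
    def __init__(self, path_for):
        # path_for: callable (src, dst) -> [(switch, out_port), ...]
        self.path_for = path_for

    def on_packet_in(self, switch, five_tuple):
        src, dst = five_tuple[0], five_tuple[1]
        for hop, out_port in self.path_for(src, dst):
            hop.install_flow(match=five_tuple,
                             action=("forward", out_port),
                             idle_timeout=IDLE_TIMEOUT)

# Usage: the first packet of a new TCP flow crossing a two-switch path
s1, s2 = Switch("s1"), Switch("s2")
ctrl = ReactiveController(lambda src, dst: [(s1, 2), (s2, 3)])
flow = ("10.0.0.1", "10.0.0.2", 6, 34567, 80)  # src, dst, proto, sport, dport
ctrl.on_packet_in(s1, flow)  # each hop now holds one per-flow entry
```

Note that every distinct 5-tuple costs the controller one packet-in round trip, which is exactly why the controller becomes the bottleneck later in this article.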
This reactive approach does, to some extent, solve the problem of the TCAM being too small. But SDN's successors have forgotten that the small TCAM is the cause and reactive is the effect. A beginner who has just finished reading the OpenFlow spec might take it for granted that SDN is reactive by design, without ever exploring why reactive exists. In fact, the OpenFlow spec has nothing to do with reactive operation.
With the history covered, let us return to the experiment at the start of this article. It is not hard to see that its designer subconsciously assumed that OpenFlow is per flow and reactive! Designing experiments on such assumptions is not scientific. Unfortunately, these two incorrect assumptions remain ingrained in the minds of many SDN researchers, who in experiments rely on generating distinct TCP 5-tuples to simulate large numbers of new flows. Industry holds far fewer of these misunderstandings, because potential bottlenecks are evaluated before any SDN product is designed: if every new TCP 5-tuple triggered a packet-in, the SDN controller would certainly become the system bottleneck, and such a design would be abandoned immediately. We have never heard a vendor announce its controller's packet-in handling capacity, not because it is a secret, but because the system is designed so that its stability is independent of the rate of new flows, and nobody cares about such a test result.
At this point in the analysis, we know that the reactive model, while alleviating the small-TCAM problem to some extent, also makes the SDN controller the bottleneck of the system. If the original experiment's data is correct, then a reactive SDN controller simply cannot control a large-scale network and its traffic. One might suggest studying how to improve SDN controller performance so that it is no longer a bottleneck, but that is treating the symptom: solving a problem that need not exist. Imagine the TCAM were large enough: would anyone still design an SDN system reactively? Obviously not. Instead, proactive becomes the more reasonable approach: edit the corresponding table entries before a new flow arrives, so that new flows never trigger packet-ins.
Reading this, a friend may ask: 1) How does the SDN controller know which flows will appear? 2) The TCAM is only so big; if the flow table is edited proactively before new flows arrive, how can it hold everything?
In fact, both questions are pseudo-problems; they are listed here only to keep the article's logic coherent. Whoever raises question 1 is still trapped in the misconception that SDN is per flow. Barring man-made scheduling, no SDN controller is smart enough to guess which new flows will appear. But that is fine: the point is not to know which new flows will appear, but to know where each device is. The Internet has never, in all its evolution, forwarded per flow; it forwards per device! A traditional L2 switch learns the (VLAN, MAC)-to-port mapping, which is really learning where each device is. As long as you know MAC1 is at port 1, any flow destined to MAC1 is forwarded to port 1, whatever the flow. The same logic extends naturally to SDN: the controller learns where the devices are and tells the switches how to reach each device. There are many channels for this learning, such as ARP, LLDP, and LACP. If learning MACs is not enough, you can learn IPs too. All in all, once you replace the per-flow mental model with a per-device one, the design of an SDN system becomes much more natural.
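The per-device mental model can be sketched in a few lines. This is an illustrative toy (hypothetical names), showing that the table grows with the number of devices, not the number of flows.

```python
# Sketch of per-device forwarding state: the controller learns where
# each (VLAN, MAC) lives and installs one entry per device. Any number
# of flows to that device reuses the same entry.

class L2Table:
    def __init__(self):
        self.entries = {}  # (vlan, mac) -> port

    def learn(self, vlan, mac, port):
        # Location could come from ARP, LLDP, LACP, or data-plane learning
        self.entries[(vlan, mac)] = port

    def lookup(self, vlan, mac):
        return self.entries.get((vlan, mac))

table = L2Table()
table.learn(vlan=10, mac="00:00:00:00:00:01", port=1)  # MAC1 is at port 1

# Every flow destined to MAC1 forwards out port 1 -- no per-flow state,
# so no packet-in is ever needed for a new 5-tuple
out = table.lookup(10, "00:00:00:00:00:01")
```

The same idea applies one layer up: learning IP-to-location mappings gives per-device L3 entries instead of per-flow ones.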
Question 2 is an even bigger pseudo-problem. As described above, the SDN pioneers designed the OpenFlow protocol very idealistically, and only later found that only expensive TCAM corresponds directly to the OpenFlow model; thus OpenFlow and TCAM ended up in the same boat, always appearing as a pair. The real question is: our goal is to use centralized control to simplify network management, so why must we use TCAM? Why must we use OpenFlow? Only this touches the essence of the problem.
The blogger's answer is: we do not have to use TCAM, and we do not have to use OpenFlow. Everything is a design choice; there are only different choices, not a best choice. The choices of academia and industry are representative. Academia, represented by Professor Nick McKeown, advocates completely redesigning the hardware switching chip [link] to fully fit the OpenFlow protocol. Industry's approach is less aggressive: Cisco designed ACI and OpFlex, adapting the protocol to the hardware ASIC; Big Switch stuck with OpenFlow, but uses the TCAM only for complex rules while storing ordinary forwarding rules in the L2 and L3 tables, avoiding the TCAM size limit as much as possible. Thanks to these combined efforts, the design limitations of SDN and OpenFlow are becoming history, and the reactive system design has completed its historical mission. A practical, proactive SDN system now looks like this: 1) the SDN controller continuously learns about the devices in the network, including switches, links, servers, virtual machines, and so on; 2) administrators configure business relationships in some language, such as multi-tier applications and service chaining; 3) the SDN controller translates the user configuration and edits the L2 and L3 tables in the switches via its southbound interface, where the entries are per device rather than per flow. The controller also translates the tenant's business logic into TCAM entries via the southbound interface; these entries are more complex and may need to match source+destination and certain L4 fields.
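Steps 1) through 3) above can be sketched as a small compilation pass. All names and the policy format below are hypothetical; the point is only to show that the L2 table scales with the device inventory while the scarce TCAM holds just the handful of complex policy rules.

```python
# Sketch of the proactive pipeline: a learned device inventory compiles
# into per-device L2 entries, and tenant policy compiles into a few
# complex TCAM rules that may match source+destination plus L4 fields.

def compile_tables(devices, policies):
    """devices: list of (mac, port); policies: list of dicts with a
    'match' dict and an 'action' tuple (hypothetical schema)."""
    # One L2 entry per known device -- grows with device count
    l2_table = [{"match": {"dst_mac": mac}, "action": ("forward", port)}
                for mac, port in devices]
    # One TCAM entry per policy rule -- grows only with policy count
    tcam_table = [{"match": p["match"], "action": p["action"]}
                  for p in policies]
    return l2_table, tcam_table

devices = [("00:00:00:00:00:01", 1), ("00:00:00:00:00:02", 2)]
policies = [{"match": {"src_ip": "10.0.1.0/24", "dst_ip": "10.0.2.0/24",
                       "l4_dport": 80},
             "action": ("redirect", "firewall")}]
l2, tcam = compile_tables(devices, policies)
```

Because both tables are derived from state the controller already holds, they can be pushed to the switches before any traffic flows, and no new flow ever generates a packet-in.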
An even greater benefit of designing an SDN system proactively is that it greatly simplifies high-availability (HA) design, so that the SDN controller is no longer a single point of failure for the entire system. Future articles will discuss this in more detail.
Having said all this, the blogger is conveying only one message: there were historical reasons for designing SDN systems reactively. Now that those reasons no longer exist, proactive SDN system design has become mainstream.
Proactive vs. reactive