Optimization of the AODV routing protocol based on latency control

Source: Internet
Author: User

How is a routing protocol used in a self-organizing (Ad-hoc) network? The previous article answered that question, so today we focus on how to optimize the protocol. First, let's look at the problems that arise in such networks.

An Ad-hoc network is composed of a group of peer wireless mobile terminals, so communication can be achieved without any fixed communication infrastructure; this has made it a new wireless technology in the commercial field. Ad-hoc networks have distinctive characteristics: fully wireless multi-hop transmission, node mobility, and tight bandwidth and energy constraints. Designing an efficient and reliable routing protocol for such networks is therefore a challenging problem. In recent years, researchers have proposed a variety of routing protocols, among which AODV is a classical one.

AODV is an on-demand routing protocol based on the "minimum hop count" metric. It does not maintain routing tables periodically and establishes a route only when one is needed, which greatly reduces control traffic. However, the minimum-hop-count rule easily overloads the intermediate nodes of the network. This paper therefore proposes an optimization scheme based on latency control to address that shortcoming.

1. Problem statement

Ad-hoc On-Demand Distance Vector (AODV) routing is an on-demand distance-vector routing protocol. Unlike a table-driven protocol, an on-demand protocol acts only when a source node needs a route to a destination node or when a node wants to join a multicast group. When the source node needs a path to the destination, it initiates a path-discovery process in the network; there is no need to periodically exchange routing information or update routing tables. This greatly reduces control overhead, which is very valuable in wireless networks. Despite this advantage, because AODV routes on the "minimum hop count", in many cases the intermediate nodes of the network are shared by multiple links while the edge nodes are rarely used. This not only wastes network resources but also leads to congestion, which degrades end-to-end latency and network throughput.

The problem is especially acute in large, heavily loaded Ad-hoc networks, where overloaded intermediate nodes become hot spots: the flows passing through them suffer higher end-to-end latency and lower throughput, and network performance degrades rapidly. Because AODV selects routes by minimum hop count, when source S1 discovers a path to destination D1 it chooses the route S1 → node 1 → D1, since that is the shortest path; when S2 then looks for a path to D2, it likewise chooses S2 → node 1 → D2. Node 1 thus serves as the intermediate node of both links at the same time and carries a heavy load, while nodes 2, 3, 4, and 5 are never used at all. A large amount of network resources is unreasonably wasted.
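To make the hot-spot effect concrete, here is a minimal sketch (the topology and node names are hypothetical, matching the example above; plain breadth-first search stands in for AODV's minimum-hop route selection):

```python
from collections import deque

# Hypothetical topology from the example: node 1 sits on the two-hop
# path for both flows, while nodes 2-3 and 4-5 form longer detours.
EDGES = {
    "S1": ["1", "2"], "S2": ["1", "4"],
    "1": ["S1", "S2", "D1", "D2"],
    "2": ["S1", "3"], "3": ["2", "D1"],
    "4": ["S2", "5"], "5": ["4", "D2"],
    "D1": ["1", "3"], "D2": ["1", "5"],
}

def min_hop_path(src, dst):
    """BFS returns a minimum-hop path, as plain AODV would select."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in EDGES[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(min_hop_path("S1", "D1"))  # ['S1', '1', 'D1']
print(min_hop_path("S2", "D2"))  # ['S2', '1', 'D2']
```

Both flows pass through node 1, so node 1 carries both loads while nodes 2-5 sit idle.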

It is therefore of great significance to improve the routing protocol for different kinds of traffic so that each is transmitted well. Below we propose an improvement aimed at traffic with strict end-to-end latency requirements.

2. Extended routing protocol based on latency control

Improvement approach

In a network, end-to-end latency consists of transmission latency and node processing latency. Transmission latency depends only on the spatial distance of the link and is very small in a wireless Ad-hoc network. Node processing latency in turn consists of the latency of processing a single packet and the packet's queuing latency, and the per-packet processing latency is essentially constant. The end-to-end latency of a link is therefore usually determined by the queuing latency of the traffic packets; when network load is high, queuing latency plays the decisive role. We can exploit this to establish low-latency links for forwarding traffic.
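A quick back-of-the-envelope computation illustrates the decomposition (the per-hop figures below are illustrative assumptions, not measurements):

```python
# Per-hop latency components in milliseconds: transmission delay over
# a wireless hop is tiny, per-packet processing is roughly constant,
# so queuing dominates once the network is loaded.
hops = [
    # (transmission_ms, processing_ms, queuing_ms)
    (0.01, 0.5, 12.0),
    (0.01, 0.5, 30.0),
    (0.01, 0.5, 5.0),
]

end_to_end = sum(t + p + q for t, p, q in hops)
queuing_share = sum(q for _, _, q in hops) / end_to_end

print(f"end-to-end latency: {end_to_end:.2f} ms")   # 48.53 ms
print(f"queuing share: {queuing_share:.0%}")        # 97%
```

Even with only three hops, queuing accounts for almost all of the end-to-end latency, which is why the scheme below keys route selection on it.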

The idea is to control when a node forwards a route-request (RREQ) packet according to the node's own latency: a node whose queuing latency is large holds a received RREQ for a correspondingly large delay before updating and re-broadcasting it. After a series of such forwardings, the first RREQ copy received at the destination has, with high probability, traveled the path with the smallest total latency. The destination establishes a route back to the source from the information in that first RREQ and returns a RREP packet along the reverse path. When the source receives the RREP, it uses the RREP information to build the minimum-latency route to the destination and sends the corresponding traffic over it. This reduces the network's end-to-end latency.
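The mechanism can be sketched as an event-driven flood in which each node holds the RREQ for a time proportional to its own queuing latency (all names and the scaling constant are illustrative assumptions, not a real AODV implementation):

```python
import heapq

ALPHA = 1.0  # scaling factor a applied to the node statistic T_k

def discover_route(edges, node_delay, src, dst):
    """Return the path taken by the first RREQ copy to reach dst."""
    events = [(0.0, [src])]   # (arrival_time, path), ordered by time
    seen = set()
    while events:
        t, path = heapq.heappop(events)
        node = path[-1]
        if node == dst:
            return path              # first arrival wins
        if node in seen:
            continue                 # duplicate RREQ: drop silently
        seen.add(node)
        hold = ALPHA * node_delay[node]   # delayed re-broadcast
        for nxt in edges[node]:
            if nxt not in path:
                heapq.heappush(events, (t + hold, path + [nxt]))
    return None
```

With a heavily queued node 1 (say a delay of 50 versus 1 elsewhere), the first RREQ to reach D2 from S2 arrives via the detour 4 → 5 rather than through node 1, so the hot spot is avoided; with uniform delays the scheme degenerates to ordinary minimum-hop selection.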

Extended route discovery procedure

First, each node periodically computes its packet-latency statistic. The statistic is defined as follows: T_k is the sum of the queuing latencies of all packets that pass through node k during one period.
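A minimal sketch of how a node might maintain this statistic (class and method names are assumed for illustration):

```python
class DelayStat:
    """Accumulates T_k: the sum of the queuing latencies of every
    packet forwarded by node k during the current period."""

    def __init__(self):
        self.total = 0.0   # running T_k for the current period

    def record(self, packet_queue_delay_ms):
        """Called once per forwarded packet with its queuing latency."""
        self.total += packet_queue_delay_ms

    def end_period(self):
        """Close the period and return T_k, resetting for the next one."""
        t_k, self.total = self.total, 0.0
        return t_k
```

At the end of each period the node uses the returned T_k to scale its RREQ forwarding delay, as described below.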

MaxS is the maximum queue length of a node. When a node S in the network needs a route to another node D, S broadcasts a route-query RREQ packet to its neighbors. When an intermediate node receives the RREQ, it first checks whether it has already seen this RREQ; if so, it does not process it. If not, the node updates its reverse route back to the source and then, based on its own latency statistic, holds the request for a delay of a·T_k before forwarding it. When the destination receives a route request, it checks whether the RREQ is new: if it is, the destination establishes a reverse route to the source and replies with a RREP packet; if the RREQ has already been received, no processing is performed. Through the route reply, the source S obtains the path to destination D together with its quality information. Finally, S adds the path and its quality information to the route cache and checks whether any buffered data is waiting to be sent along it.
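The per-node RREQ handling described above can be sketched as follows (class layout, return values, and the constant a are illustrative assumptions, not a real AODV implementation):

```python
class Node:
    ALPHA = 0.5  # hypothetical scaling constant a

    def __init__(self, node_id, period_delay):
        self.node_id = node_id
        self.period_delay = period_delay   # T_k for the current period
        self.seen = set()                  # (originator, rreq_id) pairs
        self.reverse_route = {}            # originator -> previous hop

    def handle_rreq(self, originator, rreq_id, destination, prev_hop):
        """Returns ('drop', None), ('rrep', next_hop) for the reply,
        or ('forward', hold_time) for a delayed re-broadcast."""
        key = (originator, rreq_id)
        if key in self.seen:
            return ("drop", None)          # duplicate RREQ: discard
        self.seen.add(key)
        # reverse route back toward the source, used later by the RREP
        self.reverse_route[originator] = prev_hop
        if destination == self.node_id:
            # destination node: reply with RREP along the reverse route
            return ("rrep", self.reverse_route[originator])
        # intermediate node: hold the RREQ for a * T_k before re-broadcast
        return ("forward", self.ALPHA * self.period_delay)
```

For example, an intermediate node with T_k = 40 holds the request for a·T_k = 20 time units, re-receipt of the same RREQ is dropped, and a destination node replies immediately along the stored reverse route.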
