Collecting data to a base station is a common requirement of sensor network applications. A common approach is to build one or more collection trees rooted at the base station. Data generated at a node travels up the collection tree toward the root: when a node receives a packet, it forwards it on to its own parent. Collection protocols sometimes need to examine past packets of collected data in order to gather statistics, compute aggregates, and suppress duplicate transmissions.
The data flow of a collection protocol is the opposite of a one-to-many dissemination protocol: it provides a way to deliver packets to a root node in a many-to-one, best-effort, multi-hop fashion.
When there is more than one root node in the network, a forest is formed. The collection protocol implicitly lets a node join one of the collection trees by choosing a parent node. The collection protocol provides best-effort, multi-hop delivery to a root node, with anycast semantics: the protocol tries to deliver each message to at least one of the roots. Delivery is not guaranteed to succeed, however; a packet may also reach multiple root nodes, and the order in which packets arrive is not guaranteed.
Because node storage is limited and the algorithm must be distributed, implementing a collection protocol faces several challenges, including the following:
1) Routing loop detection: detect whether a node has chosen one of its own descendants as its parent.
2) Duplicate suppression: detect and handle duplicate packets in the network to avoid wasting bandwidth.
3) Link estimation: estimate the quality of a single-hop link.
4) Self-interference: prevent forwarded packets from interfering with the sending of the node's own locally generated packets.
Example: TinyOS CTP protocol
CTP (Collection Tree Protocol) [7] is the collection protocol shipped with TinyOS 2.x and one of the most commonly used collection protocols in practical applications. The overall structure of CTP, the basic concepts involved, and its workflow are described in detail below.
CTP can be divided into three parts: a link estimator, a routing engine, and a forwarding engine. The relationship among the three is shown in Figure 2-17. The link estimator sits at the lowest level and is responsible for estimating the quality of communication between adjacent nodes. The routing engine sits in the middle layer and uses the information the link estimator provides to select, as the parent, the node with the lowest transmission cost to the root. The forwarding engine maintains a send queue of local and forwarded packets, choosing the right moment to send the packet at the head of the queue to the parent node.
The link estimator estimates the link quality between nodes so that the routing engine can compute routes. The link estimator implemented in TinyOS 2.x combines the reception success rate of broadcast LEEP frames with the transmission success rate of unicast packets to compute the single-hop bidirectional link quality.
The Link Estimation Exchange Protocol (LEEP) is used to exchange link-estimation information between nodes, and it defines the detailed format of the LEEP frames used for this exchange.
Inbound link quality: as shown in Figure 2-18a, consider the node pair (A, B) with A as the reference node. Let Total_in be the total number of frames A sends to B, and Success_in the number of those frames B receives successfully. Then:

inbound link quality = Success_in / Total_in
B can compute Total_in indirectly from the sequence numbers in A's broadcast LEEP frames. Each LEEP frame carries a Sequence Number field that A increments by 1 with every broadcast, so B only needs the difference between the sequence numbers of successively received LEEP frames to know the total number of LEEP frames A sent in between.
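The bookkeeping B performs can be sketched as follows. This is a minimal illustration of the sequence-number arithmetic, not the TinyOS implementation; the 8-bit counter width is an assumption.

```python
def leep_frames_sent(prev_seq, curr_seq, width=256):
    """Number of LEEP frames A sent between two frames B received,
    inferred from the difference of their Sequence Number fields.
    The counter wraps around at `width` (assumed 8-bit here)."""
    return (curr_seq - prev_seq) % width

# If B receives frames with sequence numbers 10 and then 14, A sent 4
# frames in between, of which B received 1: total += 4, success += 1.
```

The modulo handles the wraparound case, e.g. a frame numbered 254 followed by one numbered 2 still counts as 4 frames sent.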
Inbound link quality can also be obtained in other ways, for example from link-quality indicators such as LQI or RSSI, but this requires the radio module to support such functions.
Outbound link quality: as shown in Figure 2-18b, with the node pair (A, B) and B as the reference node, let Total_out be the total number of frames B sends to A, and Success_out the number of those frames A receives successfully. Then:

outbound link quality = Success_out / Total_out
Because LEEP frames are broadcast, node B cannot know whether node A received them, and thus cannot count Success_out itself. But B's outbound link quality to A is exactly A's inbound link quality from B. The problem is therefore solved by having A feed the inbound quality of its link from B back to B, which is in fact one of the main functions of the LEEP frame.
TinyOS 2.x represents outbound and inbound link quality as an 8-bit unsigned integer. To reduce the loss of precision and make full use of the 8-bit range, TinyOS 2.x stores the quality value scaled by 255.
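The scaling can be sketched like this; it is an illustration of the idea, not the exact TinyOS code:

```python
def encode_quality(success, total):
    """Store a success ratio in one unsigned byte, scaled by 255."""
    assert 0 <= success <= total and total > 0
    return (success * 255) // total

def decode_quality(byte_value):
    """Recover the approximate ratio from the stored byte."""
    return byte_value / 255
```

A perfect link (success == total) encodes to 255, and a 75% link encodes to 191, which decodes back to about 0.749, so the precision lost to the byte representation stays below 1/255.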
Bidirectional link quality: as shown in Figure 2-18c, for a node pair (A, B) the bidirectional link quality is defined as:

bidirectional link quality = inbound link quality × outbound link quality
Local interference or noise can make the quality of the (A, B) and (B, A) links differ; the bidirectional link quality is defined precisely to take this asymmetry into account.
TinyOS 2.x uses the EETX (Extra Expected number of Transmissions) value as its estimate of bidirectional link quality. Two kinds of EETX are used in LEEP: the window EETX and the cumulative EETX. The window EETX is computed from the receive and send success rates within a window, whenever the number of LEEP frames received or the number of packets sent reaches a fixed window size. The cumulative EETX is a weighted sum of the current window EETX and the previous cumulative EETX. Following the principle of the exponential moving average, the weight of old values decays gradually, which adapts to changes in link quality and is a more realistic way to maintain the statistic.
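The relationship between success rate, EETX, and the exponential moving average can be sketched as follows. The weighting factor `alpha` here is an illustrative assumption, not the TinyOS constant.

```python
def window_eetx(success, total):
    """EETX over one window: expected number of extra transmissions,
    i.e. ETX - 1, where ETX = total / success."""
    if success == 0:
        return float('inf')   # no delivery observed in this window
    return total / success - 1

def cumulative_eetx(old, window, alpha=0.9):
    """Weighted sum of the previous cumulative EETX and the new window
    EETX. Old values decay geometrically (exponential moving average)."""
    return alpha * old + (1 - alpha) * window
```

A link that delivers 5 of 10 frames in a window has window EETX 1.0 (two transmissions expected per delivery, one of them "extra"), and a perfect link has window EETX 0.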
LEEP places three requirements on the data-link layer: 1) it must supply a single-hop source address; 2) it must provide a broadcast address; and 3) it must provide the LEEP frame length. The single-hop source address is needed so that a node receiving a broadcast LEEP frame can determine which neighbor-table entry's outbound link quality to update. The data-link layers of existing nodes can generally meet all three requirements.
From the analysis above, a LEEP frame must carry at least a sequence number and the inbound link qualities to neighbor nodes. The LEEP frame structure implemented in TinyOS 2.x is shown in Figure 2-19.
The fields are defined as follows:
Nentry: the number of Link Information (LI) entries in the frame footer.
Seqno: the LEEP frame sequence number.
Rsrvd: a reserved field, which must be set to 0.
The Link Information entry format is shown in Figure 2-21.
The fields are defined as follows:
Node Addr: the link-layer address of the neighbor node.
Link Quality: the inbound link quality from the node identified by Node Addr to this node.
TinyOS provides two link estimators: the standard LE estimator and the 4-bit link estimator. Which one is used is selected by changing the corresponding path in the application's Makefile. The standard LE estimator is implemented in the tos/lib/net/le directory. The LinkEstimator.h header file defines the neighbor table size, the neighbor-table entry structure, the LEEP frame header and footer structures, and the constants used by the LEEP protocol. LinkEstimator.nc declares the methods that other components can invoke on the link estimator. As the code in Figure 2-22 shows, these methods fall into three classes: one for querying link quality, one for manipulating the neighbor table, and one for packet estimation.
The LinkEstimatorC.nc configuration declares that the link estimator provides the LinkEstimator interface. The LinkEstimatorP.nc module is the concrete implementation of LEEP. The purpose of LEEP is to obtain the bidirectional link quality between this node and its neighbors. The implementation combines two estimation strategies, whose relationship is shown in Figure 2-23.
Estimates based on LEEP frames are called L estimates; they derive the EETX value from the information carried in LEEP frames. A LEEP frame is sent with the Send.send() method, which calls the addLinkEstHeaderAndFooter() function to add the LEEP frame header and footer. The footer holds this node's link-quality table for its neighbors; if the whole table does not fit into one LEEP frame, the next frame resumes from the first entry not yet sent, so that every entry gets an equal chance of being transmitted. Each transmission increments the Sequence Number field in the frame by 1. The timing of transmissions is determined by the user of LinkEstimator.
Whenever a LEEP frame is received, the SubReceive.receive() event is triggered. The handler updates the neighbor table from the information in the LEEP frame header and footer. These operations are concentrated in the processReceivedMessage() function, which finds the neighbor-table entry for the LEEP frame's sender and calls updateNeighborEntryIdx() to update its packet count and loss count. The loss count is derived from the difference between the Sequence Number fields of this frame and the previously received LEEP frame.
When the number of received packets reaches the fixed window size, the updateNeighborTableEst() function is called to compute the inbound link quality inqualitywin for that window, i.e. the fraction of frames successfully received out of the total sent within the window.
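The per-window update can be sketched as follows. The field names, window size, and smoothing weights here are illustrative assumptions; the real LinkEstimatorP keeps more state.

```python
class NeighborEntry:
    def __init__(self):
        self.rcvcnt = 0      # LEEP frames received in the current window
        self.failcnt = 0     # frames inferred lost from sequence-number gaps
        self.inquality = 0   # smoothed inbound quality, scaled by 255

WINDOW = 10  # assumed window size

def on_leep_frame(entry, gap):
    """Update counts for one received LEEP frame. `gap` is the
    sequence-number difference to the previous frame: gap == 1 means
    no loss, gap == n means n - 1 frames were lost in between."""
    entry.rcvcnt += 1
    entry.failcnt += gap - 1
    if entry.rcvcnt + entry.failcnt >= WINDOW:
        total = entry.rcvcnt + entry.failcnt
        inquality_win = (255 * entry.rcvcnt) // total
        # exponential moving average: assumed weight 0.8 on history,
        # 0.2 on the new window, in integer arithmetic
        entry.inquality = (8 * entry.inquality + 2 * inquality_win) // 10
        entry.rcvcnt = entry.failcnt = 0
```

Starting from an empty entry, ten consecutively received frames (all gaps 1) give a window quality of 255, smoothed into a stored quality of 51 (0.2 × 255), with the counters reset for the next window.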
The responsibility of the routing engine is to select the next hop for transmission. An ideal routing engine chooses a path to the root with few hops and the best possible links, which reduces forwarding and packet loss, thereby lowering the sensor network's energy consumption and prolonging its lifetime. However, given the nodes' limited storage and processing capacity, it is difficult to store large amounts of routing information or run complex routing algorithms, so routing protocols common in wired networks, such as OSPF and RIP in TCP/IP, do not apply here. Routing in sensor networks is designed to be simple and effective, achieving the best possible results with limited resources. The routing engine in the TinyOS 2.x CTP implementation achieves this goal well: it builds the collection tree toward the root node and uses the information the link estimator provides to choose the next hop sensibly, so that the number of transmissions from a sampling node to the root is as small as possible.
The first step is to clarify the routing metric CTP uses, the path ETX (Expected number of Transmissions): a node's path ETX is the sum of its parent's path ETX and the single-hop ETX of the link to that parent, as shown in Figure 2-24. The single-hop ETX is related to the EETX value provided by the link estimator by ETX = EETX + 1. The path ETX reflects the cost of reaching the root: in general, the smaller the path ETX, the closer the node is to the root. The routing engine chooses the neighbor with the smallest path ETX as the parent, in order to minimize the number of transmissions needed to reach the root.
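The metric computation can be sketched as a worked illustration of these definitions (not CTP source code):

```python
def single_hop_etx(eetx):
    """Convert the link estimator's EETX into a single-hop ETX."""
    return eetx + 1

def path_etx(parent_path_etx, link_eetx):
    """A node's path ETX: the parent's path ETX plus the single-hop
    ETX of the link to that parent."""
    return parent_path_etx + single_hop_etx(link_eetx)

# The root advertises path ETX 0; a child over a perfect link
# (EETX 0) therefore has path ETX 1.
```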
The routing table is the core data structure of the routing engine. The routing-table structure used in CTP is shown in Figure 2-25; it stores neighbor information, chiefly each neighbor's path ETX value. The size of the routing table is bounded by the size of the link estimator's neighbor table, because a node that is not in the neighbor table cannot enter the routing table as a neighbor.
Each field has the following meanings:
P: the routing pull bit. If a node receives a frame with the P bit set, it should transmit a routing frame as soon as possible.
C: the congestion flag. If a node drops a CTP data frame, it must set the C bit in the next frame it transmits.
Parent: the node's current parent.
ETX: the node's current ETX value.
When a node receives a routing frame, it must update the ETX value in the routing-table entry for the corresponding address. If a node's ETX value changes significantly, CTP must transmit a broadcast frame to notify other nodes to update their routes. Compared with a CTP data frame, the routing frame replaces the source address with the parent address. A parent node may discover that a child advertises an ETX much lower than its own; in that case it must prepare to transmit a routing frame as soon as possible. Information about the parent currently in use, such as its AM-layer address and path ETX, is recorded in the current routing-information table.
The routing engine in TinyOS is implemented in the tos/lib/net/ctp directory, in the following files: CtpRoutingEngineP.nc, the concrete implementation of the routing engine; TreeRouting.h, which defines the structures and constants used by the routing engine; and Ctp.h, which defines the structure of the routing frame.
As the source code in Figure 2-27 shows, CtpRoutingEngineP is a generic component whose parameters set the routing-table size and the minimum and maximum beacon-frame intervals. It uses a link estimator, two timers, and several packet-handling interfaces. The main interface it provides is the routing interface, whose most important command, nextHop(), supplies next-hop information to upper-layer components.
The beacon timer (BeaconTimer) is used to send beacon frames periodically. The send interval grows exponentially: the initial interval is the constant minInterval (value 128), and the interval doubles after each routing-information update. Thus, as the network gradually stabilizes, nodes broadcast beacon frames less and less often. On top of the exponential growth, a random offset is added to the timer interval to stagger beacon transmissions and avoid the channel conflicts that would arise if nodes sent beacon frames simultaneously. The timer can also be reset to its initial value, mainly to handle special cases, for example when a node receives a packet with the P bit set and must send a beacon frame as soon as possible, or to give upper-layer users a way to reset the interval.
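The interval policy can be sketched as follows. The doubling, jitter, and reset follow the description above; the maximum cap and the exact placement of the random offset are assumptions.

```python
import random

MIN_INTERVAL = 128
MAX_INTERVAL = 512000   # assumed upper bound on the beacon interval

def next_interval(current):
    """Double the beacon interval after a routing update, up to a cap."""
    return min(current * 2, MAX_INTERVAL)

def fire_delay(interval):
    """Actual firing delay: half the interval plus a random offset, so
    that neighboring nodes do not all beacon at the same instant."""
    return interval // 2 + random.randrange(interval // 2)

def reset_interval():
    """On receiving a frame with the P bit set, fall back to the minimum."""
    return MIN_INTERVAL
```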
The routing timer (RouteTimer) is used to start the route-update task periodically. The update interval is fixed at the constant BEACON_INTERVAL, with value 8192. The route-update task is posted when the timer fires.
The send-beacon task is triggered by the beacon timer. It broadcasts this node's ETX value, current parent, and congestion state to other nodes. The route-update task is usually triggered by the routing timer, but it can also be triggered in other circumstances, such as when the beacon timer expires, a route must be recomputed, a neighbor is removed, or the route selection otherwise needs refreshing. The route-update task iterates over the routing table to find the node with the smallest path ETX as the parent; the candidate must be neither congested nor a node whose parent is this node.
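The selection loop of the route-update task can be sketched as follows; the entry fields are illustrative.

```python
def choose_parent(routing_table, my_addr):
    """Pick the neighbor with the smallest path ETX, skipping congested
    neighbors and our own children (nodes whose parent is this node),
    since choosing a child would create a routing loop."""
    best = None
    for entry in routing_table:
        if entry["congested"]:
            continue
        if entry["parent"] == my_addr:   # a child of this node
            continue
        if best is None or entry["path_etx"] < best["path_etx"]:
            best = entry
    return best
```

In the usage below, node 2 has the lowest path ETX but is congested, and node 3 is a child of the selecting node (address 5), so node 1 wins despite its higher metric.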
The beacon-frame receive event, BeaconReceive.receive(), is triggered when another node's beacon frame arrives. It updates the corresponding routing-table entry from the sender address and ETX value in the beacon. If the beacon comes from a root node, the link estimator is called to pin the sender in the neighbor table. If the beacon's P bit is set, the beacon timer is reset so that this node's beacon frame is broadcast as soon as possible for the requester to receive.
The routing engine's workflow is shown in Figure 2-28. The routing engine is initialized when the node boots: by wiring its Init interface to MainC's SoftwareInit interface, initialization runs automatically at startup. Initialization sets up the current route information, empties the routing table, and initializes the routing-frame message buffer and some state variables.
The application formally starts the routing engine through the start() command of the StdControl interface, which starts the two timers RouteTimer and BeaconTimer. RouteTimer's interval is set to BEACON_INTERVAL (8192), and BeaconTimer's first firing time is initialized to minInterval (128).
Since BeaconTimer's initial firing time is much smaller than RouteTimer's interval, BeaconTimer fires first, posting updateRouteTask() to update the route and then sendBeaconTask() to send a beacon frame. Thereafter, RouteTimer fires at a fixed interval and posts updateRouteTask(), while each BeaconTimer firing doubles the interval to its next firing.
Besides the tasks the timers keep posting, the routing engine must also process beacon frames from other nodes. When a broadcast beacon frame is received, the BeaconReceive.receive() event is triggered and the corresponding routing-table entry is updated from the sender address and ETX value in the beacon frame.
In addition, if the link estimator evicts a candidate neighbor, the routing engine removes that neighbor from the routing table accordingly and updates the route, keeping the routing table consistent with the neighbor table.
The forwarding engine is responsible for five main kinds of work:
1) Transmit packets to the next hop, retransmitting as needed, and report to the link estimator whether an ACK was received.
2) Decide when to transmit a packet to the next hop.
3) Detect routing inconsistencies and notify the routing engine.
4) Maintain a transmit queue that mixes locally generated packets and packets to be forwarded.
5) Detect single-hop duplicate transmissions caused by lost ACKs.
A routing loop occurs when a node forwards a packet to a next hop that is its own descendant or itself, causing the packet to circulate around the loop. As shown in Figure 2-29, node E at some moment mistakenly chooses node H as its parent, producing a routing loop.
Packet duplication means that a node receives packets with identical content multiple times, mainly as a result of retransmission. For example, the sender transmits a packet; the receiver receives it successfully and replies with an ACK, but the ACK is lost on the way, so the sender transmits the packet again, creating a duplicate at the receiver.
The CTP data frame is the format the forwarding engine uses to send packets. It adds fields to the packet header to suppress packet duplication and routing loops. The CTP data frame format is shown in Figure 2-30.
The fields are defined as follows:
P: the routing pull bit. The P bit lets a node request routing information from other nodes. If a node receives a packet with the P bit set, it should transmit a routing frame.
C: the congestion flag bit. If a node drops a CTP data frame, it must set the C bit in the next data frame it transmits.
THL (Time Has Lived): used mainly to deal with routing loops. When a node generates a CTP data frame, it must set THL to 0. When a node receives a CTP data frame, it must increment the THL value; a received THL of 255 wraps around to 0. This field is meant to catch packets that have been circulating in a loop for too long, but that check has not yet been implemented in the current version of CTP.
ETX: the ETX value of the single-hop sender. When a node transmits a CTP data frame, it must fill the ETX field with the ETX value of its route through the single-hop destination. If a node receives a frame with a routing gradient lower than its own, it must prepare to send a routing frame.
Origin: the packet's source address. A forwarding node may not modify this field.
Seqno: the source sequence number, set by the source node. A forwarding node may not modify it.
Collect_id: the higher-level protocol identifier, set by the source node. A forwarding node may not modify it.
Data: the payload, 0 or more bytes. A forwarding node may not modify this field.
Origin, Seqno, and Collect_id together identify a unique source packet, while Origin, Seqno, Collect_id, and THL together identify a unique instance of a packet in the network. The distinction is important for duplicate suppression in the presence of routing loops: suppressing by source packet may discard a packet that is circulating in a routing loop, whereas suppressing by packet instance allows a packet to keep being forwarded around a short routing loop, unless the THL happens to wrap around to exactly the value it had at the previous forwarding.
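The two notions of identity can be sketched as tuples of the header fields described above:

```python
def source_packet_id(frame):
    """Identifies a unique source packet."""
    return (frame["origin"], frame["seqno"], frame["collect_id"])

def packet_instance_id(frame):
    """Identifies a unique instance of a packet in the network: the
    same source packet seen at a different THL is a different instance."""
    return source_packet_id(frame) + (frame["thl"],)
```

Two copies of one packet at different hop counts share a source-packet ID but have distinct instance IDs, which is exactly why instance-based suppression lets a looping packet through while source-based suppression drops it.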
The message send queue is the core data structure of the forwarding engine. It holds pointers to queue entries; the message pointed to by the head entry is sent first. A queue entry (entry) records a pointer to the corresponding message, the corresponding sender, and the retransmission count. Queue entries are allocated differently for local and forwarded packets: entries for forwarded packets are allocated from a buffer pool, while entries for local packets are allocated statically at compile time.
A buffer pool is an operating-system facility for centrally managing buffer allocation. Applications can use the interfaces the pool provides to obtain and release buffers conveniently. This is valuable for TinyOS, which cannot allocate storage dynamically, because it allows a static storage area to be reused.
The forwarding engine uses two buffer pools: the queue-entry pool (QEntryPool) and the message pool (MessagePool). The queue-entry pool allocates space for queue entries. As shown in Figure 2-31, when the forwarding engine receives a message that needs forwarding, it takes a free queue entry from the pool, initializes it, and appends a pointer to it at the tail of the message queue. When a message has been sent successfully and acknowledged, or is dropped after exhausting its retransmissions, the forwarding engine returns the message's queue entry to the pool, making the entry free again so that the pool can allocate that space to subsequent messages.
The message pool works like the queue-entry pool, except that it holds message structures. In TinyOS 2.x, its initial size is set to the constant FORWARD_COUNT (value 12). The queue-entry pool's initial size is CLIENT_COUNT + FORWARD_COUNT, where CLIENT_COUNT is the number of CollectionSenderC clients; the extra entries account for locally generated packets also entering the send queue. Since the number of outstanding local packets is at most the number of CollectionSenderC clients, this guarantees that the send queue does not persistently overflow merely because there are many local senders. Without this allowance, a node with many local senders could gain a false impression of congestion.
Buffer swapping is a subtle link in the forwarding chain. As shown in Figure 2-32, the message structure obtained from the buffer pool is not used to store the currently received message, but the next one. The current message already has its own storage; pointing the corresponding queue entry at it is enough to locate it. The next received message, however, must not be stored in that same space, so buffer swapping allocates a different free buffer for it. The traditional approach keeps one dedicated receive buffer and copies each received message out to free storage; by contrast, buffer swapping eliminates that per-message copy.
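The idea can be sketched as follows: the receive handler keeps the buffer it was given and returns a different free buffer for the radio to fill next. This is a schematic of the ownership transfer, not the nesC signature.

```python
def on_receive(msg, free_pool, send_queue):
    """Buffer swap: keep `msg` for forwarding and hand the radio a
    fresh buffer from the pool; the payload is never copied."""
    if not free_pool:
        return msg           # pool empty: drop the packet, reuse its buffer
    send_queue.append(msg)   # the queue entry now owns this buffer
    return free_pool.pop()   # the radio fills this buffer next time
```

Ownership of the received buffer moves to the send queue, and ownership of a pool buffer moves to the radio, so every buffer always has exactly one owner.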
The component tos/lib/net/ctp/CtpForwardingEngineP.nc implements the forwarding engine. As the source code shows, CtpForwardingEngine uses the UnicastNameFreeRouting interface provided by the routing engine to obtain next-hop information; the system-provided Queue, Pool, and SendCache interfaces to implement the message send queue, the queue-entry pool, the message pool, and the sent-message cache respectively; and the LinkEstimator interface to feed packet transmission results back to the link estimator.
CtpForwardingEngine provides interfaces serving the four roles a node can play in the network: the Send interface for senders, the Snoop interface for snoopers, the Intercept interface for in-network processors, and the Receive interface for receivers.
The four key functions of the forwarding engine are packet reception, SubReceive.receive(); packet forwarding, forward(); packet transmission, sendTask(); and post-transmission cleanup, SubSend.sendDone().
The receive() function decides whether the node forwards a packet. It maintains a cache of recently received packets and determines whether a packet is a duplicate by checking that cache. If it is not, the forward() function is called to forward it.
The forward() function formats a packet for forwarding. It checks whether the received packet indicates a routing loop, by testing whether the ETX value in the header is smaller than this node's own path ETX. It then checks whether there is room in the send queue; if not, it drops the packet and sets the C bit. If the send queue was empty, it posts the sendTask task to prepare for transmission.
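The loop check can be sketched directly from the test described above:

```python
def detects_loop(header_etx, my_path_etx):
    """The sender's advertised ETX should exceed ours, since we are its
    parent and thus nearer the root; a header ETX smaller than our own
    path ETX therefore signals a possible routing loop."""
    return header_etx < my_path_etx
```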
The sendTask task examines the packet at the head of the send queue, requests routing information from the routing engine, prepares the transmission to the next hop, and submits the message to the AM layer.
When a transmission completes, the sendDone event handler checks the result. If the packet was acknowledged, it is removed from the send queue. If the packet was generated locally, the sendDone signal is passed upward. If the packet was a forwarded one, the message structure is returned to the message pool. If packets remain in the queue (such as an unacknowledged one), it starts a randomized timer to repost the task. This timer in effect limits CTP's transmission rate, preventing it from sending as fast as possible, in order to avoid self-interference along the path.
When the forwarding engine receives a packet to forward, it first checks whether the packet is already in the cache or the send queue, mainly to suppress duplicates. If it is not a duplicate, forward() is called. The forward() function allocates a queue entry and a message structure from the pools, then appends the queue-entry pointer to the send queue. If the retransmission timer that posts the send task is not running at this point, the send task is posted immediately to transmit the packet at the head of the queue. After a transmission, the sendDone event is triggered to do cleanup work, such as checking whether the packet just sent received a link-layer ACK; if so, the packet's queue entry is removed from the queue and the related resources are released. If no ACK was received, the retransmission timer is started, and when it fires the send task is posted again for retransmission. If the retransmission count exceeds the maximum retry limit, the packet is dropped.
The forwarding engine also handles sending local packets. An application sends local packets through the CollectionSenderC component. The nesC compiler statically allocates one queue entry per client, based on the number of CollectionSenderC clients, and keeps an array of pointers to these entries. When a client wants to send a packet, the engine first checks the client's pointer: if it is null, the client's previous packet has not yet been processed and the send fails; if it is non-null, the queue entry it points to is available, so the entry is filled with the packet's contents and appended to the send queue to await transmission.