Quadcopter 1.7: nRF24L01+ wireless communication and an improved ring buffer


Original article. Reposting is welcome; please credit the source.

It has been more than ten days since the last post: partly because of Mid-Autumn Festival activities, and partly because a lot of other things had to be finished.

The NRF communication is finally done, along with an improved ring buffer and a rough plan for a simple communication protocol.

There is quite a bit to do, and the actual workload is not small. Haha. First, a simple UI has to be written on the remote control to display some data; then the stick data has to be read from the controller and passed to the flight controller over the NRF link to check that it arrives intact; and along the way all sorts of data-transmission problems have to be solved. I had used the nRF905 before but never the nRF24L01, although the two are similar.

1: Use of the nRF24L01+ Module

2: Implementation of Improved Ring Buffer

3: Communication Protocol

 

1: The nRF24L01+ Module

There is plenty of information about the nRF24L01 around, so I won't go over the basics; for configuration, just read the manual. There are many versions online, including various Chinese translations, but the best reference is the official original English datasheet: it is unambiguous, and you can find a lot of details in it. We use Enhanced ShockBurst mode, which gives automatic ACKs. We do not use the "ACK with payload" feature, because with it the PRX side (the receiver) can only send data passively: it has to wait for the other side to transmit and then piggyback its own data into the ACK packet. If the other side never sends anything, the PRX has nothing to attach its payload to, so that mode only suits certain scenarios. Instead, we switch the NRF to TX mode ourselves when we have data to send, and return to RX mode immediately after the transmission is done. The state diagram from the original English manual shows how each mode transitions to the others.

As the diagram shows, switching between TX mode and RX mode goes through Standby-I: first pull CE = 0 to drop back to Standby-I, then select RX or TX with the PRIM_RX bit, and finally set CE = 1 again. You could change PRIM_RX directly and jump straight to TX mode, but following the sequence in the datasheet is safer and more reliable. A minimal sketch of this switch is given below.
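Here is a minimal sketch of that mode switch, assuming simple SPI/GPIO helpers (nrf_read_reg, nrf_write_reg, nrf_ce) that are not shown here; the CONFIG register address and the PRIM_RX bit come from the datasheet, everything else is illustrative.

```c
#include <stdint.h>

#define NRF_REG_CONFIG      0x00
#define NRF_CONFIG_PRIM_RX  (1u << 0)

extern uint8_t nrf_read_reg(uint8_t reg);              /* assumed SPI helper */
extern void    nrf_write_reg(uint8_t reg, uint8_t val);/* assumed SPI helper */
extern void    nrf_ce(int level);                      /* assumed CE pin ctl */

void nrf_enter_tx_mode(void)
{
    nrf_ce(0);                                   /* drop CE: back to Standby-I        */
    uint8_t cfg = nrf_read_reg(NRF_REG_CONFIG);
    nrf_write_reg(NRF_REG_CONFIG, cfg & (uint8_t)~NRF_CONFIG_PRIM_RX); /* PRIM_RX = 0 */
    nrf_ce(1);                                   /* CE high: TX once the FIFO has data */
}

void nrf_enter_rx_mode(void)
{
    nrf_ce(0);                                   /* Standby-I first                   */
    uint8_t cfg = nrf_read_reg(NRF_REG_CONFIG);
    nrf_write_reg(NRF_REG_CONFIG, cfg | NRF_CONFIG_PRIM_RX);           /* PRIM_RX = 1 */
    nrf_ce(1);                                   /* CE high: start listening          */
}
```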

To explain how ACKs work under Enhanced ShockBurst, we need another figure from the datasheet, the one showing the PRX with its data pipes:

The English text actually defines the ACK and address settings quite clearly. In short, the PRX receiver has six data pipes for receiving, but pipe 0 is special: it is the one used to receive ACK packets.

People read this figure in different ways, and in particular interpret TX_ADDR and RX_ADDR differently. Let me first explain the figure, then go over the interpretation commonly seen online and the one we use ourselves.

On the transmitting side, you need to set two addresses: TX_ADDR and RX_ADDR_P0, and both must be set to the same value. Let's call that value address 1.

On the receiving side, you need to set one RX_ADDR pipe (P0, P1, or any of the others) to that same address 1. The receiver compares the address field of every packet it overhears on the air with its own RX_ADDR values, and only accepts a packet when they match. (The on-air packet format can be thought of as link-layer data, with the air itself as the physical layer; I'm not sure that analogy is strictly correct, but it helps.)

After the receiver accepts the data, it sends back an ACK packet. To which address? Address 1, of course. That is exactly why the transmitter sets RX_ADDR_P0 to address 1: pipe 0 is used to receive the ACK. Once the ACK arrives, the TX_DS interrupt fires and the MCU is notified that the transmission completed.

The addressing can feel a bit muddled: both sending and receiving seem to use address 1. It made me dizzy at first too. So let's look at the two addresses more carefully.

The interpretation commonly seen online goes roughly like this: everything travels through the air, and the NRF overhears all traffic on the channel; RX_ADDR is the address of the target you want to receive from and is also used when returning the ACK, while TX_ADDR is your own address, a label for yourself. I have seen it explained this way more than once. For point-to-point links this reading happens to work, but with many nodes talking to many nodes it quickly becomes confusing.

In our understanding, TX_ADDR is the target address, much like the destination address in TCP/IP, and RX_ADDR is our own address. The main source of confusion is that the ACK comes back through our own RX_ADDR_P0. You can think of P0 through P5 as something like ports in TCP/IP (in fact, pipes P2 through P5 must share their upper four address bytes with P1 and may only differ in the last byte, which makes the "port" analogy even more fitting). There are six receive addresses, so we simply dedicate P0 to receiving ACK packets and nothing else, and use P1 to receive normal data. Under this scheme: RX, the local address, goes into P1; TX, the target address, goes into TX_ADDR and P0.

If you need more receive addresses, i.e. more "ports", just enable additional RX_ADDR pipes and have a different task handle each one; that effectively reproduces the port mechanism of TCP/IP. A sketch of this addressing setup follows.
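Here is a minimal sketch of that addressing scheme, assuming 5-byte addresses and the same kind of SPI helpers as before (nrf_write_reg, nrf_write_reg_multi are assumed names, not shown); the register addresses are from the datasheet.

```c
#include <stdint.h>

#define NRF_REG_EN_RXADDR   0x02
#define NRF_REG_RX_ADDR_P0  0x0A
#define NRF_REG_RX_ADDR_P1  0x0B
#define NRF_REG_TX_ADDR     0x10

extern void nrf_write_reg(uint8_t reg, uint8_t val);                            /* assumed SPI helper */
extern void nrf_write_reg_multi(uint8_t reg, const uint8_t *buf, uint8_t len);  /* assumed SPI helper */

void nrf_setup_addresses(const uint8_t peer_addr[5],   /* "address 1": the target */
                         const uint8_t local_addr[5])  /* our own address         */
{
    /* Outgoing packets carry the peer's address...                              */
    nrf_write_reg_multi(NRF_REG_TX_ADDR,    peer_addr, 5);
    /* ...and the ACK for them comes back on that same address, so pipe 0 must
     * mirror TX_ADDR. P0 does nothing except receive ACKs.                      */
    nrf_write_reg_multi(NRF_REG_RX_ADDR_P0, peer_addr, 5);
    /* Pipe 1 listens on our own address for normal incoming data.               */
    nrf_write_reg_multi(NRF_REG_RX_ADDR_P1, local_addr, 5);
    /* Enable pipes 0 and 1 (ERX_P0 | ERX_P1).                                   */
    nrf_write_reg(NRF_REG_EN_RXADDR, 0x03);
}
```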

The sending procedure on the transmitter side is: send the data, then wait for either TX_DS or MAX_RT. TX_DS means the packet went out and the ACK came back; MAX_RT means no ACK was received even after the configured number of retransmissions, i.e. the transmission failed. Something like the sketch below.
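A rough sketch of that blocking send flow: push a payload, poll until either TX_DS (ACK received) or MAX_RT (all retries exhausted), then return to RX mode. nrf_write_tx_payload() and nrf_flush_tx() stand for the W_TX_PAYLOAD and FLUSH_TX SPI commands; the other helpers are the assumed ones from the earlier sketches. The STATUS bits are from the datasheet.

```c
#include <stdbool.h>
#include <stdint.h>

#define NRF_REG_STATUS     0x07
#define NRF_STATUS_TX_DS   (1u << 5)
#define NRF_STATUS_MAX_RT  (1u << 4)

extern uint8_t nrf_read_reg(uint8_t reg);
extern void    nrf_write_reg(uint8_t reg, uint8_t val);
extern void    nrf_write_tx_payload(const uint8_t *buf, uint8_t len);
extern void    nrf_flush_tx(void);
extern void    nrf_enter_tx_mode(void);
extern void    nrf_enter_rx_mode(void);

bool nrf_send_packet(const uint8_t *buf, uint8_t len)
{
    uint8_t status;

    nrf_enter_tx_mode();
    nrf_write_tx_payload(buf, len);

    /* Poll until the transceiver reports success or final failure. */
    do {
        status = nrf_read_reg(NRF_REG_STATUS);
    } while (!(status & (NRF_STATUS_TX_DS | NRF_STATUS_MAX_RT)));

    nrf_write_reg(NRF_REG_STATUS, NRF_STATUS_TX_DS | NRF_STATUS_MAX_RT); /* clear flags */
    if (status & NRF_STATUS_MAX_RT)
        nrf_flush_tx();                       /* drop the payload stuck in the TX FIFO  */

    nrf_enter_rx_mode();                      /* go straight back to listening          */
    return (status & NRF_STATUS_TX_DS) != 0;  /* true: ACK was received                 */
}
```

In a real system you would probably wait on the IRQ pin instead of polling the status register, but the flow is the same.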

On the receiving side, when the RX_DR flag is set, a payload is waiting in the FIFO and we can read it out and process it.

With all of the above implemented, the communication basically works: call the send routine to transmit, switch back to RX mode, wait for incoming data and read it, and when there is something to send switch to TX mode again, transmit, and return to RX mode to wait. This loop just runs forever.

So yes, the communication works, but in this form it is extremely fragile and unreliable. In a complete system, communication can't stop here; there has to be a receive cache. Why?

Imagine the sender transmitting at a high rate. The receiver sees RX_DR, reads the payload, and starts processing it, and that processing takes time; if another packet arrives meanwhile, packets get dropped. Worse, if we only clear RX_DR after processing, a second RX_DR asserted during processing isn't "lost" as a packet, it is simply never noticed: a payload sits in the RX FIFO and we know nothing about it. Clearing RX_DR immediately and processing afterwards saves the second packet, but what about the third and the fourth? We can of course check the status register each time, see whether the RX FIFO still holds data, and drain it completely, but the processing time is still there, and if it can't keep up the RX FIFO fills (we use dynamic payload length). Once it is full, no new packets can be accepted and the sender starts seeing MAX_RT, meaning its transmissions fail.

The root cause is that processing received data takes time, and when a lot of data arrives at high frequency we cannot guarantee that the processing is fast enough to respond to every packet instantly. The fix is a receive cache: on reception we simply copy the payload into the cache and get out, leaving the actual work to another task, so the time spent in the receive path is as short as possible and new packets are serviced as quickly as possible. A quick sketch of that receive path is below, and the cache itself is described next.
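A sketch of the "cache it and get out" receive path: the interrupt handler only drains the RX FIFO into a buffer, and the heavy parsing happens later in the main loop. rb_push() and rx_rb stand for the ring buffer described in section 2, nrf_read_rx_payload_width() and nrf_read_rx_payload() stand for the R_RX_PL_WID / R_RX_PAYLOAD SPI commands; all names are illustrative, the register bits are from the datasheet.

```c
#include <stdint.h>

#define NRF_REG_STATUS       0x07
#define NRF_STATUS_RX_DR     (1u << 6)
#define NRF_REG_FIFO_STATUS  0x17
#define NRF_FIFO_RX_EMPTY    (1u << 0)

extern uint8_t nrf_read_reg(uint8_t reg);
extern void    nrf_write_reg(uint8_t reg, uint8_t val);
extern uint8_t nrf_read_rx_payload_width(void);
extern void    nrf_read_rx_payload(uint8_t *buf, uint8_t len);

extern struct pkt_rb rx_rb;                            /* the receive cache (section 2) */
extern void rb_push(struct pkt_rb *rb, const uint8_t *data, uint8_t len);

void nrf_irq_handler(void)
{
    /* Clear RX_DR right away, so a packet landing while we drain the FIFO
     * still raises a fresh interrupt instead of being silently missed.     */
    nrf_write_reg(NRF_REG_STATUS, NRF_STATUS_RX_DR);

    /* Drain every payload currently sitting in the RX FIFO into the cache. */
    while (!(nrf_read_reg(NRF_REG_FIFO_STATUS) & NRF_FIFO_RX_EMPTY)) {
        uint8_t tmp[32];                               /* max dynamic payload length   */
        uint8_t len = nrf_read_rx_payload_width();

        nrf_read_rx_payload(tmp, len);
        rb_push(&rx_rb, tmp, len);                     /* queue it, parse it later     */
    }
}
```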

2: Implementation of Improved Ring Buffer

Before reading on, have a look at Wikipedia's explanation of the circular buffer: http://zh.wikipedia.org/wiki/ Region

We use a ring buffer, and at the end of this section I will focus mainly on our improved version of it.

Traditional ring buffers come in two flavors: byte-oriented ones, pushed one byte at a time, and block-oriented ones, pushed in fixed-length blocks, for example eight bytes per push.

The byte-oriented ring buffer is what you typically find behind a serial port. The input is a byte stream, so you write one byte at a time and read one byte at a time, with head and tail pointers that wrap back to the start when they reach the end; that is what makes it a ring. Note, however, that reading needs a separate piece of memory: each read copies data out of the ring into some contiguous buffer before it can be processed, and that is exactly how many serial port drivers are implemented. And that is precisely the problem: there is one extra copy in the path, which wastes time. What to do? The block-oriented ring buffer described next solves this. A minimal sketch of the classic byte-oriented version follows.
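A minimal sketch of that classic byte-oriented ring buffer (the UART-style one): one byte in, one byte out, and reading always copies into the caller's own memory. Names and sizes are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define BYTE_RB_SIZE 128u              /* capacity is BYTE_RB_SIZE - 1 bytes */

typedef struct {
    uint8_t  buf[BYTE_RB_SIZE];
    uint16_t head;                     /* next write position */
    uint16_t tail;                     /* next read position  */
} byte_rb_t;

static bool byte_rb_put(byte_rb_t *rb, uint8_t b)
{
    uint16_t next = (rb->head + 1u) % BYTE_RB_SIZE;
    if (next == rb->tail)
        return false;                  /* full */
    rb->buf[rb->head] = b;
    rb->head = next;
    return true;
}

static bool byte_rb_get(byte_rb_t *rb, uint8_t *out)
{
    if (rb->tail == rb->head)
        return false;                  /* empty */
    *out = rb->buf[rb->tail];          /* note: the byte is copied out of the ring */
    rb->tail = (rb->tail + 1u) % BYTE_RB_SIZE;
    return true;
}
```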

Block-oriented pushing removes the copy on the read side: when we want to read, we can simply return a pointer into the buffer and work on the data in place. The reason the copy was needed in the first place is that the byte-oriented buffer cannot guarantee a packet is stored contiguously. Suppose a 10-byte packet arrives and the first five bytes land at the end of the buffer while the last five wrap around to the start: if you return a pointer to the packet's first byte, the first five bytes are fine, but reading the next five runs off the end of the buffer into who-knows-what. So you have to copy the packet into other memory (think of our variables or arrays) to make it contiguous before processing. With block pushing this cannot happen: if each push is, say, 10 bytes, make the buffer size a multiple of 10 and no block ever straddles the wrap point, so you can hand out a pointer, operate on the data in place, and pop it when done. That saves memory, removes the copy, and speeds up processing. The one defect is that every pushed block must be the same length, and that offends our slightly perfectionist, slightly obsessive side. So we rolled up our sleeves and wrote our own ring buffers, ones that accept pushes of arbitrary length and can still be read without copying. First, a sketch of the fixed-block version just described.
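A sketch of the fixed-length block ring buffer mentioned above: every slot is BLOCK_SIZE bytes, so a read can simply return a pointer to the slot with no copy. The names and sizes are illustrative, not the author's code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE   10u
#define BLOCK_COUNT  16u

typedef struct {
    uint8_t  buf[BLOCK_COUNT][BLOCK_SIZE];
    uint16_t head, tail, count;
} block_rb_t;

static bool block_rb_push(block_rb_t *rb, const uint8_t block[BLOCK_SIZE])
{
    if (rb->count == BLOCK_COUNT)
        return false;                              /* full                        */
    memcpy(rb->buf[rb->head], block, BLOCK_SIZE);
    rb->head = (rb->head + 1u) % BLOCK_COUNT;
    rb->count++;
    return true;
}

/* Return a pointer straight into the buffer; call block_rb_pop() when done. */
static const uint8_t *block_rb_peek(const block_rb_t *rb)
{
    return (rb->count == 0) ? NULL : rb->buf[rb->tail];
}

static void block_rb_pop(block_rb_t *rb)
{
    if (rb->count) {
        rb->tail = (rb->tail + 1u) % BLOCK_COUNT;
        rb->count--;
    }
}
```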

Improved ring buffer: to put it simply, we want to push data of arbitrary length and read it back by just returning a pointer. Two problems have to be solved. First, if packets of arbitrary length are pushed in and several of them are sitting in the buffer, how does the reader know how long the first packet is, and where the second one starts? Second, how do we guarantee that each packet is stored contiguously in memory? For the first problem, we prefix every packet with a length field, which tells the reader where the next packet begins. For the second, when we detect that the space remaining at the end of the buffer cannot hold the incoming packet, we enable a skip marker, jump back to the start, and write the packet from the beginning, which keeps the packet contiguous; when reading, the reader checks for the skip marker and follows the jump before continuing. I won't go into every detail; the general idea is as described. It has its own drawbacks: every packet spends some space on the length field, and whenever the tail of the buffer is too small for a packet we jump to the front and the remaining tail space is wasted. Nothing to be done about that .. Haha .. Everything has its pros and cons. Still, this kind of general-purpose ring buffer seems to be fairly well liked. A sketch of the idea follows.
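A condensed sketch of that improved ring buffer: every packet is stored with a one-byte length prefix, and when a packet would not fit in the space left at the end of the array, a skip marker is written and the packet starts again from offset 0, so each packet stays contiguous and rb_peek() can return a plain pointer. This is my own reconstruction of the idea under those assumptions, not the author's actual code, and error handling is kept minimal.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define RB_SIZE      256u
#define RB_SKIP_MARK 0xFFu            /* length byte meaning "wrap to offset 0"   */

typedef struct pkt_rb {
    uint8_t  buf[RB_SIZE];
    uint16_t head;                    /* next write offset                        */
    uint16_t tail;                    /* next read offset                         */
    uint16_t used;                    /* occupied bytes, incl. wasted tail space  */
} pkt_rb_t;

/* Push one packet (len must be < RB_SKIP_MARK). Returns false when full. */
bool rb_push(pkt_rb_t *rb, const uint8_t *data, uint8_t len)
{
    uint16_t need      = (uint16_t)len + 1u;       /* payload + length prefix     */
    uint16_t tail_room = RB_SIZE - rb->head;       /* contiguous room at the end  */

    if (tail_room < need) {                        /* won't fit contiguously      */
        if (rb->used + tail_room + need > RB_SIZE)
            return false;                          /* not enough room at the front */
        rb->buf[rb->head] = RB_SKIP_MARK;          /* tell the reader to wrap     */
        rb->used += tail_room;                     /* the tail space is wasted    */
        rb->head = 0;
    } else if (rb->used + need > RB_SIZE) {
        return false;                              /* buffer full                 */
    }

    rb->buf[rb->head] = len;                       /* length prefix               */
    memcpy(&rb->buf[rb->head + 1u], data, len);
    rb->head += need;
    if (rb->head == RB_SIZE)                       /* landed exactly on the end   */
        rb->head = 0;
    rb->used += need;
    return true;
}

/* Look at the next packet without copying: hand back a pointer into the buffer. */
bool rb_peek(pkt_rb_t *rb, const uint8_t **pkt, uint8_t *len)
{
    if (rb->used == 0)
        return false;
    if (rb->buf[rb->tail] == RB_SKIP_MARK) {       /* follow the skip marker      */
        rb->used -= RB_SIZE - rb->tail;
        rb->tail = 0;
    }
    *len = rb->buf[rb->tail];
    *pkt = &rb->buf[rb->tail + 1u];
    return true;
}

/* Release the packet returned by rb_peek() once it has been processed. */
void rb_pop(pkt_rb_t *rb)
{
    uint16_t need = (uint16_t)rb->buf[rb->tail] + 1u;
    rb->tail += need;
    if (rb->tail == RB_SIZE)
        rb->tail = 0;
    rb->used -= need;
}
```

In a real system, pushes happen in the NRF interrupt and peek/pop in the main loop, so the shared fields would need the usual protection (for example, briefly disabling the NRF interrupt around the pop).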

3: Communication Protocol

To be continued... Tomorrow ..

 
