I. Summary
Most of the TCP-based webcam design is the same as that in the blog post "UDP-based Webcam Design and Implementation". This post uses NicheStack as the TCP/IP protocol stack (the same design can likewise be implemented with the lwIP stack). For the protocol analysis and the PC-side design, see the blog post "Network Routine Analysis and Client Program Design Based on the NicheStack Protocol Stack".
II. Experimental Platform
Hardware Platform: diy_de2
Software Platform: Quartus II 9.0 + Nios II 9.0 + Visual Studio 2010
III. Hardware
For this part, refer to the blog post "Design and Implementation of UDP-based Network Cameras".
IV. Underlying Software Design and Host Computer Program
This blog post focuses on the design of the underlying software (Nios II) and the host computer program, as shown in the communication flowchart between the underlying layer and the host computer (Figure 1).
Figure 1 Communication flowchart between the underlying layer and the PC
The above is the overall flowchart of the system. The following describes some key debugging processes.
1. Basic Functions of Image Transmission
Debug the system based on the debugging experience from "Design and Implementation of UDP-based Network Cameras".
First, rewrite the buffer on the C# side to verify that the C# display control works correctly; see Figure 2 for the debugging result.
Figure 2 C# control display debugging diagram
The underlying layer sends packets of 1500 bytes, of which the Ethernet, IP, and TCP headers take 54 bytes. A packet can therefore carry (1500 - 54)/2 = 723 pixels (2 bytes per pixel), so one packet fills two full 320-pixel lines of the control plus 83 more pixels. The details can be seen at the top of the control.
Second, write fixed data in the Nios II and send it to the C# end; verify that it arrives correctly. TCP pull mode is used here: the C# client sends a command manually, and the underlying layer replies with one packet of data. After manually sending 107 commands, the C# control displays a frame of fixed-color image sent from the underlying layer, as shown in Figure 3.
Figure 3 Fixed-color image sent from the underlying layer
Third, based on the verification of the two steps above, add the SRAM read to transfer the acquired image, and add an automatic command-sending function on the C# side; the image data can then be received continuously on the C# side, as shown in Figure 4.
Figure 4 Image
2. Speed Test and Improvement of Image Transmission
The image can be displayed after step 1, but the speed is very low: a frame takes about 30 s, and the network transmission speed is about 10 kb/s. The transmission is slow because it uses pull mode, that is, the C# side sends a command and the underlying layer transmits one packet of data; a frame requires 107 such commands, which is extremely inefficient. In addition, the C# program itself also slows things down.
Improvement 1: the C# client removes unnecessary processing, and the receiving and drawing parts run in a separate thread. The code is as follows:
    private void TelnetThread()
    {
        while (socket.Connected)
        {
            try
            {
                Receive();  // remove unnecessary output processing to improve receive efficiency
                //string str = Receive();
                //string str = "";
                //str = str.Replace("\0", "");
                //string delim = "\b";
                //str = str.Trim(delim.ToCharArray());
                //if (str.Length > 0)
                //{
                //    Console.WriteLine(str);
                //    if (str == outputMark + back)
                //    {
                //        // backspace key processing
                //        this.rtbConsole.ReadOnly = false;
                //        int curPos = rtbConsole.SelectionStart;
                //        this.rtbConsole.Select(curPos - 1, 1);
                //        this.rtbConsole.SelectedText = "";
                //        this.rtbConsole.ReadOnly = true;
                //    }
                //    else
                //    {
                //        Log(LogMsgType.Incoming, str);
                //    }
                //}
                Thread.Sleep(100);
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
        this.sbpStatus.Text = "Status: disconnected";
    }
This improvement greatly increases the data receiving speed on the C# side and eliminates the problem that data packets could not be sent continuously from a for loop. In addition, each time the C# side receives a full frame of image data, it automatically sends a command so that the underlying layer can transmit the next frame.
Improvement 2: at the underlying layer, point the send pointer back to the first address of the send array after each packet is sent. The code is as follows:
    for (j = 0; j < 107; j++)
    {
        // After each packet is sent, point the pointer back to the first address of the array
        tx_wr_pos = tx_buf;
        for (k = 0; k < 1446; k++)
        {
            if ((k % 2) == 0)
                tx_buf[k] = a[k / 2 + j * 723];
            else
                tx_buf[k] = b[k / 2 + j * 723];
            tx_wr_pos++;
        }
        // After the array is full, send it out
        send(conn->fd, tx_buf, tx_wr_pos - tx_buf, 0);
        printf("nios working\n");
    }
Improvement 3: the underlying layer uses a PIO interrupt. Originally, the interrupt flag bit was cleared first on each entry into the PIO interrupt service routine, yet the system kept re-entering the ISR even though there was no interrupt source; the reason is unclear. The fix is to disable the interrupt enable directly on entering the ISR and then clear the interrupt flag; only after uC/OS-II has completely transmitted the image data is the interrupt enabled again. In addition, the interrupt must not be enabled before the uC/OS-II system starts, so the PIO interrupt enable should be kept disabled during initialization and only turned on when a command is received from the client. See Figure 1.
Through the above three improvements, the transmission speed is greatly improved: a frame of image is now displayed in less than 3 s, and the display is clear.
V. Summary
With the above several blog posts, we have completed the design and implementation of network cameras based on both the UDP and TCP transmission protocols, focusing on the debugging process, that is, the process of discovering and solving problems, which demonstrates the correctness of the design. The network transmission speed has also been tested, but there is still much room for improvement, such as writing one frame of image while another is being transmitted.
Another key point is that the image compression described in these posts is lossy: the final resolution is 320×240 in RGB555 format. The next step will therefore focus on image codec algorithms to minimize the bit rate and make the video smoother.