Training Report
Practical Training Topic: Video Monitoring
Purpose:
1. Understand and master the principles of video monitoring;
2. Read through the video monitoring source program and understand the usage and functions of the data structures it uses;
3. Modify the program to enable remote video monitoring, with image capture and transmission, between two PCs;
4. Port the program to the mini2440 development board to enable remote video monitoring, with image capture and transmission, between the board and a PC.
Practical training equipment: one or more mini2440 development boards, a PC running Windows XP or later, and a virtual machine under Windows running Ubuntu 10.04 (Linux 2.6.32.2 kernel).
Training Content:
Day 1: Self-study and testing of the source program
The details are as follows:
Read through the program to identify and interpret its key points, such as video capture, video display, image file acquisition, and file naming rules. Focus on the luvcview.c and v4l2uvc.c files. Become familiar with the data structures and functions, master the role and usage of the members of the vdIn structure, and understand the usage, role, and importance of framebuffer and tmpbuffer: tmpbuffer stores a single captured image, while framebuffer stores the sequence of these images from which the video is finally formed; each frame is 153600 bytes.
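As a sanity check on that figure: 153600 bytes is consistent with a 320x240 frame in the packed YUYV (YUV 4:2:2) format at 2 bytes per pixel. The resolution and pixel format are my assumption; the report only states the byte count. A minimal sketch of the arithmetic:

```c
/* Size in bytes of one packed YUYV (YUV 4:2:2) frame: 2 bytes per pixel.
   320 x 240 x 2 = 153600, matching the frame size quoted in the report. */
unsigned int yuyv_frame_size(unsigned int width, unsigned int height)
{
    return width * height * 2;
}
```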
Day 2: Install the SDL library in Ubuntu 10.04, then compile and run the video monitoring source package luvcview_20070512;
The procedure is as follows:
First, copy the SDL library package to any directory; here we copy it to the /mnt/ directory and decompress it. Open a terminal, enter the decompressed directory, and run ./configure to generate the Makefile. Then run make and make install in the terminal to build and install the SDL library.
Second, copy the video monitoring source files to the /mnt/ directory and enter that directory. Run make to compile. After compilation, plug in the camera and run ./luvcview in the terminal; a window appears on the screen showing the captured video.
Day 3: Use the TCP protocol for video transmission;
The procedure is as follows:
Make two copies of the program: one for the server and one for the client. The server creates a TCP server and waits for a client connection; after the client connects, the server waits to receive the video data, then calls the display function to show the image. The client, after connecting to the server, captures the video and sends the captured data to the server.
Specific code changes: on the server, comment out the capture code, call recv() to receive the video data (framebuffer) from the client, and keep the code that displays the image. On the client, keep the capture code, call send() to send the video data (framebuffer) to the server, and comment out the display code. This enables remote transmission of the video.
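One pitfall in these changes: TCP is a byte stream, so a single send() of the 153600-byte framebuffer may arrive split across several recv() calls. A sketch of helper loops that retry until the whole frame has moved (the function names are illustrative, not from the source program):

```c
#include <sys/socket.h>
#include <sys/types.h>

/* Send exactly len bytes, retrying on partial sends.
   Returns len on success, -1 on error or connection close. */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n <= 0)
            return -1;              /* error or connection closed */
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}

/* Receive exactly len bytes, retrying on partial reads. */
ssize_t recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = recv(fd, p, left, 0);
        if (n <= 0)
            return -1;              /* error or peer closed the socket */
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```

On the server, recv_all(client_fd, framebuffer, 153600) would then take the place of a bare recv(), and send_all plays the corresponding role on the client.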
Day 4: Use TCP and Qt4 to transmit images between PCs and display them in a loop:
Implementation mechanism: the client sends images to the server over TCP. The server saves the images in a folder and displays them cyclically in an interface built with Qt4.
Requirements: Complete and ordered transmission of at least 20 images.
Implementation:
Client: connect to the server and use an integer global variable to limit the number of images (tmpbuffer) sent; here we set it to 20, after which transmission stops. To give the receiving end time to receive each image in full, the send delay is increased from 0.01 seconds in the source code to 1 second.
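The counting-and-delay logic on the client can be sketched as follows. The names send_frames and dummy_send are illustrative, not from the source program; dummy_send stands in for the real capture-and-send step.

```c
#include <unistd.h>

/* Stand-in for the real capture-and-send step; returns 0 on success. */
static int dummy_send(int index) { (void)index; return 0; }

/* Send at most max_frames frames, sleeping delay_s seconds between
   sends so the receiver can keep up (the report raises this delay
   from 0.01 s to 1 s).  Returns the number of frames actually sent. */
int send_frames(int max_frames, unsigned int delay_s,
                int (*send_frame)(int index))
{
    int sent = 0;
    for (int i = 0; i < max_frames; i++) {
        if (send_frame(i) != 0)
            break;                  /* stop on send failure */
        sent++;
        if (delay_s > 0 && sent < max_frames)
            sleep(delay_s);         /* pacing between frames */
    }
    return sent;
}
```

With max_frames set to 20 and delay_s set to 1, this reproduces the behavior described above: 20 sends, one second apart, then stop.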
Server: create a server and wait for a client connection. After connecting, receive each image into a buffer larger than the largest transmitted image; here we set the buffer (Buf) length to 7000 bytes, which is large enough. Define a 100-byte array to hold the image name (enough for the names of 20 images). On each loop iteration, a name is formatted from the index 0 to 19 and written into this array; the open function is then called to create and open each file in turn, and the data in Buf is written into the image file with that name, until all 20 images have been transferred.
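The per-image naming and writing step can be sketched like this. The "imgN.jpg" name pattern is an assumption for illustration; the report only says a name is formatted from the index 0..19 into a 100-byte array.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write the len bytes received in buf to a file named after index
   (hypothetical "imgN.jpg" pattern).  Returns 0 on success, -1 on error. */
int save_frame(const unsigned char *buf, size_t len, int index)
{
    char name[100];                               /* 100-byte name array */
    snprintf(name, sizeof name, "img%d.jpg", index);
    int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, buf, len);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}
```

Calling save_frame once per loop iteration, with the index running from 0 to 19, produces the 20 sequentially named image files.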
Day 5: Port the video transmission program to the development board to capture images and video on the board and display them on the PC:
Implementation mechanism: cross-compile the image transmission client into a program that can run on the development board (here we use the arm-linux-gcc-4.4.3 cross-compiler), then set up an NFS environment and run the executable on the board to capture video and send it over TCP to the server on the PC, which displays the video.
Specific operations:
Development board: flash the supervivi-128M bootloader, the zImage_X35 kernel image compiled from linux-2.6.32.2, and the rootfs_qtopia_qt4.img file system onto the board so that USB cameras are supported.
Client: connect to the server and send the image and video data (framebuffer) to the server. Runs on the development board.
Server: create a server and wait for a connection; after a successful connection, receive and display the image and video data. Runs on the PC.
Code modifications (client side):
(1) Modify the Makefile and change the compiler to the cross-compiler arm-linux-gcc; remove the value of the SDL option.
(2) Modify the luvcview.c file: delete all SDL-related code and any other unnecessary code. The client only captures images and sends them to the server for display, so code unrelated to capture and sending can be deleted.
(3) Modify the v4l2uvc.c file and delete the SDL_Delay call.
Code compilation: open a terminal, set up the cross-compilation environment, enter the directory, and compile. Follow the compiler's messages to comment out or modify code until the build completes without errors.
Set up the NFS environment: run shares-admin in a terminal inside the virtual machine to create the shared directory, then on the development board run mount -t nfs -o nolock,rsize=1024,wsize=1024 <virtual machine IP>:<shared directory> <local mount point> to mount the share.
Run the program on the development board: copy the cross-compiled executable to the shared directory and run it on the board through the NFS mount.
Day 6: Send images from the development board to the server on the PC, transmitting each image accurately:
Implementation mechanism: the client runs on the development board, captures images, and transmits them to the server on the PC. The server displays the images cyclically in a Qt4 interface.
Requirements: send at least 20 images, ensure each image is received in full, and display them cyclically.
Specific implementation:
Development board: same kernel and file system as in the Day 5 experiment.
Client: runs on the development board and transmits images. Because the CPU clock frequency and other hardware differ between the development board and the PC, a send delay is required; here we set it to 5 seconds, i.e., one image is sent every 5 seconds.
The client is again compiled with the arm-linux-gcc-4.4.3 cross-compiler so that it can run on the development board.
Server: the Day 4 server for receiving images between PCs is reused; it receives the images, saves them to a local folder, and displays them cyclically in the Qt4 interface. To ensure the images sent by the client are received in full, we also set a 5-second delay in the receive loop, which proved effective in the experiment.
Summary:
This video monitoring project has benefited me greatly. As I came to understand and master the principles and practice of video monitoring, I grew very interested in the project; it stimulated my enthusiasm for learning embedded systems, gave me valuable experience for future project development, and strengthened my determination to study embedded development. Finally, I would like to thank my teachers and classmates for their help.
Cao Leping
PM