H.264 embedded video surveillance system project guidance


Reprint: please credit http://blog.csdn.net/ayangke — Yang Yi, QQ: 843308498

I am about to start job hunting, so I want to review my previous projects and offer some guidance to fellow students.

Hardware: mini2440; Software: Linux 2.6.32


I. Introduction to H.264

H.264 is a video compression coding standard. It delivers high-quality image transmission at low bandwidth (under 2 Mbit/s). Compared with earlier standards it saves a substantial share of the transmitted bit stream at the same image quality — the commonly cited figures are on the order of 64% versus the previous-generation MPEG-2, and a smaller but still significant saving versus MPEG-4.


II. H.264 Video Surveillance System Architecture

The architecture diagram is as follows:


Note: it is best to use the mesh 2000 camera, but it has been discontinued and is hard to find. I bought a newer camera and in the end could not use UDP transmission, because a single captured image frame is too large: in this setup UDP can send at most about 2 KB per packet, and anything larger leads to packet loss, which shows up on the client as green blocks in the displayed image.


III. The Video Capture Interface: V4L2

The first problem to solve is, of course, how to capture data from the camera. To operate the camera we have to go through its driver. Fortunately, the Linux kernel already integrates most camera drivers, so we usually don't need to worry about this; if a driver is not in the kernel, we only need to port it ourselves.

V4L2 is short for Video for Linux Two. It is a set of standard APIs provided by Linux: applications operate audio and video devices by calling their drivers through this API, which hides the hardware details and presents a common interface. Under Linux, V4L2 is exposed to user space through the ioctl system call; issuing different ioctl commands performs different operations on the device.

The procedure is as follows:


First open the video device, then query the device and image information (such as the maximum and minimum resolution), then query the driver's capture buffers and memory-map them into user space. Next, apply the necessary settings to the camera, such as the frame size and the pixel format (RGB, YUV, etc.), and finally start the capture. Below are some common V4L2 commands:

1 VIDIOC_REQBUFS: request the driver to allocate capture buffers

2 VIDIOC_QUERYBUF: query a buffer allocated by VIDIOC_REQBUFS (its length and mmap offset)

3 VIDIOC_QUERYCAP: query the driver's capabilities

4 VIDIOC_ENUM_FMT: enumerate the image formats supported by the current driver

5 VIDIOC_S_FMT: set the current capture format

6 VIDIOC_G_FMT: read the current capture format

7 VIDIOC_TRY_FMT: test whether a capture format is supported, without changing the driver's state

8 VIDIOC_CROPCAP: query the driver's cropping capability

9 VIDIOC_S_CROP: set the cropping rectangle of the video signal

10 VIDIOC_G_CROP: read the current cropping rectangle

11 VIDIOC_QBUF: enqueue an empty buffer into the driver's capture queue

12 VIDIOC_DQBUF: dequeue a filled buffer (read the captured data)

13 VIDIOC_STREAMON: start video capture (streaming)

14 VIDIOC_STREAMOFF: stop video capture (streaming)

15 VIDIOC_QUERYSTD: query the video standards supported by the current device, such as PAL or NTSC

For concrete usage, refer to this article: http://blog.csdn.net/seven407/article/details/6401792#comments


IV. Using the H.264 Encoding Library

Through the V4L2 interface we capture video frame by frame; the second question is how to compress the frame data in memory to the H.264 standard. H.264 is a specification, and many organizations have written implementations of it. I use the T264 encoding library, developed by a domestic video coding community. It complies with the H.264 standard and incorporates strengths of the JM, X264, and Xvid source code, and the compression encoding is done under Linux. Download the T264 source code and compile it; this generates a series of *.obj files in the T264/avr folder. Applications can link against these object files directly and use the functions they provide to encode YUV-format video. The figure shows the process.


First, the init_param function reads the encoding configuration from the configuration file, including the frame size, the I-frame interval, and the number of reference frames. A reference configuration file, enconfig.txt, is included in the T264 source tree; generally only the frame size needs to be changed. Then T264_open uses the configuration read by init_param to initialize the T264 encoder. Next, T264_malloc allocates space for the encoder to store the encoded frame data. T264_encode then performs the encoding; its parameters include the memory address that the video data was mapped to in the earlier V4L2 step, and it returns the size of the encoded frame. In this way a frame of video data is compressed into the memory allocated for T264. When encoding is no longer needed, call T264_close to shut down the encoder.

V. UDP-Based Real-Time Video Transmission

Once we have the H.264-compressed data, we need to send it to the Windows client, so the next question is how to communicate with Windows and transmit the video data. Anyone who has touched network programming knows how to transmit data over UDP through sockets. Note: oversized compressed packets cannot be transmitted over UDP here. Because the Windows client was not written by me — I used a ready-made one — I could only use UDP; if you can write your own Windows application, consider using TCP instead, and then the camera is no longer limited to the mesh 2000. The UDP transmission process is as follows:

There is no need to say more about UDP itself; anyone who has studied network programming knows it.


VI. Main Program Flowchart


VII. Conclusion

The reason I took on this project was that I had watched the embedded-training videos from a domestic training organization, but even after watching them I still had no clear idea. I then consulted a lot of material to work out the general approach. Later I found an article — a master's thesis from Guilin University of Electronic Technology — that discussed this project in detail, tested a lot of data, and also covered the Windows application. I believe the original author of this project is that person: the Windows application in the training materials looks identical to the screenshots in the thesis, the source code was never provided, and the bugs were never fixed. Later, after I uploaded this article to that organization's discussion group, I was kicked out. Never mind — it has nothing to do with me. Download link: http://download.csdn.net/detail/ayangke/3969255

This article only gives the general idea of the implementation; the concrete implementation still requires reading more material and writing the code yourself! If you have any questions, contact me — QQ: 843308498.
