0 Introduction
With the development of video codec technology, computer networking, digital signal processing and embedded systems, remote video surveillance systems built around an embedded network video server have begun to appear on the market. Such a system converts the analog video signal from the camera into a compressed video stream with a built-in embedded video encoder and transmits the stream over a computer network. An embedded network video server integrates video encoding, network communication, system control and other functions, and directly supports network video transmission and network management, extending the reach of surveillance to an unprecedented breadth. In a remote video surveillance system, the raw video stream captured by the camera must be compressed before transmission. FFmpeg can compress the raw video into an H.264 stream; H.264 is a widely used format for high-quality video recording, compression and distribution, so FFmpeg is used to implement the codec in this system.
1 System Solutions
The system runs embedded Linux on an S3C2440 platform and uses the CMOS camera OV9650 to capture real-time video image data. The raw video is compressed and encoded into an H.264 stream with FFmpeg and transmitted over the network. At the receiving processing end the stream is decoded with FFmpeg, and the user can view the remote video image in real time through an OpenCV display.
The system consists of two parts, a capture/sending end and a receiving/processing end, which communicate with each other in a client/server design. The receiving end sends a control signal to the capture end; the capture end then opens the camera and starts collecting video data. The raw video data is in YUV422 format and is compressed by FFmpeg into an H.264 video stream, which is transmitted over the communication network to the receiving end. The receiving end receives the video stream, decodes it with FFmpeg and displays it with OpenCV. The capture end uses Samsung's S3C2440, based on the ARM920T core, as the embedded microcontroller; the receiving end is an ordinary computer. The system scheme is shown in Figure 1.
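The article does not specify how the H.264 stream is framed on the wire. A common approach for a client/server design like this is to prefix each encoded frame with its length, so the receiver can split the TCP byte stream back into frames. The following is a minimal sketch under that assumption (send_frame, recv_frame and the 4-byte big-endian header are this example's choices, not the original system's protocol); the demo round-trips a frame through a pipe standing in for the real socket.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Write one encoded frame as a 4-byte big-endian length header followed
 * by the payload, so the receiver can find frame boundaries in the
 * byte stream. */
static int send_frame(int fd, const unsigned char *buf, uint32_t len) {
    unsigned char hdr[4] = {
        (unsigned char)(len >> 24), (unsigned char)(len >> 16),
        (unsigned char)(len >> 8),  (unsigned char)len
    };
    if (write(fd, hdr, 4) != 4) return -1;
    return (write(fd, buf, len) == (ssize_t)len) ? 0 : -1;
}

/* Read exactly n bytes, looping over short reads. */
static int read_full(int fd, unsigned char *buf, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0) return -1;
        got += (size_t)r;
    }
    return 0;
}

/* Receive one length-prefixed frame; returns payload length or -1. */
static int recv_frame(int fd, unsigned char *buf, uint32_t cap) {
    unsigned char hdr[4];
    if (read_full(fd, hdr, 4) != 0) return -1;
    uint32_t len = (uint32_t)hdr[0] << 24 | (uint32_t)hdr[1] << 16 |
                   (uint32_t)hdr[2] << 8  | hdr[3];
    if (len > cap || read_full(fd, buf, len) != 0) return -1;
    return (int)len;
}

/* Round-trip a small "frame" through a pipe standing in for the TCP
 * connection; returns 1 on success. */
static int demo_roundtrip(void) {
    int fds[2];
    if (pipe(fds) != 0) return 0;
    const unsigned char frame[] = "fake H.264 NAL data";
    unsigned char out[64];
    if (send_frame(fds[1], frame, sizeof frame) != 0) return 0;
    int n = recv_frame(fds[0], out, sizeof out);
    close(fds[0]);
    close(fds[1]);
    return n == (int)sizeof frame && memcmp(frame, out, sizeof frame) == 0;
}
```

On a real deployment the two ends would use connect()/accept() sockets instead of a pipe, but the framing logic is the same.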
2 Acquisition Send End
The capture/sending end consists of two main parts, the embedded Linux platform and the camera: the embedded Linux platform requires a cross-compilation environment to be built, and the camera requires a driver to work properly.
The embedded Linux platform uses Samsung's S3C2440A processor as the hardware platform. The S3C2440A is a 16/32-bit embedded processor based on the ARM920T core, clocked at 400 MHz and up to 533 MHz. It supports 0.3/1.3/2.0-megapixel CMOS cameras, supports both Linux 2.4 and WinCE 4.2, and is well suited to embedded systems that are sensitive to power consumption and cost.
The camera is the CMOS sensor OV9650 produced by OmniVision. It offers high sensitivity, low power consumption, high resolution (up to 1300x1028 pixels), support for a large number of commonly used image formats, and automatic image control, and its interface is compatible with the S3C2440. The output image is up to 1.3 megapixels, and the output formats include SXGA, VGA, QVGA, CIF and QCIF, so images of different sizes can be produced. The maximum frame rate depends on the output format and can reach 120 f/s. The 8-bit output data formats are YUV/YCbCr (4:2:2), RGB (4:2:2) and raw RGB.
2.1 Building the Embedded Linux Platform
The basic process of establishing an embedded Linux system is as follows: first build the cross-compilation environment on the host, then port the Linux bootloader to the target board, and finally build the embedded Linux system and port it to the target board. Building the embedded Linux system mainly involves trimming and configuring the kernel, porting the kernel and peripheral drivers to match the actual hardware, and constructing the Linux root file system.
2.2 Camera Driver Configuration
The CMOS camera driver is written as a kernel module, because a driver in module form can be loaded into and unloaded from the Linux kernel dynamically, without rebuilding the kernel.
Once the driver is loaded, the camera can be manipulated just like an ordinary file. For example, after defining int m_fileV4L2, the camera is opened with m_fileV4L2 = open("/dev/camera", O_RDWR), a frame of video data is read into the array inyuv422 with read(m_fileV4L2, inyuv422, SIZE), and the camera is closed with close(m_fileV4L2). Once the video data is available, it can be encoded with FFmpeg.
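The open()/read()/close() pattern above can be sketched as a small self-contained C example. Since the real /dev/camera node only exists on the target board, the demo below substitutes an ordinary temporary file for the device (the names grab_frame, FRAME_SIZE and fake_camera.bin are this example's assumptions):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FRAME_SIZE 16  /* a real YUV422 frame is width * height * 2 bytes */

/* Open a device node (or any file), read one frame's worth of bytes into
 * buf, and close it again -- the same open()/read()/close() sequence the
 * text describes for /dev/camera. */
static int grab_frame(const char *path, unsigned char *buf, size_t size) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, buf, size);
    close(fd);
    return (n == (ssize_t)size) ? 0 : -1;
}

/* Demo: write known bytes to a temp file standing in for the camera
 * device, read them back through grab_frame(); returns 1 on success. */
static int demo_grab(void) {
    const char *path = "fake_camera.bin";
    unsigned char fake[FRAME_SIZE], got[FRAME_SIZE];
    for (int i = 0; i < FRAME_SIZE; i++) fake[i] = (unsigned char)i;

    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    fwrite(fake, 1, FRAME_SIZE, f);
    fclose(f);

    int ok = grab_frame(path, got, FRAME_SIZE) == 0 &&
             memcmp(fake, got, FRAME_SIZE) == 0;
    remove(path);
    return ok;
}
```

On the board, a real capture loop would also use ioctl() calls to negotiate the frame format with the V4L driver before reading.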
2.3 FFmpeg encoding
2.3.1 Introduction to FFmpeg
FFmpeg is an open-source, free, cross-platform audio and video streaming solution, licensed under the LGPL or GPL depending on the components chosen. It is a complete open-source suite that integrates recording, conversion and audio/video codec functionality. FFmpeg was developed on the Linux operating system but can be compiled and used on most operating systems. It supports more than 40 encoders, including MPEG, DivX, MPEG-4, AC3, DV and FLV, and more than 90 decoders, covering AVI, MPEG, OGG, Matroska, ASF and others; open-source players such as TCPMP, VLC and MPlayer all use FFmpeg.
The "FF" in FFmpeg stands for Fast Forward.
2.3.2 Encoding
The OV9650 camera outputs data in YUV422 format, while FFmpeg encoding requires YUV420 input, so before encoding the YUV422 data must first be converted to YUV420. The function sws_scale() in FFmpeg implements this conversion.
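To make the data relationship concrete, here is a plain-C sketch of the same conversion for packed YUYV input: every pixel keeps its Y sample, and the U/V samples of each 2x2 pixel block are averaged down. This is an illustrative stand-in for what sws_scale() does, not FFmpeg's actual implementation (the function name and the row-averaging choice are this example's assumptions):

```c
#include <assert.h>

/* Convert one packed YUYV (YUV422) frame to planar YUV420P (I420).
 * Layout per row: Y0 U0 Y1 V0 Y2 U1 Y3 V1 ...  w and h must be even. */
static void yuyv422_to_yuv420p(const unsigned char *in, unsigned char *y,
                               unsigned char *u, unsigned char *v,
                               int w, int h) {
    for (int row = 0; row < h; row++) {
        const unsigned char *line = in + row * w * 2;
        for (int col = 0; col < w; col++)
            y[row * w + col] = line[col * 2];        /* keep every Y */
        if (row % 2 == 0) {                          /* chroma: 1 row in 2 */
            const unsigned char *next = line + w * 2;
            for (int col = 0; col < w; col += 2) {
                int idx = (row / 2) * (w / 2) + col / 2;
                /* average the chroma of this row and the next */
                u[idx] = (unsigned char)((line[col * 2 + 1] + next[col * 2 + 1] + 1) / 2);
                v[idx] = (unsigned char)((line[col * 2 + 3] + next[col * 2 + 3] + 1) / 2);
            }
        }
    }
}

/* Sanity check on a constant-colour 2x2 frame (Y=100, U=50, V=200). */
static int demo_convert(void) {
    unsigned char in[8] = { 100, 50, 100, 200, 100, 50, 100, 200 };
    unsigned char y[4], u[1], v[1];
    yuyv422_to_yuv420p(in, y, u, v, 2, 2);
    return y[0] == 100 && y[3] == 100 && u[0] == 50 && v[0] == 200;
}
```

The output planes (w*h bytes of Y, w*h/4 bytes each of U and V) are exactly the layout FFmpeg expects for PIX_FMT_YUV420P input.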
Before encoding with FFmpeg, the FFmpeg library must be initialized: register all codecs and file formats, set encoder parameters such as the bitrate, frame rate and pixel format, then find the encoder and open it, after which encoding can begin. The parameters are set through the members of the struct AVCodecContext; for example, AVCodecContext->bit_rate, AVCodecContext->width and AVCodecContext->height set the bitrate, width and height, and AVCodecContext->pix_fmt = PIX_FMT_YUV420P selects the YUV420 pixel format. The core encoding function is avcodec_encode_video(). Each captured frame is passed to avcodec_encode_video(), which encodes it into an H.264 video stream. The encoding process is shown in Figure 2.
The roles of the main functions in each step of the encoding process are as follows:
1) av_register_all(): registers all file formats and codecs; without this step the codec cannot be opened.
2) av_open_input_file(): opens the camera video file.
3) av_find_stream_info(): finds the video stream.
4) avcodec_find_encoder(): finds the encoder. The encoder parameters must be initialized in pCodec; this initialization is very important and has a great influence on the image quality of the encoded video.
pCodec = avcodec_find_encoder(CODEC_ID_H264); // find the H.264 encoder
5) avcodec_alloc_frame(): allocates memory for the frame to be encoded.
pFrame = avcodec_alloc_frame(); // pFrame is in AVFrame format
6) avcodec_open(): opens the encoder.
7) av_read_frame(): reads one frame of video data from the video stream.
8) avcodec_encode_video(): encodes one frame of video data.
9) avcodec_close(): closes the encoder.
10) av_close_input_file(): closes the camera video file.
3 Receiving Processing End
The receiving processing end can communicate with any one of the capture/sending ends. After the connection is established, it receives the video data sent by the sending end and displays it after FFmpeg decoding.
3.1 FFmpeg Decoding
Decoding with FFmpeg follows roughly the same process as encoding, except that the core function is avcodec_decode_video(). After the receiving end receives a frame of data, the data is placed into memory in AVFrame format with avpicture_fill() and then decoded with the avcodec_decode_video() function. The decoding process is shown in Figure 3.
3.2 Video Display
FFmpeg decodes into YUV (I420) format, which must be converted to RGB (RGB24) for display; this format conversion can again be performed with the sws_scale() function in FFmpeg.
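The per-pixel arithmetic behind this YUV-to-RGB step can be sketched with BT.601-style fixed-point coefficients. The exact coefficients and range handling inside FFmpeg's sws_scale() differ; this is only an illustrative approximation (the function names are this example's own):

```c
#include <assert.h>

static unsigned char clamp_u8(int v) {
    return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Convert one YUV pixel to RGB24 using fixed-point integer math
 * (coefficients are 1024-scaled BT.601 values: 1.402, 0.344, 0.714,
 * 1.772). sws_scale() applies equivalent arithmetic over the whole
 * frame; this sketch is an approximation, not FFmpeg's implementation. */
static void yuv_to_rgb24(int y, int u, int v, unsigned char rgb[3]) {
    int d = u - 128, e = v - 128;
    rgb[0] = clamp_u8(y + ((1436 * e) >> 10));           /* R */
    rgb[1] = clamp_u8(y - ((352 * d + 731 * e) >> 10));  /* G */
    rgb[2] = clamp_u8(y + ((1815 * d) >> 10));           /* B */
}

/* Neutral chroma (U = V = 128) must map to a gray pixel R = G = B = Y. */
static int demo_gray(void) {
    unsigned char rgb[3];
    yuv_to_rgb24(128, 128, 128, rgb);
    return rgb[0] == 128 && rgb[1] == 128 && rgb[2] == 128;
}
```

Running this conversion over every pixel of the I420 planes yields the packed RGB24 buffer that the display stage consumes.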
The video is displayed with OpenCV. The core display function is cvShowImage(char* name, IplImage* image): the RGB (RGB24) data obtained from the conversion is wrapped into OpenCV's IplImage format and then displayed in the monitoring window, as shown in Figure 4.
4 Concluding Remarks
With the development of video compression technology, embedded video surveillance has come to occupy an important position in the monitoring field. Using the S3C2440 as the embedded hardware platform, capturing data from the camera, and performing FFmpeg encoding and decoding across the embedded Linux and Windows operating systems, this work provides a feasible approach to designing video compression and transmission for a practical embedded video surveillance system.