Design of Multi-Channel Embedded H.264 Video Servers

1 Introduction

With the rapid development of computer networks and video compression technology, multimedia technology has attracted increasing research and application attention, and video servers, in particular embedded video servers [1][2], have developed rapidly.
Because an embedded video server is small and flexible to install, it can provide real-time video monitoring to any authorized user wherever an Internet connection is available, avoiding the expensive cost of laying dedicated lines for video signal transmission.
An embedded video server is a multimedia information server that provides video acquisition, video data compression, and network transmission. Because it transmits video streams, which demand high real-time performance and involve large data volumes, it must meet the following requirements:

1. High bandwidth, to ensure efficient transmission of high-volume multimedia data.
2. QoS support, to guarantee transmission quality and resource reservation.
3. Support for multiple transmission modes.

Because embedded environments have limited resources, the QoS of video data is difficult to guarantee in terms of real-time transmission and image quality. In particular, with multiple channels, the quality of real-time video transmission drops sharply as the number of channels increases. The performance bottleneck of an embedded video server lies mainly in transmitting video data, so shortening the transmission time of video data improves server performance. Transmission time can be reduced in two ways:

1. Reduce the amount of information carried by the video data, mainly by using high-performance compression coding.
2. Adopt a transmission protocol suited to multimedia data.

Given how scarce embedded system resources are, we selected H.264 [3] compression and the RTP [4] protocol, which is designed specifically for multimedia data transmission. Experimental results show that the embedded video server occupies less bandwidth than previous video servers without degrading image quality.
2 System Hardware Composition
The design uses the HHARM2410 embedded development kit, which consists of a core board and a base board. The core board integrates a Samsung S3C2410 processor (203 MHz clock speed), 64 MB of SDRAM, and 16 MB of Flash. The base board provides the following peripheral interfaces: a four-wire RS-232 serial port, a USB host interface, a 10/100 Mbps adaptive Ethernet interface, a TFT LCD interface, and a touch screen interface. The operating system is a tailored embedded Linux. The structure of the embedded video server is shown in Figure 1.
Figure 1: Embedded video server hardware composition
The application collects H.264 video streams through the encoder module, packages them in real time according to the RTP protocol, and performs real-time stream transmission (IP streaming) through the Ethernet interface. In addition, an 802.11b/g wireless module can be added for wireless network transmission, and an IDE hard disk can be added for local storage of H.264 video and images.
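To make the packaging step concrete, the following minimal sketch shows how an encoded H.264 NAL unit can be wrapped in the fixed 12-byte RTP header of RFC 3550 and sent over UDP. The dynamic payload type 96, the 1500-byte buffer, and the omission of FU-A fragmentation for oversized NAL units are simplifying assumptions, not details from the original design:

```c
/* Minimal RTP packetization sketch (RFC 3550 fixed header over UDP). */
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>

struct rtp_header {           /* 12-byte fixed RTP header, no padding */
    uint8_t  vpxcc;           /* version=2, no padding/extension/CSRC */
    uint8_t  mpt;             /* marker bit + payload type */
    uint16_t seq;             /* sequence number (network byte order) */
    uint32_t timestamp;       /* 90 kHz clock for video */
    uint32_t ssrc;            /* stream source identifier */
};

void send_nal(int sock, const struct sockaddr_in *dst,
              const uint8_t *nal, size_t len,
              uint16_t *seq, uint32_t ts, uint32_t ssrc, int last)
{
    uint8_t pkt[1500];
    struct rtp_header h = {
        .vpxcc     = 0x80,                    /* version 2 */
        .mpt       = (last ? 0x80 : 0) | 96,  /* marker on last NAL of frame */
        .seq       = htons((*seq)++),
        .timestamp = htonl(ts),
        .ssrc      = htonl(ssrc),
    };
    memcpy(pkt, &h, sizeof(h));
    memcpy(pkt + sizeof(h), nal, len);        /* assumes len <= MTU - 12 */
    sendto(sock, pkt, sizeof(h) + len, 0,
           (const struct sockaddr *)dst, sizeof(*dst));
}
```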
3 Server Software Design
The server is the core of the entire system and runs on the embedded Linux platform. The embedded Linux is derived from standard Linux and is stable, secure, and efficient, with good real-time performance. The server adopts a modular design. From a functional perspective, its software architecture can be divided into five modules: the acquisition module, encoding module, network transmission module, storage module, and device control module, as shown in Figure 2:
Figure 2: Server software architecture diagram
(1) The acquisition module performs video acquisition and image format conversion; the acquired image format is set to YUV. Acquisition is implemented with Video4Linux (V4L) [5], the audio and video interface specification provided by Linux, against which the drivers of all audio and video devices are written.
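As an illustration of the acquisition path, here is a minimal V4L (Video4Linux 1) capture sketch: it opens the device, requests the YUV420P palette, and reads one raw frame. The device path and CIF frame size are assumptions, and a real program would also negotiate the capture window (VIDIOCSWIN); note that the V4L1 API shown here belongs to kernels of that era and was later replaced by V4L2:

```c
/* Minimal V4L1 capture sketch: open device, select YUV palette, read a frame. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev.h>   /* V4L1 header (kernels of that era) */

#define WIDTH  352            /* assumed CIF resolution */
#define HEIGHT 288

int main(void)
{
    struct video_capability cap;
    struct video_picture pict;
    unsigned char frame[WIDTH * HEIGHT * 3 / 2];   /* YUV420 planar frame */

    int fd = open("/dev/video0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, VIDIOCGCAP, &cap) < 0) { perror("VIDIOCGCAP"); return 1; }
    printf("capture device: %s\n", cap.name);

    /* Ask the driver for YUV420P, the input format the encoder expects */
    if (ioctl(fd, VIDIOCGPICT, &pict) < 0) { perror("VIDIOCGPICT"); return 1; }
    pict.palette = VIDEO_PALETTE_YUV420P;
    if (ioctl(fd, VIDIOCSPICT, &pict) < 0) { perror("VIDIOCSPICT"); return 1; }

    /* One frame per read(); a real server loops and hands each frame
     * to the H.264 encoding module */
    if (read(fd, frame, sizeof(frame)) < 0) perror("read");

    close(fd);
    return 0;
}
```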
(2) The encoding module compresses and encodes the acquired images. There are two ways to compress the acquired image data. The first is hardware compression, in either a dedicated or a general-purpose system: a dedicated system compresses images with a special-purpose chip, while a general-purpose system uses a general-purpose chip. Compared with a general-purpose system, hardware compression with a dedicated chip is fast and reduces processor overhead. The second method is software compression, which places higher demands on the processor but is flexible. Since the hardware is mature enough, we adopt the second method and compress the acquired image data in software, using the H.264 standard. H.264 is a new coding standard; compared with other compression methods, it achieves a higher compression ratio and better image quality. Open-source H.264 encoders can be downloaded from the Internet; here, T264 is used to compress the acquired YUV data frame by frame, as sketched below.
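The per-channel capture-encode loop can be sketched as follows. The enc_open/enc_frame/enc_close names are hypothetical placeholders standing in for T264's actual entry points, which differ in detail; the frame-rate and bit-rate settings are likewise illustrative:

```c
/* Sketch of the per-channel capture-encode loop. The enc_* interface is a
 * placeholder for the real T264 encoder API, used here only for shape. */
#include <stddef.h>
#include <unistd.h>

typedef struct encoder encoder_t;                        /* hypothetical */
encoder_t *enc_open(int width, int height, int fps, int bitrate_kbps);
int  enc_frame(encoder_t *e, const unsigned char *yuv420,
               unsigned char *nal_buf, size_t nal_buf_size); /* bytes out */
void enc_close(encoder_t *e);

void encode_channel(int video_fd, int width, int height)
{
    size_t ysize = (size_t)width * height * 3 / 2;       /* YUV420 frame size */
    unsigned char yuv[ysize];                            /* C99 VLA, for brevity */
    unsigned char nal[256 * 1024];

    encoder_t *e = enc_open(width, height, 25, 384);     /* assumed settings */
    for (;;) {
        /* One YUV frame from the V4L acquisition module */
        if (read(video_fd, yuv, ysize) != (ssize_t)ysize)
            break;                                       /* capture error/EOF */
        int n = enc_frame(e, yuv, nal, sizeof(nal));
        if (n > 0) {
            /* hand the NAL units to the RTP packetizer and storage module */
        }
    }
    enc_close(e);
}
```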
(3) The network transmission module transfers live and historical multimedia data, supporting live preview and recorded playback on the browser side. The basic flow is that the bit-rate control component adjusts the bit rate of the encoded stream, and the RTP component then sends the stream onto the network. At the start of transmission, the multicast controller negotiates multicast policy based on the browser-side multicast policies supplied by the multi-user proxy. During transmission, the RTCP component monitors network conditions in real time and feeds the results back to the decision controller, which drives the splitter, video frame splitter, and bitstream composition components to dynamically reshape the streams. The transmission architecture is shown in Figure 3, followed by a sketch of the bit-rate feedback loop:
Figure 3: Architecture of video stream transmission
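One simple way to realize the feedback loop described above is to let the decision controller adjust the encoder's target bit rate from the loss fraction reported by RTCP. The sketch below uses illustrative thresholds and a multiplicative-decrease/additive-increase rule; the actual control policy in the system may differ:

```c
/* Bit-rate control sketch: nudge the encoder's target rate using the
 * receiver loss fraction carried in RTCP receiver reports.
 * Thresholds and step sizes are illustrative assumptions. */
int adjust_bitrate(int current_kbps, double loss_fraction)
{
    const int MIN_KBPS = 64, MAX_KBPS = 2048;

    if (loss_fraction > 0.05)          /* heavy loss: back off multiplicatively */
        current_kbps = current_kbps * 3 / 4;
    else if (loss_fraction < 0.01)     /* clean network: probe upward additively */
        current_kbps += 32;

    if (current_kbps < MIN_KBPS) current_kbps = MIN_KBPS;
    if (current_kbps > MAX_KBPS) current_kbps = MAX_KBPS;
    return current_kbps;               /* fed back to the encoding module */
}
```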
(4) In the storage module, the video data collected from multiple cameras is encoded, compressed, and combined into a composite media stream. The storage component writes this stream to an H.264 file and records the corresponding file information in the database, as sketched below.
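A minimal sketch of the storage step, assuming the common Annex B byte-stream format: each NAL unit is appended to the file behind a start code, and the file's metadata is later registered in the database:

```c
/* Storage sketch: append NAL units to an H.264 (Annex B) file. */
#include <stdio.h>

static const unsigned char START_CODE[4] = { 0, 0, 0, 1 };

void store_nal(FILE *fp, const unsigned char *nal, size_t len)
{
    fwrite(START_CODE, 1, sizeof(START_CODE), fp);  /* Annex B delimiter */
    fwrite(nal, 1, len, fp);
    /* After the file is closed, its name, channel, start time, and
     * duration would be inserted into the database for playback lookup. */
}
```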
(5) In the device control module, the device controller receives control commands from the local user interface or from the network and drives the decoder to control devices such as the pan/tilt (PTZ) unit and the lens.
4 Client Software Design
The client receives, decodes, and displays video data, and can dynamically set the number of encoders. From a functional perspective, the client software architecture is divided into three modules: the device control module, the network receiving and feedback module, and the display module, as shown in Figure 4 below:
(1) The device control module generates control commands from user input (such as changing the video window size, the number of received channels, or the image resolution, and starting or stopping remote monitoring) and sends them to the server over a TCP connection. The device controller on the server receives these commands and remotely controls devices such as the PTZ unit and the lens. A sketch of this control channel follows:
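The sketch below sends one fixed-layout command to the server over TCP. The three-field message layout and port 5000 are hypothetical; BSD-style sockets are used for brevity, even though the client itself runs on Windows under DirectShow:

```c
/* Client-side control channel sketch: one command per TCP round trip. */
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

struct ctrl_cmd {                 /* hypothetical wire format */
    unsigned char  type;          /* e.g. 1 = PTZ move, 2 = set resolution */
    unsigned char  channel;       /* camera channel number */
    unsigned short param;         /* command-specific parameter */
};

int send_command(const char *server_ip, unsigned char type,
                 unsigned char channel, unsigned short param)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(5000) }; /* assumed port */
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    struct ctrl_cmd cmd = { type, channel, htons(param) };
    int ok = send(s, &cmd, sizeof(cmd), 0) == sizeof(cmd) ? 0 : -1;
    close(s);
    return ok;
}
```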
(2) The network receiving and feedback module determines, from the user's access bandwidth (LAN or non-LAN) and the requested task type (live preview or historical playback), whether to receive the stream under the multicast policy. The RTP component receives the bitstream, and the RTCP component measures its packet loss rate and reports it back to the server; a sketch of the loss accounting follows Figure 4.
Figure 4: Client software architecture
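The packet-loss figure that the RTCP component reports can be derived from RTP sequence numbers with the expected-versus-received accounting of RFC 3550, reduced here to its core (sequence-number wrap cycles are ignored for brevity):

```c
/* Loss-rate sketch: expected-vs-received accounting over RTP sequence
 * numbers, simplified from RFC 3550 (wrap cycles not tracked). */
#include <stdint.h>

struct loss_state {
    uint16_t base_seq;    /* first sequence number seen */
    uint16_t max_seq;     /* highest sequence number seen */
    uint32_t received;    /* packets actually received */
    int      started;
};

void on_rtp_packet(struct loss_state *s, uint16_t seq)
{
    if (!s->started) { s->base_seq = s->max_seq = seq; s->started = 1; }
    if ((uint16_t)(seq - s->max_seq) < 0x8000)   /* newer, wrap-aware compare */
        s->max_seq = seq;
    s->received++;
}

double loss_fraction(const struct loss_state *s)
{
    uint32_t expected = (uint16_t)(s->max_seq - s->base_seq) + 1;
    if (expected <= s->received) return 0.0;     /* duplicates can overshoot */
    return (double)(expected - s->received) / expected;
}
```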
(3) In the display module, the synchronization source filter obtains the bitstream from the RTP component and, under the coordination of the controller, completes decoding and synchronized video playback. Here, the Microsoft DirectShow [6] framework is used for real-time decoding of the H.264 streams and image display.
Tests show that, for an embedded video monitoring system comprising video servers and multiple monitoring client centers, the network transmission scheme described above achieves real-time transmission of video data with good network adaptability.