Design and Implementation of ARM-Based Video Surveillance Terminals


Date: 2008-07-29 | Source: China Power Grid | Authors: Su Yixin, Hu Jie, School of Automation, Wuhan University of Technology

Introduction

Video surveillance systems are widely used in industrial, military, and civilian settings, where they play an important role in security and environmental monitoring. These systems are gradually moving from analog to digital. With the rapid development of semiconductor technology and the maturing of multimedia video codec technology, running high-performance, complex video stream compression algorithms on embedded systems has become practical. Most current monitoring systems pair a dedicated processor or an embedded computer processor with a DSP; this article discusses instead how to combine an ARM processor with software compression.

Overall Design of the Video Monitoring System

First, a general plan for the system is needed: divide it into functional modules and determine how each module is implemented. The entire video monitoring system adopts a client/server (C/S) structure and consists of two parts, the server and the client. The server comprises the capture, compression, and transmission programs running on the S3C2410 platform; the client comprises the receiving, decompression, and playback programs running on a PC. The video monitoring terminal captures real-time video from the on-site camera, compresses it, and transmits it over Ethernet to the remote monitoring PC.

As the system diagram (Figure 1) shows, video image capture and packing are completed on the server side, while receiving, unpacking, and playback are completed on the client side.

System Hardware Design

The system adopts a modular design comprising the following modules: the main controller module, the storage circuit module, the peripheral interface circuit module, and the power supply and reset circuit, as shown in Figure 2.

Main Controller Module of S3C2410

The main controller module is the core of the entire system. The S3C2410 is a 16/32-bit microcontroller based on Samsung's ARM920T processor core. It runs at up to 203 MHz, and its low-power, streamlined, fully static design makes it especially suitable for cost- and power-sensitive applications. The S3C2410 provides a wide range of on-chip resources and supports Linux, which makes it an appropriate choice for this system. It schedules the entire system: when the system is powered on, it configures the function registers of every chip, performs video stream encoding, and drives the physical-layer chip through the Ethernet controller to send the video stream.

System Storage Circuit Module

The main controller also needs peripheral storage units, namely NAND Flash and SDRAM. The NAND Flash holds the Linux bootloader, system kernel, file system, applications, environment variables, and system configuration files. The SDRAM offers fast reads and writes and serves as working memory while the system runs. The design uses 64 MB of NAND Flash and 64 MB of SDRAM.

Peripheral Circuit Module

The peripherals used in this design include a USB interface, a network interface, an RS232 interface, and a JTAG interface.

The USB host controller module of the video monitoring terminal connects to multiple USB cameras through a dedicated USB hub. During real-time monitoring, the image data captured by each camera passes through the hub to the USB host controller module, which hands it to the S3C2410 processor for centralized processing. The S3C2410 encodes and compresses the acquired images in real time, and the encoded stream is written directly to the send buffer to await transmission.

This design extends the network interface with the CS8900A, a 16-bit Ethernet controller produced by Cirrus Logic that can adapt to different application environments by setting its internal registers. The S3C2410 controls and communicates with the CS8900A through the address, data, and control lines plus a chip-select signal line. As the connection diagram (Figure 3) shows, the CS8900A is selected by the nGCS3 signal of the S3C2410, its INTRQ0 pin provides the interrupt signal, it attaches to the 16-bit data bus of the S3C2410, and the address lines use A[24:0].

The CS8900A Ethernet controller transmits data through a DMA channel. First the transmit-control and transmit-address registers are set; the controller then reads data sequentially from the specified storage area into its internal send buffer, where the MAC encapsulates and sends it. After a group of data has been sent, the chip requests a DMA interrupt, which the S3C2410 services.

The RS-232 interface connects to the PC serial bus, allowing the embedded system to be displayed and controlled from the PC. The JTAG interface is mainly used to debug the system and to program the Flash.

System Software Design

The software design of the video monitoring terminal mainly includes two aspects:

(1) Building the software platform on the hardware. Setting up an embedded Linux software development platform requires porting U-Boot, porting the embedded Linux kernel, and developing device drivers for the embedded Linux operating system.

(2) Developing the system applications on that platform. With the help of the cross-compilation toolchain, develop the capture, compression, and transmission programs that run on the video monitoring terminal.

Building a Linux Platform Based on the S3C2410

Linux has many advantages: open source code; a powerful kernel that supports multiple users, threads, and processes; good real-time performance; powerful, stable, and customizable functionality; and support for multiple architectures.

To build an embedded Linux development platform, you must first set up a cross-compilation environment, as shown in Figure 4. A complete cross-compilation environment includes a host and a target machine. In this development, the host is a PC running Red Hat's Fedora Core 2 operating system, and the target is the S3C2410-based video monitoring terminal. The cross compiler is GCC 3.3.4 for ARM, and the embedded Linux kernel source package is version 2.6.8-rc.

The 2.6.8-rc Linux kernel source package contains all functional modules, of which the system uses only a part. Before compiling the kernel, therefore, it must be configured to remove the redundant modules so that the customized kernel matches the system design. The procedure is as follows:

(1) Run make menuconfig to configure the kernel: select the YAFFS file system and enable NFS boot. The system uses a USB camera, so enable the USB device support modules, including USB file-system support and the USB host controller driver. The camera is also a video device, so enable the Video4Linux module to let applications access it.

(2) Run make dep to generate the dependencies between the kernel source files.

(3) Run make zImage to generate the kernel image file.

(4) Run make modules and make modules_install to build and install the loadable modules.

The zImage kernel image generated in this way is then downloaded to the Flash of the target platform.

This design uses an external USB camera, which is configured to load as a module during kernel configuration, so the driver must be completed first. The driver must provide the basic I/O interface functions open, read, write, and close, implement interrupt handling and the memory-mapping (mmap) function, and provide ioctl as the control interface for the I/O channel; all of these are registered in a struct file_operations. When an application opens, closes, reads, or writes the device file and the corresponding system call is issued, the embedded Linux kernel reaches the driver's functions through this file_operations structure. The USB driver is then compiled into a dynamically loadable module so that the camera can work properly.
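As a minimal sketch of this registration mechanism (the function bodies and the "usbcam" name are hypothetical stubs, not the authors' actual driver), a character-device driver for a 2.6-series kernel wires its entry points into a file_operations table like this:

    #include <linux/module.h>
    #include <linux/fs.h>

    static int major;

    /* Hypothetical stubs; a real camera driver would talk to the USB
     * subsystem here instead. */
    static int cam_open(struct inode *inode, struct file *file)
    {
        return 0;                 /* claim the device, power it up */
    }

    static ssize_t cam_read(struct file *file, char __user *buf,
                            size_t count, loff_t *ppos)
    {
        return 0;                 /* copy one captured frame to user space */
    }

    static int cam_release(struct inode *inode, struct file *file)
    {
        return 0;                 /* release the device */
    }

    /* The kernel reaches the driver through this table whenever an
     * application calls open/read/close on the device node. */
    static struct file_operations cam_fops = {
        .owner   = THIS_MODULE,
        .open    = cam_open,
        .read    = cam_read,
        .release = cam_release,
    };

    static int __init cam_init(void)
    {
        major = register_chrdev(0, "usbcam", &cam_fops); /* name assumed */
        return (major < 0) ? major : 0;
    }

    static void __exit cam_exit(void)
    {
        unregister_chrdev(major, "usbcam");
    }

    module_init(cam_init);
    module_exit(cam_exit);
    MODULE_LICENSE("GPL");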

Design of the Video Surveillance Terminal Software

The software of the video monitoring terminal is divided by function into three parts: video capture, compression, and transmission. It is developed on the embedded kernel configured above.

(1) Video Capture

The Video4Linux interface functions are used to access the USB camera devices and capture the real-time video stream. First, the v4l_struct data structure is defined, covering basic device information, image attributes, and the attributes of the various signal sources. The capture module collects images from the USB cameras through the USB hub and starts several capture threads, each listening on a different port. As soon as a connection request arrives, the capture thread reads video stream data from the device buffer and places it in the video processing buffer for the next stage.
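As an illustrative sketch, the following user-space fragment grabs one frame through the original Video4Linux (V4L1) API that ships with 2.6.8-era kernels; the device node /dev/video0 and the 320x240 window are assumptions, and the actual pixel format depends on the driver's palette setting:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev.h>           /* V4L1 API, 2.6.8-era kernels */

    int capture_one_frame(unsigned char *frame, int size)
    {
        struct video_capability cap;
        struct video_window win;
        int n;
        int fd = open("/dev/video0", O_RDONLY);  /* device node assumed */

        if (fd < 0)
            return -1;

        /* Query basic device information (name, max resolution, ...). */
        if (ioctl(fd, VIDIOCGCAP, &cap) < 0) {
            close(fd);
            return -1;
        }

        /* Request a 320x240 capture window (sizes assumed). */
        if (ioctl(fd, VIDIOCGWIN, &win) == 0) {
            win.width  = 320;
            win.height = 240;
            ioctl(fd, VIDIOCSWIN, &win);
        }

        /* A blocking read() returns one frame from the device buffer. */
        n = read(fd, frame, size);
        close(fd);
        return n;
    }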

(2) Video Data Compression

In a video monitoring system, a large amount of data must be transmitted over the network. To guarantee transmission quality and real-time delivery, the data must be encoded and compressed before transmission to reduce its volume; this paper adopts the MPEG-4 coding standard. The open-source xvidcore library, which can be downloaded from the network, serves as the core video compression algorithm. xvidcore is efficient and highly portable multimedia encoding software: it is cross-compiled on the PC and the generated files are copied to the target system.
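To make the encoding step concrete, here is a minimal sketch of one encode call using the xvid 1.x API (xvid_encore with XVID_ENC_ENCODE). It assumes enc_handle was created beforehand with xvid_global(XVID_GBL_INIT) and xvid_encore(XVID_ENC_CREATE), and that the input is a contiguous I420 buffer; the field names should be checked against the xvid.h of the version actually used:

    #include <string.h>
    #include <xvid.h>                     /* xvidcore 1.x public header */

    /* Encode one I420 frame; returns the bitstream length in bytes or a
     * negative xvid error code. Width is assumed to equal the Y stride. */
    int encode_frame(void *enc_handle, unsigned char *yuv, int width,
                     unsigned char *out, int out_size)
    {
        xvid_enc_frame_t frame;

        memset(&frame, 0, sizeof(frame));
        frame.version         = XVID_VERSION;
        frame.bitstream       = out;           /* compressed output buffer */
        frame.length          = out_size;
        frame.input.csp       = XVID_CSP_I420; /* planar YUV 4:2:0 input */
        frame.input.plane[0]  = yuv;           /* contiguous Y/U/V buffer */
        frame.input.stride[0] = width;
        frame.type            = XVID_TYPE_AUTO; /* encoder picks I/P frames */
        frame.quant           = 0;              /* 0 = rate-controlled quant */

        return xvid_encore(enc_handle, XVID_ENC_ENCODE, &frame, NULL);
    }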

(3) Video Data Transmission

The transmission module delivers the compressed video to the remote PC client. Video stream transmission is based on the TCP/IP protocol suite, with the standard RTP protocol used to carry the video; RTP is currently the best solution to the problem of real-time streaming media transmission. For real-time streaming programming on the Linux platform, open-source RTP libraries such as LIBRTP and JRTPLIB are available. A simple handshake protocol is defined: the program on the PC side continuously sends request packets to the collection terminal, and the terminal packs the captured images and returns them to the host. Each RTP packet is encapsulated in a UDP segment, which is in turn encapsulated in an IP packet and sent. The receiver reassembles the received data frames and restores the video data.
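To illustrate the "RTP packet inside a UDP segment" layering, the sketch below builds the fixed 12-byte RTP header of RFC 3550 by hand and sends one compressed frame with sendto(); a real implementation would use an RTP library such as JRTPLIB as described above, and the payload type 96 and SSRC value here are arbitrary assumptions:

    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Fixed 12-byte RTP header (RFC 3550), fields in network byte order. */
    struct rtp_header {
        unsigned char  vpxcc;      /* version, padding, extension, CSRC count */
        unsigned char  m_pt;       /* marker bit + payload type */
        unsigned short seq;        /* sequence number */
        unsigned int   timestamp;
        unsigned int   ssrc;       /* synchronization source identifier */
    };

    /* Wrap one compressed frame in an RTP header and send it over UDP. */
    int rtp_send(int sock, const struct sockaddr_in *dst,
                 unsigned short seq, unsigned int ts,
                 const unsigned char *payload, int len)
    {
        unsigned char pkt[1500];           /* stay under the Ethernet MTU */
        struct rtp_header hdr;

        if (len > (int)(sizeof(pkt) - sizeof(hdr)))
            return -1;                     /* frame must be fragmented first */

        hdr.vpxcc     = 0x80;              /* version 2, no padding/extension */
        hdr.m_pt      = 96;                /* dynamic payload type (assumed) */
        hdr.seq       = htons(seq);
        hdr.timestamp = htonl(ts);
        hdr.ssrc      = htonl(0x12345678); /* arbitrary SSRC for illustration */

        memcpy(pkt, &hdr, sizeof(hdr));
        memcpy(pkt + sizeof(hdr), payload, len);

        return sendto(sock, pkt, sizeof(hdr) + len, 0,
                      (const struct sockaddr *)dst, sizeof(*dst));
    }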

Conclusion

This paper has introduced the design of an ARM-based video monitoring system and discussed its hardware and software design using a software compression algorithm. Compared with other video surveillance systems on the market, this system has a short development cycle and a low price, making it suitable for scenarios with modest requirements on video image quality.
