Video Acquisition Based on Embedded Linux and the S3C2410 Platform


With the rapid development of multimedia and network technology and the arrival of the post-PC era, it has become practical to use embedded systems for remote video surveillance, videophones, and video conferencing. For all of these applications, acquiring video data in real time is an essential first step. This article shows how, on an embedded Linux platform, the Video4Linux kernel application programming interface can be used to capture single frames and continuous video frames and save them as files for further video processing and network transmission.
1. Hardware of the System Platform
The hardware of the system platform is shown in Figure 1. The platform is built around Samsung's S3C2410 processor, a 32-bit microcontroller integrating an ARM920T core, 16 KB of instruction cache and 16 KB of data cache, an LCD controller, an SDRAM controller, a NAND flash controller, 3 UARTs, 4 DMA channels, 4 timers with PWM, parallel I/O ports, an 8-channel 10-bit ADC, a touch-screen interface, I2C and I2S interfaces, 2 USB interface controllers, and 2 SPI ports; its clock speed reaches up to 203 MHz. Around these on-chip resources the platform adds 16 MB of 16-bit flash and 64 MB of 32-bit SDRAM, a network port expanded through a DM9000E Ethernet controller chip, and a USB host interface. A USB camera attached to this host interface delivers the collected video image data into an input buffer, from which the data can be saved as a file, or processed directly by an image-processing program ported to the platform, then compressed into UDP packets and finally sent to the Internet through the network interface. This article discusses only the concrete implementation of the video capture stage.
2. Software of the System Platform
2.1 Linux and Embedded Systems
Linux offers a small, efficient kernel, open source code, and network support built directly into the kernel. Embedded systems, however, have limited hardware resources, so a stock Linux system cannot be used directly as the operating system: it must be customized for the specific application by configuring the kernel and trimming the shell and the embedded C library, so that the whole system fits into a small-capacity flash memory. Linux's dynamic module loading makes such trimming extremely convenient, and its highly modular components make additions easy. Thanks to these advantages, the platform in this article runs a customized Linux, armlinux, which enables the MMU (Memory Management Unit) and targets MMU-capable processors.
2.2 Development Environment Establishment
The vast majority of Linux software development is carried out natively, that is, developed, debugged, and run on the same machine. This method usually does not suit embedded development, because the embedded system platform lacks the resources to run development and debugging tools locally. Software development for embedded systems therefore adopts cross-compilation and cross-debugging: the environment is built on the host machine (the PC connected through the serial port, as shown in Figure 1), while the development board it targets is called the target board (here, the embedded arm2410 system).
Generally, the host machine and the target board have different processors: the host usually has an Intel processor, while the target board in Figure 1 uses a Samsung S3C2410. Programs must therefore be built with a processor-specific compiler that generates code for the target platform; the GNU compiler provides this capability, letting you select the host and target machines required for development. The first step in embedded development is to set up a PC as the host machine and install the required operating system on it; for embedded Linux, that means installing Linux on the host PC and then building the cross-compilation and debugging environment there. The detailed setup of that environment is not discussed here. In this article, the video capture program is written on the host in highly portable C, compiled and linked into executable code with the cross-compilation toolchain, and finally ported to the target platform.
3. Video Capture Implementation
As mentioned above, armlinux runs on the system platform. With the MMU enabled, the system runs in protected mode, so applications cannot directly read and write the I/O regions of peripherals (I/O ports and I/O memory); this work must normally be done inside the kernel through a peripheral driver. Video capture on this system is therefore implemented in two steps: writing the kernel driver for the USB camera, and writing the upper-layer application that obtains the video data. This article focuses on the latter.
3.1 USB camera driver implementation
In Linux, a device driver can be seen as the interface between the kernel and an external device. The driver shields the application from hardware implementation details, so the application can operate on the device as if it were an ordinary file: the same standard system-call interface used on files applies to the hardware device for open, close, read, write, and I/O control operations, and the driver's main task is to implement these system calls. The embedded armlinux system on this platform does not differ fundamentally from desktop Linux in kernel functionality, so the tasks the driver must implement are the same; only the compiler and certain processor-specific header and library files used during compilation differ, and these can be specified in the Makefile.
Video4Linux (V4L for short) is the Linux kernel driver framework for video devices, providing a set of interface functions for video-device application programming; the devices covered include popular TV cards, video capture cards, and USB cameras. For a USB camera, the driver must provide the basic I/O interface functions open, read, write, and close; implement interrupt handling, memory mapping, and ioctl, the control interface for the I/O channel; and register them in a struct file_operations. When an application calls open, close, read, or write on the device file, the kernel dispatches to the driver's functions through this file_operations structure; for example, a read on the device file invokes the read function in the structure. To drive the USB camera on this platform, first compile the USB controller driver statically into the kernel so that the platform supports the USB interface; then, when the camera is to be used, dynamically load the camera driver module with insmod. Once the camera works normally, programming of the video-stream capture can proceed.
3.2 Camera capture programming with Video4Linux
After the USB camera is driven, what remains is to write the application for video-stream capture. Following the usual embedded development workflow, the application is first written on the host machine, then compiled and linked with the cross compiler into an executable for the target platform; the host communicates with the target board through a print terminal for cross-debugging, and after successful debugging the program is ported to the target platform. The following describes how the capture program is written on a host machine running Linux.
(1) Data structures defined in the program
struct video_capability grab_cap;
struct video_picture grab_pic;
struct video_mmap grab_buf;
struct video_mbuf grab_vm;
These data structures are all provided by Video4Linux; their uses are as follows:
* video_capability holds basic information about the camera, such as the device name, the maximum and minimum supported resolutions, and the signal sources; its member variables include name[32], maxwidth, maxheight, minwidth, minheight, channels (number of signal sources), and type;
* video_picture holds the attributes of the image collected by the device, such as brightness, hue, contrast, whiteness, and depth;
* video_mmap is used for memory mapping;
* video_mbuf describes the frame buffer mapped via mmap(), i.e., the camera's frame buffer in memory, including size (buffer size), frames (maximum number of frames supported), and offsets (the offset of each frame from the base address).
The main system calls used in the program are: open("/dev/video0", int flags), close(fd), mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset), munmap(void *start, size_t length), and ioctl(int fd, int cmd, ...).
As mentioned above, Linux treats a device as a device file; in user space, the standard I/O system calls operate on that file, achieving communication and interaction with the device. The device driver, of course, must provide the corresponding support for these calls. The ioctl(int fd, int cmd, ...) function deserves a note here: it controls the device's I/O channel from the user program, where fd is the device file descriptor, cmd is the control command sent to the device, and the ellipsis stands for an optional argument, typically a pointer whose type and length depend on the command.
(2) Implementation of the capture procedure
First, open the video device; the camera's device file in the system is /dev/video0. The system call grab_fd = open("/dev/video0", O_RDWR) returns a file descriptor for the opened device (-1 on error), through which subsequent calls operate on the device file. Next, ioctl(grab_fd, VIDIOCGCAP, &grab_cap) reads the camera information into struct video_capability; after the call returns successfully, the information has been copied from kernel space into the grab_cap members in user-program space, and printf can display them, for example printf("maxheight = %d", grab_cap.maxheight) prints the maximum vertical resolution. Then ioctl(grab_fd, VIDIOCGPICT, &grab_pic) reads the video_picture information from the camera buffer. This information can be changed from a user-space program: first assign a new value to the member, then issue the VIDIOCSPICT ioctl, for example:
grab_pic.depth = 3;
if (ioctl(grab_fd, VIDIOCSPICT, &grab_pic) < 0) {
    perror("VIDIOCSPICT");
    return -1;
}
After the device is initialized, video images can be captured in either of two ways: read() or mmap() memory mapping. read() transfers data through the kernel buffer, while mmap() maps the device file into memory, bypassing the kernel buffer; since even the fastest disk access is usually slower than the slowest memory access, mmap() speeds up I/O. In addition, the mmap() system call lets processes map the same file to implement shared memory: each process accesses the file as ordinary memory, using only a pointer rather than file-operation calls. Because of these advantages, the program implementation uses memory mapping (mmap()).
Capturing with mmap() proceeds as follows.
① Use ioctl(grab_fd, VIDIOCGMBUF, &grab_vm) to obtain the frame information of the camera's storage buffer, then modify the settings in video_mmap, for example resetting the vertical and horizontal resolution of the image frames and the color display format, with statements such as:
grab_buf.height = 240;
grab_buf.width = 320;
grab_buf.format = VIDEO_PALETTE_RGB24;
② Next, map the camera's device file into memory with grab_data = (unsigned char *) mmap(0, grab_vm.size, PROT_READ | PROT_WRITE, MAP_SHARED, grab_fd, 0). The contents of the device file are then mapped into a memory region that is readable, writable, and shareable between processes. On success the call returns a pointer to the mapped image memory; on failure it returns -1 (MAP_FAILED).
Single-frame capture and continuous-frame capture are described separately below:
* Single-frame capture. The maximum number of frames (the frames value) in the camera buffer information obtained above is typically 2. For single-frame capture, simply set grab_buf.frame = 0, i.e. capture into the first frame, and call ioctl(grab_fd, VIDIOCMCAPTURE, &grab_buf); if the call succeeds, the device is activated to capture one image, without blocking. Whether the frame has been captured is then determined with ioctl(grab_fd, VIDIOCSYNC, &frame); once this returns successfully, the image data is complete and can be saved as a file.
* Continuous-frame capture. Building on single-frame capture, the grab_vm.frames value sets the number of loop iterations over the frames in the camera's buffer. Inside the loop, the VIDIOCMCAPTURE and VIDIOCSYNC ioctls capture each frame as before, but each captured image must be addressed individually with buf = grab_data + grab_vm.offsets[frame] before being saved to a file. For ongoing capture, add an outer loop that simply resets frame = 0 before rerunning the inner loop.
4. Summary
Finally, I compiled a continuous-frame capture program (taking double-frame capture as an example, saving the result as BMP files) with the cross compiler on the host PC, generated the executable code, and completed the port to the target platform. To observe the captured image quality further, I wrote a simple network communication program using the target platform's network support: the image files captured and saved as BMP are transmitted over the network to a PC for display. After analyzing the results, I returned to the capture program, adjusted the settings in video_picture such as brightness and contrast and the resolution in video_mmap, and re-ported the program until the image quality was optimal.
Applying the methods described in this article, video capture is accomplished on the embedded system platform of Figure 1; combined with the related video processing and network access, this forms a smart terminal device suitable for round-the-clock monitoring in factories, banks, and residential areas, with broad market and application prospects.
