From http://www.xiangb.com/vga/vga_946.html
Video4Linux2 (V4L2) is the second and current version of the Video4Linux (V4L) API. V4L2 is an API for capturing image, video, and audio data under Linux. Together with a suitable capture device and the corresponding driver, it lets applications acquire images, video, and audio. It is widely used in remote conferencing, video telephony, video surveillance systems, and embedded multimedia terminals.
I. Video for Linux two
In Linux, all peripherals are treated as special files, known as device files, which can be read and written much like ordinary files. A camera driven by V4L2 typically appears as /dev/v4l/video0; for convenience, a symbolic link /dev/video0 is often created to it. V4L2 supports two ways of capturing images: memory mapping (mmap) and direct read (read). V4L2 defines its important data structures in include/linux/videodev2.h; during capture, the final image data is obtained by operating on these structures. V4L2 support can be enabled when the Linux kernel is configured and compiled, and the interface is available by default. V4L2 has been part of the Linux kernel since the 2.5.x series.
The V4L2 specification defines not only common API elements, image formats, and input/output methods, but also a series of interfaces for Linux kernel drivers that handle video. These interfaces mainly include:
Video capture interface;
Video output interface;
Video overlay/preview interface;
Video output overlay interface;
Codec interface.
II. How applications capture video through V4L2
V4L2 supports both memory mapping (mmap) and direct read (read) for data capture. The former is generally used for continuous video capture, while the latter is often used for still-image capture. This article focuses on video capture in memory-mapping mode.
An application captures video through the V4L2 interface in five steps:
First, open the video device file, initialize the capture parameters, and use the V4L2 interface to set the capture window and the size and format of the captured frames;
Second, request several frame buffers for capture and map them from kernel space into user space, so the application can read and process the video data;
Third, enqueue the requested frame buffers in the capture input queue and start capturing;
Fourth, the driver begins filling buffers with video data; the application dequeues a frame buffer from the capture output queue, processes it, and enqueues it again in the input queue, capturing continuous video data in a loop;
Fifth, stop capturing.
1. Initializing the video capture parameters
In Linux, the camera hardware is mapped to the device file /dev/video0. Open it with the open() function to obtain a file descriptor fd_v4l2, then initialize the capture parameters through that descriptor.
(1) Set the capture window parameters
Setting the capture window means selecting a capture region within the range the camera supports. This is done by filling in a struct v4l2_crop, which consists of an enum v4l2_buf_type type and a struct v4l2_rect c describing the type and size of the capture window. Set type to the capture type V4L2_BUF_TYPE_VIDEO_CAPTURE. c describes the window geometry: its members left and top give the starting horizontal and vertical coordinates of the capture area, and width and height give the width and height of the captured image. Once the structure is filled in, apply it to fd_v4l2 with the ioctl function.
struct v4l2_crop {
    enum v4l2_buf_type type;
    struct v4l2_rect   c;
};
(2) Set the frame format and size
Fill in a struct v4l2_format, which consists of type and the union fmt, and describes the current behavior and data format of the video device.
Set type to the capture type V4L2_BUF_TYPE_VIDEO_CAPTURE, which selects a video capture stream buffer. Within fmt, pix is a struct v4l2_pix_format that describes the image format; several of its fields must be set: pixelformat gives the capture format, here V4L2_PIX_FMT_YUV420; width and height give the image width and height in pixels; sizeimage gives the storage space the image occupies, in bytes; and bytesperline gives the number of bytes per line. Once the structure is filled in, apply it to fd_v4l2 with the ioctl function.
struct v4l2_format {
    enum v4l2_buf_type type;
    union {
        struct v4l2_pix_format pix;    /* V4L2_BUF_TYPE_VIDEO_CAPTURE */
        struct v4l2_window     win;    /* V4L2_BUF_TYPE_VIDEO_OVERLAY */
        __u8 raw_data[200];            /* user-defined */
    } fmt;
};
(3) Set the video capture frame rate
The struct v4l2_streamparm describes the attributes of the video stream. It consists of type and the union parm. type is set as above; because V4L2_BUF_TYPE_VIDEO_CAPTURE is selected, only the struct v4l2_captureparm capture inside parm needs to be set. Within it, the struct v4l2_fract timeperframe gives the average time each frame occupies, determined by its numerator and denominator fields: the duration is numerator/denominator seconds. capturemode selects the capture mode; the value 1 selects high-quality still capture, and it is normally set to 0. Once the structure is filled in, apply it to fd_v4l2 with the ioctl function.
struct v4l2_streamparm {
    enum v4l2_buf_type type;
    union {
        struct v4l2_captureparm capture;
        struct v4l2_outputparm  output;
        __u8 raw_data[200];            /* user-defined */
    } parm;
};
2. Requesting and setting up the frame buffers for video capture
Initialization only establishes the format and size of a single frame; capturing continuous video data requires a queue of frame buffers, i.e., the driver must allocate several frame buffers in memory to store the video data.
The application uses the VIDIOC_REQBUFS ioctl provided by the API to request a number of frame buffers for video data, generally no fewer than three. Each frame buffer stores one frame of video data; these buffers reside in kernel space.
The application then uses the VIDIOC_QUERYBUF ioctl to query the length and offset of each frame buffer in kernel space.
Finally, the application maps the kernel-space frame buffer addresses into user-space addresses via memory mapping (mmap), so the data in the frame buffers can be processed directly.
(1) Enqueue the frame buffers in the capture input queue and start capturing
The driver maintains two queues while processing video: an incoming queue and an outgoing queue. The former holds empty buffers waiting for the driver to fill with video data; the latter holds buffers the driver has already filled (see Figure 1).
The application enqueues the frame buffers described above in the capture input queue with VIDIOC_QBUF, and then starts video capture.
(2) Capture continuous video data in a loop
Once capture starts, the driver begins acquiring a frame of data and stores it in the first frame buffer of the capture input queue. When that buffer holds a complete frame, the driver moves it to the capture output queue and waits for the application to take it. The driver then acquires the next frame into the second buffer; when it in turn is full, it too is moved to the capture output queue.
The application dequeues a frame buffer containing video data from the capture output queue and processes the data, for example by storing or compressing it.
Finally, the application returns the processed frame buffer to the capture input queue, so data can be captured in a continuous loop, as shown in Figure 1.
Figure 1 Video Capture input and output queues
(3) Stop capture and release the frame buffers
3. Procedure and related APIs for video capture with V4L2
Video capture under V4L2 is implemented by opening the video device, setting the video format, starting capture, processing video data in a loop, stopping capture, and closing the device. The general procedure is as follows:
(1) Open the video device file: int fd_v4l = open("/dev/video0", O_RDWR);
(2) Query the device's capabilities, such as video input or audio input/output: ioctl(fd_v4l, VIDIOC_QUERYCAP, &cap);
(3) Set the video capture parameters:
Set the video standard (PAL/NTSC): ioctl(fd_v4l, VIDIOC_S_STD, &std_id);
Set the size of the capture window: ioctl(fd_v4l, VIDIOC_S_CROP, &crop);
Set the frame format, including pixel format, width, and height: ioctl(fd_v4l, VIDIOC_S_FMT, &fmt);
Set the frame rate: ioctl(fd_v4l, VIDIOC_S_PARM, &parm);
Set the video rotation mode: ioctl(fd_v4l, VIDIOC_S_CTRL, &ctrl);
(4) Request frame buffers for the video stream from the driver:
Request several frame buffers, generally no fewer than three: ioctl(fd_v4l, VIDIOC_REQBUFS, &req);
Query the length and offset of each frame buffer in kernel space: ioctl(fd_v4l, VIDIOC_QUERYBUF, &buf);
(5) Map the frame buffer addresses into user space via memory mapping, so the captured frames can be manipulated directly without copying:
buffers[i].start = mmap(NULL, buffers[i].length, PROT_READ | PROT_WRITE, MAP_SHARED, fd_v4l, buffers[i].offset);
(6) Enqueue all the requested frame buffers in the capture input queue, ready to receive captured data: ioctl(fd_v4l, VIDIOC_QBUF, &buf);
(7) Start video stream capture: ioctl(fd_v4l, VIDIOC_STREAMON, &type);
(8) The driver stores a captured frame of video data in the first frame buffer of the input queue; once the frame is complete, the buffer is moved to the capture output queue.
(9) The application dequeues a frame buffer containing captured data from the capture output queue with ioctl(fd_v4l, VIDIOC_DQBUF, &buf) and processes the raw video data in it.
(10) After processing, the application re-enqueues the frame buffer in the input queue with ioctl(fd_v4l, VIDIOC_QBUF, &buf), so data can be captured in a loop.
Steps 8 to 10 repeat until capture is stopped.
(11) Stop video capture: ioctl(fd_v4l, VIDIOC_STREAMOFF, &type);
(12) Release the requested frame buffers with munmap and close the video device file with close(fd_v4l).
The procedure above captures continuous video data. In practice, the video data usually also needs to be processed further (for example, compressed or encoded); otherwise the raw video stream is large and requires considerable storage space and transmission bandwidth.
Throughout this process, each frame buffer carries a status flags variable in which each bit represents a state:
V4L2_BUF_FLAG_MAPPED  0x0001
V4L2_BUF_FLAG_QUEUED  0x0002
V4L2_BUF_FLAG_DONE    0x0004
When none of these bits is set, the buffer is unmapped and idle.
The buffer state transitions are shown in Figure 2.
Figure 2 State transitions of the frame buffer status flags
III. Conclusion
V4L2 is a set of specifications (APIs) for developing video capture device drivers under Linux. It provides a unified programming interface under which all video capture device drivers are managed. V4L2 not only greatly simplifies driver development, it also eases the writing and porting of applications, and therefore has broad practical value.