Video4Linux (v4l): a basic tutorial and some experience using cameras

Author: d_south <d_south@163.com>

Blog: http://hi.baidu.com/d_south

Writing Date: 2009

Preface: thoughts and ideas for writing this article

For my graduation project I want to do video-related work on Linux, such as video capture and transmission. Since I am a complete beginner, I started by searching the Internet to see how other people approach it. At first I had no clue, but many articles mention video4linux (v4l), so that seemed like a good place to start. Among the articles I read, the best known is probably "Implementation of USB camera image acquisition based on video4linux", and there is also a series of six articles by Chen Junhong, a preliminary study of video streaming. I also looked at some English material: I read "Video4Linux Programming", but that document focuses on implementing drivers for video devices in the Linux kernel, so while it helps driver writers, it is not much use to application-level users who only need the v4l system calls. The "Video4Linux Kernel API Reference", on the other hand, describes in detail the important structs used by v4l. Following Chen Junhong's articles I also found a piece of software called Movie TV, whose v4l-related source code is well worth reading; it will be mentioned again in later sections. Most of the other articles I found on the Internet either reuse the code introduced by Chen Junhong or the TV software; everyone seems to use it, and to use it well.

 

I am writing this article partly to accumulate material for my graduation thesis, partly to lay out a learning path for people who want to pick up v4l later (the articles mentioned above will certainly help anyone who reads them), and partly in the hope that my own write-up can help anyone who wants to learn, even a little.

The article is divided into three parts:

The first part introduces some basic concepts of v4l and uses its system calls to build a small set of functions that later applications can develop against.

The second part shows how to use this small library through an example program.

The third part briefly discusses how to obtain and handle the image data once it has been collected. There I will mostly share my own understanding and experience; a lot of this information is already on the net, and I only want to organize it.

I feel that the Linux kernel and driver developers have done an excellent job, because they leave us an interface that is easy to use and make the complex underlying work transparent to us. After reading the articles mentioned above, I found using v4l itself relatively easy (I hope readers of this article will feel the same). What is comparatively complicated is what to do after the image data has been collected, and I think many people are not particularly clear about that, so I want to discuss some of those issues in the third part. Of course my own level is limited; please point out any mistakes or misunderstandings in this article. I am very willing to learn and improve together with you.

1. video4linux Basics

1.1 Introduction to v4l and some basic knowledge

I. First, what video4linux (v4l) is.

v4l is the foundation of a number of video systems and video/audio applications on Linux. It is commonly used wherever images need to be captured, such as video surveillance, webcams and video phones, and it is a frequently used system interface in embedded Linux development. It is a programming interface that the Linux kernel exposes to user space: once a driver has been written for a particular video or audio device, the device can be controlled through the system calls that v4l provides. In other words, v4l has two layers: the lower layer consists of the audio/video device drivers inside the kernel, and the upper layer provides the system calls. What we need to do is simply use those system calls.

II. File operations in Linux

How to perform file operations on a Linux system is outside the scope of this document, but you do need to understand the relevant system calls and how to use them: open(), read(), close(), ioctl() and mmap(). Their detailed usage is not described here. Under Linux, devices of all kinds (including video devices) are presented as files; they live in the /dev directory, so essentially, using a peripheral under Linux (provided it has a working driver) is no different from operating on a file.
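
As a tiny illustration of this point, the following sketch (my own example, not part of the library built later) simply opens the camera device file read-write, checks for errors and closes it again, exactly as one would with an ordinary file. The path /dev/video0 is an assumption; adjust it to your system.

/* open_close_demo.c - minimal sketch: treat the camera like an ordinary file */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* the same open() used for regular files */
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }
    printf("device opened, descriptor %d\n", fd);
    close(fd);                               /* and the same close() */
    return 0;
}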

1.2 Creating a simple v4l function library

This section describes how to use v4l by building a small set of functions. It is a set of basic functions rather than a complete library, but it is sufficient to show how v4l is used, and other programs can call these functions as a thin wrapper around the basic v4l operations. This article covers only camera-related programming, and only the most basic and simplest parts of it, so some topics are left out; content related to other video devices (such as capture cards) and to audio devices is not covered either, since I am not very familiar with it.

Here is an overview of the functions to be developed.

The relevant struct and function declarations go into a file named v4l.h, and the function implementations go into a file named v4l.c (which, besides v4l.h, needs the usual headers <stdio.h>, <fcntl.h>, <unistd.h>, <sys/ioctl.h> and <sys/mman.h> for open(), ioctl(), mmap() and so on).

The library starts with the following definitions (i.e. the contents of v4l.h):

#ifndef _V4L_H_
#define _V4L_H_

#include <sys/types.h>
#include <linux/videodev.h>   /* the header that must be included to use v4l */

This header file can be found in /usr/include/linux. It contains the definitions of the various v4l structures and of the ioctl requests used to drive them, so the v4l structs will not be described in full below; consult the file whenever you want the details.

Next come the struct and function declarations. Presenting this much code at once may feel abrupt, but a little explanation will make it clear.

struct _v4l_struct
{
    int fd;                               /* descriptor of the opened video device file */
    struct video_capability capability;  /* this and the following structs are defined by v4l in the header above */
    struct video_picture picture;
    struct video_mmap mmap;
    struct video_mbuf mbuf;
    unsigned char *map;                   /* pointer to the image data */
    int frame_current;
    int frame_using[VIDEO_MAX_FRAME];     /* these two fields implement double buffering */
};

typedef struct _v4l_struct v4l_device;

Regarding the struct defined above: some articles also include a variable for the channel, but a camera normally has only one channel, so setting it is not very meaningful. This article is not meant to be a big, comprehensive, mature library, only an introduction to how v4l is used (and my own level is limited anyway), so this variable and the channel-related functions are omitted.

 

extern int v4l_open(char *, v4l_device *);
extern int v4l_close(v4l_device *);
extern int v4l_get_capability(v4l_device *);
extern int v4l_get_picture(v4l_device *);
extern int v4l_get_mbuf(v4l_device *);
extern int v4l_set_picture(v4l_device *, int, int, int, int, int);
extern int v4l_grab_picture(v4l_device *, unsigned int);
extern int v4l_mmap_init(v4l_device *);
extern int v4l_grab_init(v4l_device *, int, int);
extern int v4l_grab_frame(v4l_device *, int);
extern int v4l_grab_sync(v4l_device *);

#endif /* _V4L_H_ */

These functions will be implemented one by one below, and their purposes explained as we go. Right now the names alone give only a vague idea of what each one does, which may be annoying, but it will all become clear after reading on.

 

As mentioned above, video programming with v4l is essentially no different from ordinary file operations. The general flow is as follows:

1. Open the video device (usually /dev/video0).

2. Obtain device information.

3. Change the device settings as needed.

4. Obtain the captured image data (v4l provides two methods: reading directly from the opened device, or obtaining the data through mmap memory mapping).

5. Process the captured data (for example display it on screen, run image processing on it, or store it as an image file).

6. Close the video device.

Once the flow is clear, we can implement the corresponding functions step by step.

 

First, step 1: opening the video device. For this we implement int v4l_open(char *, v4l_device *);

The function is as follows:

#define DEFAULT_DEVICE "/dev/video0"

int v4l_open(char *dev, v4l_device *vd)
{
    if (!dev)
        dev = DEFAULT_DEVICE;
    if ((vd->fd = open(dev, O_RDWR)) < 0) {
        perror("v4l_open:");
        return -1;
    }
    if (v4l_get_capability(vd))
        return -1;
    if (v4l_get_picture(vd))
        return -1;
    /* the two calls above fetch the device information (see below) */
    return 0;
}

The last step, closing the device, is equally simple: int v4l_close(v4l_device *).

The function is as follows:

int v4l_close(v4l_device *vd)
{
    close(vd->fd);
    return 0;
}

Now for step 2, obtaining the device information. The two functions called inside v4l_open() are defined as follows.

int v4l_get_capability(v4l_device *vd)
{
    if (ioctl(vd->fd, VIDIOCGCAP, &(vd->capability)) < 0) {
        perror("v4l_get_capability:");
        return -1;
    }
    return 0;
}

int v4l_get_picture(v4l_device *vd)
{
    if (ioctl(vd->fd, VIDIOCGPICT, &(vd->picture)) < 0) {
        perror("v4l_get_picture:");
        return -1;
    }
    return 0;
}

Two things in these functions may be unfamiliar: the vd->capability and vd->picture structs, and the key ioctl() call. The behaviour of ioctl() is provided and defined by the driver, in this case by v4l. The request macros VIDIOCGCAP and VIDIOCGPICT ask for the capability and the picture attributes of the video device respectively. The other request macros, and the definitions of video_capability and video_picture themselves, can all be found in /usr/include/linux/videodev.h on your Linux system. For example:

struct video_capability
{
    char name[32];
    int type;
    int channels;   /* num channels */
    int audios;     /* num audio devices */
    int maxwidth;   /* supported width */
    int maxheight;  /* and height */
    int minwidth;   /* supported width */
    int minheight;  /* and height */
};

The capability structure holds the device name, the number of channels, the number of audio devices, and the maximum and minimum supported width and height.

struct video_picture
{
    __u16 brightness;
    __u16 hue;
    __u16 colour;
    __u16 contrast;
    __u16 whiteness;   /* black and white only */
    __u16 depth;       /* capture depth */
    __u16 palette;     /* palette in use */
};

The picture structure holds the brightness, hue, colour, contrast, whiteness, capture depth and palette. The header file also lists the possible palette values, which are not reproduced here.

With the above in hand, the purpose of those two simple functions should be clear: after calling them we have the capability and picture attributes of the video device.
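
As a small illustration (my own sketch, not part of the library), once v4l_open() has succeeded these attributes can be inspected directly:

/* sketch: print a few capability and picture fields after v4l_open() succeeds */
v4l_device vd;

if (v4l_open("/dev/video0", &vd) == 0) {
    printf("device name : %s\n", vd.capability.name);
    printf("max size    : %d x %d\n", vd.capability.maxwidth, vd.capability.maxheight);
    printf("min size    : %d x %d\n", vd.capability.minwidth, vd.capability.minheight);
    printf("depth       : %u bits, palette id %u\n", vd.picture.depth, vd.picture.palette);
    v4l_close(&vd);
}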

One more function of the same kind is needed here:

int v4l_get_mbuf(v4l_device *vd)
{
    if (ioctl(vd->fd, VIDIOCGMBUF, &(vd->mbuf)) < 0) {
        perror("v4l_get_mbuf:");
        return -1;
    }
    return 0;
}

The video_mbuf structure is defined by v4l specifically to support obtaining images via mmap memory mapping; through it you can learn how much memory the camera device uses to store image frames. Its definition is given below, with the use of each field explained in the comments.

struct video_mbuf
{
    int size;                       /* total size of the memory to be mapped */
    int frames;                     /* number of frames the device can hold at the same time */
    int offsets[VIDEO_MAX_FRAME];   /* offset of each frame within the mapped memory */
};
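
To make the meaning of these fields concrete, here is a small sketch of mine (assuming v4l_get_mbuf() succeeds, and that v4l_mmap_init() below has set up the mapping) showing how they locate each frame:

/* sketch: once the memory is mapped, frame i starts at map + offsets[i] */
int i;

v4l_get_mbuf(&vd);
printf("mapped size: %d bytes, %d frame(s)\n", vd.mbuf.size, vd.mbuf.frames);
for (i = 0; i < vd.mbuf.frames; i++)
    printf("frame %d starts at offset %d\n", i, vd.mbuf.offsets[i]);
/* so the data of frame i is vd.map + vd.mbuf.offsets[i] after mmap has been set up */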

Step 3 is to change the device settings as needed. In fact many settings can be changed; as an example, this article changes the picture attributes.

So let's implement the extern int v4l_set_picture(v4l_device *, int, int, int, int, int) function.

int v4l_set_picture(v4l_device *vd, int br, int hue, int col, int cont, int white)
{
    if (br)    vd->picture.brightness = br;
    if (hue)   vd->picture.hue = hue;
    if (col)   vd->picture.colour = col;
    if (cont)  vd->picture.contrast = cont;
    if (white) vd->picture.whiteness = white;

    if (ioctl(vd->fd, VIDIOCSPICT, &(vd->picture)) < 0) {
        perror("v4l_set_picture:");
        return -1;
    }
    return 0;
}

The function above shows how to modify the picture attributes; the core is again an ioctl() request provided by v4l, VIDIOCSPICT, which writes the new values such as brightness and contrast back to the device.
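
For example (my own usage sketch; the value 32000 is arbitrary, and since the attributes are 16-bit the valid range is 0 to 65535), raising only the brightness while leaving everything else untouched could look like this:

/* sketch: pass 0 for the attributes you do not want to change */
if (v4l_set_picture(&vd, 32000, 0, 0, 0, 0) < 0)
    fprintf(stderr, "could not set brightness\n");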

Step 4: obtain the captured image data.

This is the most important step in using v4l and involves writing several functions; after all, v4l exists to obtain images, so this step is critical. Once you have the image data you still need to process it further according to your goal and your actual situation, which is step 5 and is discussed in the third part. Here we only explain how to obtain the captured data.

As described above, there are two ways to obtain an image: reading the device directly, and mmap memory mapping. The latter is the method usually used.

1). Reading the device directly

Reading the device directly is done with the read() function. The previously declared extern int v4l_grab_picture(v4l_device *, unsigned int) does this job, and its implementation is very simple.

 

int v4l_grab_picture(v4l_device *vd, unsigned int size)
{
    /* vd->map must already point to a buffer of at least 'size' bytes */
    if (read(vd->fd, vd->map, size) <= 0)
        return -1;
    return 0;
}

Using this function is straightforward: you just pass the size of one image, and the data pointed to by vd->map is then the image data. The size of the image data has to be calculated from the properties of the device.
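
As an illustration of that calculation (a sketch of mine, assuming a 320x240 capture size and that the depth field of the picture struct is in bits per pixel; in this direct-read mode vd.map has to point to a buffer you allocate yourself):

/* sketch: derive the image size from the chosen resolution and the capture depth
 * (needs <stdlib.h> for malloc()) */
unsigned int width  = 320;
unsigned int height = 240;
unsigned int size   = width * height * vd.picture.depth / 8;   /* bytes per frame */

vd.map = malloc(size);          /* buffer for the direct-read method */
if (vd.map && v4l_grab_picture(&vd, size) == 0) {
    /* one frame of raw image data is now in vd.map */
}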

2). Obtaining images through mmap memory mapping

This method involves the following functions, which work together to complete the image acquisition.

extern int v4l_mmap_init(v4l_device *); maps the camera's image memory into the process, so that the captured image data can be reached simply through the vd->map pointer (details below).

extern int v4l_grab_init(v4l_device *, int, int); performs the initialization needed before capturing.

extern int v4l_grab_frame(v4l_device *, int); starts the capture of one frame. This article uses a common trick: while one frame is being processed, the next frame can already be captured, because the cameras we use can store at least two frames.

extern int v4l_grab_sync(v4l_device *); synchronizes with the capture. It is called after a frame capture has been started, and returns when that frame has been completely captured.

 

The following describes the functions.

 

The mmap() system call lets processes share memory by mapping the same ordinary file. Once a file has been mapped into a process's address space, the process can access it like ordinary memory, without calling read(), write() and so on. When two processes A and B share memory, the same physical memory is mapped into the address space of both; process A immediately sees process B's updates to the shared data, and vice versa.

One obvious advantage of communicating through shared memory is that it reduces I/O operations and improves read efficiency: with mmap, the process reads the memory directly and no extra data copy is needed.

The mmap() prototype is as follows:

void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);

addr: start address of the mapping; usually 0, meaning the system chooses it.

len: size of the mapping. Here it is the size field of the camera's mbuf struct, i.e. the total size of the image data.

prot: access protection of the shared memory: PROT_READ (readable), PROT_WRITE (writable).

flags: set to MAP_SHARED here.

fd: file descriptor of the file being mapped.

offset: offset into the file at which the mapping starts; 0 here.

 

With mmap() understood, we can now look at the previously declared extern int v4l_mmap_init(v4l_device *). The code is given first, then explained.

int v4l_mmap_init(v4l_device *vd)
{
    if (v4l_get_mbuf(vd) < 0)
        return -1;

    vd->map = mmap(0, vd->mbuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, vd->fd, 0);
    if (vd->map == MAP_FAILED) {
        perror("v4l_mmap_init: mmap");
        return -1;
    }
    return 0;
}

This function first calls v4l_get_mbuf(vd) to obtain the key camera parameter, namely the size of the memory to map (vd->mbuf.size), and then calls mmap(). Once our program has called v4l_mmap_init(), the memory pointed to by vd->map is where the captured image data will appear.
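
One detail the simple v4l_close() above does not handle (my own addition, a suggestion rather than part of the original library): a mapping created this way should be released with munmap() when the device is closed, for example:

/* sketch: release the mapping before closing the device */
int v4l_close(v4l_device *vd)
{
    if (vd->map)
        munmap(vd->map, vd->mbuf.size);   /* undo v4l_mmap_init() */
    close(vd->fd);
    return 0;
}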

Initialization before capturing is done by v4l_grab_init(). This function is very simple, so the code can be given directly. vd->frame_using[0] and vd->frame_using[1] are both set to 0 (false), meaning that the capture of neither frame has started yet.

int v4l_grab_init(v4l_device *vd, int width, int height)
{
    vd->mmap.width = width;
    vd->mmap.height = height;
    vd->mmap.format = vd->picture.palette;
    vd->frame_current = 0;
    vd->frame_using[0] = 0;   /* FALSE */
    vd->frame_using[1] = 0;   /* FALSE */
    return v4l_grab_frame(vd, 0);
}

int v4l_grab_frame(v4l_device *vd, int frame)
{
    if (vd->frame_using[frame]) {
        fprintf(stderr, "v4l_grab_frame: frame %d is already in use.\n", frame);
        return -1;
    }

    vd->mmap.frame = frame;
    if (ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap)) < 0) {
        perror("v4l_grab_frame");
        return -1;
    }
    vd->frame_using[frame] = 1;   /* TRUE */
    vd->frame_current = frame;
    return 0;
}

By now this function should look fairly simple. The key step is the call ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap)), which starts the capture of the requested frame; the frame is complete once the matching sync call below returns. The remaining code keeps track of the two frames so they can be used for double buffering.
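
The example program in part 2 keeps its loop simple, but the double-buffering trick mentioned above can be made explicit. Here is a sketch of mine (not from the original library) of an overlapped loop: while one frame is being processed, the capture of the other frame is already running.

/* sketch: overlapped capture; v4l_open(), v4l_mmap_init() and v4l_grab_init()
 * are assumed to have run already (v4l_grab_init() started capturing frame 0) */
int cur = 0;
while (1) {
    v4l_grab_frame(&vd, 1 - cur);    /* start capturing the other frame */
    vd.frame_current = cur;
    v4l_grab_sync(&vd);              /* wait for the frame started earlier */
    /* process vd.map + vd.mbuf.offsets[cur] here */
    cur = 1 - cur;
}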

After a frame capture has been started, the synchronization operation is performed by calling extern int v4l_grab_sync(v4l_device *). The function is as follows:

int v4l_grab_sync(v4l_device *vd)
{
    if (ioctl(vd->fd, VIDIOCSYNC, &(vd->frame_current)) < 0) {
        perror("v4l_grab_sync");
    }
    vd->frame_using[vd->frame_current] = 0;   /* FALSE */
    return 0;
}

When this function returns 0, the frame you asked for has been captured and is ready to use.

Where does the image exist?

After all this, where is the image we obtained from the device? As described above, vd.map points at the start of the mapped image area, and the image just captured starts at vd.map + vd.mbuf.offsets[vd.frame_current]. With vd.frame_current == 0 this is the position of the first frame, and with vd.frame_current == 1 the position of the second frame.
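
In code, getting a pointer to the frame that has just been synchronized is a one-liner (my own sketch):

/* sketch: data of the frame that v4l_grab_sync() just completed */
unsigned char *frame_data = vd.map + vd.mbuf.offsets[vd.frame_current];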

 

2. How to use the above v4l Library

The code above is the whole library; here is some simple code showing how to use it. As mentioned before, the struct and function declarations go into v4l.h and the implementations into v4l.c.

Now we need to use them.

The method is simple: create a file named test.c with contents like the following.

// test.c

#include "v4l.h"

...

v4l_device vd;

int main(void)
{
    v4l_open(DEFAULT_DEVICE, &vd);   /* DEFAULT_DEVICE is "/dev/video0"; make the macro visible here, e.g. by moving it into v4l.h */
    v4l_mmap_init(&vd);
    v4l_grab_init(&vd, 320, 240);
    v4l_grab_sync(&vd);              /* at this point one frame has been captured and is available through vd.map */

    while (1)
    {
        vd.frame_current ^= 1;
        v4l_grab_frame(&vd, vd.frame_current);
        v4l_grab_sync(&vd);

        /* capture frames in a loop and call the image-processing function you design;
         * vd.map + vd.mbuf.offsets[vd.frame_current] is where the current image starts */
        your_image_processing_function(vd.map + vd.mbuf.offsets[vd.frame_current]);
    }
    return 0;
}
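
To try it out, the two files can be compiled together, for example with something like gcc -Wall v4l.c test.c -o test (assuming your kernel headers still provide the old V4L1 linux/videodev.h).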

 

3. Questions about the obtained images

Question: What format is the image data I have obtained?

Answer: The format of the image data differs from camera to camera and can be read from picture.palette. The image may be greyscale, YUV, RGB, JPEG and so on. You then have to process it according to the actual situation and your needs. For example, if you want to display it on an embedded LCD whose framebuffer is RGB24 but the camera delivers YUV, you have to convert it to RGB24. The concrete conversion method can be found online, or you can look at the code of the TV software mentioned earlier.
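
As an example of such a conversion, here is a minimal sketch assuming the camera delivers packed YUYV (YUV 4:2:2, two pixels per four bytes) and that RGB24 output is wanted; the coefficients are the usual floating-point BT.601 approximation, and nothing here is specific to the library above.

/* sketch: convert one YUYV (YUV 4:2:2) image to RGB24, BT.601 approximation */
static unsigned char clamp(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (unsigned char)v);
}

void yuyv_to_rgb24(const unsigned char *yuyv, unsigned char *rgb,
                   int width, int height)
{
    int i;
    for (i = 0; i < width * height / 2; i++) {       /* two pixels per iteration */
        int y0 = yuyv[4 * i + 0];
        int u  = yuyv[4 * i + 1] - 128;
        int y1 = yuyv[4 * i + 2];
        int v  = yuyv[4 * i + 3] - 128;

        rgb[6 * i + 0] = clamp((int)(y0 + 1.402 * v));               /* R */
        rgb[6 * i + 1] = clamp((int)(y0 - 0.344 * u - 0.714 * v));   /* G */
        rgb[6 * i + 2] = clamp((int)(y0 + 1.772 * u));               /* B */

        rgb[6 * i + 3] = clamp((int)(y1 + 1.402 * v));
        rgb[6 * i + 4] = clamp((int)(y1 - 0.344 * u - 0.714 * v));
        rgb[6 * i + 5] = clamp((int)(y1 + 1.772 * u));
    }
}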

 

Question: How do I display or save an image?

Answer: Suppose the image you have captured is RGB24. I have used the SDL library to display it (the most widespread example on the net is probably the software called spcaview; it compresses the image data to JPEG and is often ported to embedded platforms such as ARM). On embedded Linux you can also write directly to the framebuffer to display the image. To save images you can use libjpeg to store them as JPEG files, store the raw data directly, or use some video encoder to save a stream. (I would like to learn the related techniques myself; I don't know much about this, so if you have material on it, please recommend it, I'd like to read it.)
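
Before reaching for SDL or libjpeg, the quickest way to check that the captured data is sane is to dump one RGB24 frame as a PPM file, which any image viewer can open. A minimal sketch of mine, assuming the buffer really is packed RGB24:

/* sketch: dump one RGB24 frame to a binary PPM (P6) file */
#include <stdio.h>

int save_ppm(const char *path, const unsigned char *rgb, int width, int height)
{
    FILE *fp = fopen(path, "wb");
    if (!fp)
        return -1;
    fprintf(fp, "P6\n%d %d\n255\n", width, height);   /* PPM header */
    fwrite(rgb, 3, (size_t)width * height, fp);       /* 3 bytes per pixel */
    fclose(fp);
    return 0;
}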

 

While writing this article I realized how limited I still am: much of it draws on other people's articles, and when I sat at the keyboard there was often not much of my own I could write. I have written only this much because this is all I can write. Experts will probably smile at it; readers who, like me, are still new to this are very welcome to discuss it with me.
