Preparations and basic framework for porting a camera driver


 

Video Monitoring:

1. Install the SDL library in Ubuntu

2. Run and interpret the source program

3. Video monitoring between two PCs

4. USB video driver porting on the development board

5. Video monitoring between the development board and the PC

 

SDL Library: short for Simple DirectMedia Layer. It provides direct access to the underlying multimedia hardware and framebuffer interfaces. It supports Windows and Linux and has bindings for many languages.
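For orientation, here is a minimal sketch (not part of the original monitoring program) of the SDL 1.2 calls used to obtain a drawing surface; the captured video frames described later could be blitted onto such a surface:

/* Minimal SDL 1.2 initialization sketch; the window size and delay are arbitrary choices. */
#include <SDL/SDL.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)       /* initialize the video subsystem */
        return -1;
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 24, SDL_SWSURFACE); /* 640x480, 24 bpp surface */
    if (screen == NULL) {
        SDL_Quit();
        return -1;
    }
    SDL_Delay(2000);                        /* keep the surface up briefly */
    SDL_Quit();                             /* release SDL resources */
    return 0;
}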

 

SDL installation: copy SDL-1.2.14.tar to Linux, unpack it, enter the source directory, run ./configure to generate the Makefile, then run make (and make install).

 

/* 1) Data structures of v4l */

 

/* The following data structures are defined in the video4linux API. For full definitions, refer to the v4l API documentation. Here we describe only the structures that are frequently used in programming. */

 

/* First, we define a data structure that describes the device. It contains the v4l data structures we will use: */

typedef struct _v4ldevice
{
    int fd;                              /* file descriptor of the opened device */
    struct video_capability capability;
    struct video_channel channel[10];
    struct video_picture picture;
    struct video_clip clip;
    struct video_window window;
    struct video_capture capture;
    struct video_buffer buffer;
    struct video_mmap mmap;
    struct video_mbuf mbuf;
    struct video_unit unit;
    unsigned char *map;                  /* start address of the data when mmap is used to obtain it */
    pthread_mutex_t mutex;
    int frame;
    int framestat[2];
    int overlay;
} v4ldevice;

 

 

 

/* The following describes the data structures contained in the structure above. They are all defined in the v4l header files. */

/** struct video_capability */
/* name[32]   canonical name for this interface */
/* type       type of interface */
/* channels   number of radio/TV channels if appropriate */
/* audios     number of audio devices if appropriate */
/* maxwidth   maximum capture width in pixels */
/* maxheight  maximum capture height in pixels */
/* minwidth   minimum capture width in pixels */
/* minheight  minimum capture height in pixels */

 

 

 

/* In the program, this structure is obtained by issuing the VIDIOCGCAP control command with the ioctl function, which is used to read and write device parameters. Using ioctl is somewhat involved and is not described in detail here. The following code retrieves the structure: */

int v4lgetcapability(v4ldevice *vd)
{
    if (ioctl(vd->fd, VIDIOCGCAP, &(vd->capability)) < 0) {
        v4lperror("v4lopen: VIDIOCGCAP");
        return -1;
    }
    return 0;
}
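As an illustration of the fields listed above (not code from the original article), the capability returned by v4lgetcapability can be examined like this, assuming a v4ldevice variable named device as in the sample program further below; VID_TYPE_CAPTURE is the V4L1 flag indicating a device that can capture to memory:

/* Illustrative use of the video_capability fields; error handling is minimal. */
if (v4lgetcapability(&device) == 0) {
    printf("device: %s, max %dx%d, min %dx%d\n",
           device.capability.name,
           device.capability.maxwidth, device.capability.maxheight,
           device.capability.minwidth, device.capability.minheight);
    if (!(device.capability.type & VID_TYPE_CAPTURE)) {
        /* the device cannot capture to memory */
    }
}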

/** struct video_picture */
/* brightness  picture brightness */
/* hue         picture hue (colour only) */
/* colour      picture colour (colour only) */
/* contrast    picture contrast */
/* whiteness   the whiteness (greyscale only) */
/* depth       the capture depth (may need to match the frame buffer depth) */
/* palette     reports the palette that should be used for this image */

/* This structure mainly describes image attributes such as brightness and contrast. It is obtained by issuing the VIDIOCGPICT control command through ioctl. */
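The article does not include the corresponding helper; a minimal sketch in the same style as v4lgetcapability (the function name v4lgetpicture is an assumption, not part of the original source) could look like this:

int v4lgetpicture(v4ldevice *vd)
{
    /* illustrative helper: fill vd->picture with the current image attributes */
    if (ioctl(vd->fd, VIDIOCGPICT, &(vd->picture)) < 0) {
        v4lperror("v4lgetpicture: VIDIOCGPICT");
        return -1;
    }
    return 0;
}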

 

/** struct video_mbuf */
/* size     the number of bytes to map */
/* frames   the number of frames */
/* offsets  the offset of each frame */

 

/* This data structure is very important when obtaining data with mmap: */

/* size indicates the size of the memory to map; a single 640*480 colour frame occupies 640*480*3 bytes */

/* frames indicates the number of frames */

/* offsets gives the offset of each frame within the mapped memory; it is used to compute the address of each frame's image data */

 

 

 

/* The VIDIOCGMBUF command of ioctl is used to obtain this structure. Source code: */

int v4lgetmbuf(v4ldevice *vd)
{
    if (ioctl(vd->fd, VIDIOCGMBUF, &(vd->mbuf)) < 0) {
        v4lperror("v4lgetmbuf: VIDIOCGMBUF");
        return -1;
    }
    return 0;
}

/* The data address can be calculated as follows: */
unsigned char *v4lgetaddress(v4ldevice *vd)
{
    return (vd->map + vd->mbuf.offsets[vd->frame]);
}

 

/* The following is the simplest way to obtain a continuous stream of images (to keep it simple, some attribute operations are omitted): */

char *devicename = "/dev/video0";
char *buffer;
v4ldevice device;
int width = 640;
int height = 480;
int frame = 0;

v4lopen("/dev/video0", &device);             // open the device
v4lgrabinit(&device, width, height);         // initialize the device and set the capture size
v4lmmap(&device);                            // map the device memory
v4lgrabstart(&device, frame);                // start capturing a frame
while (1) {
    v4lsync(&device, frame);                 // wait for the current frame to finish
    frame = (frame + 1) % 2;                 // index of the next frame
    v4lcapture(&device, frame);              // start capturing the next frame
    buffer = (char *)v4lgetaddress(&device); // get the address of the finished frame
    // buffer points to the start of the image; display it or save it ......
    // the image size is width * height * 3
    ..........................
}

 

/* To help understand the code above, the function implementations are given here. To keep them simple, all error handling is removed. */

int v4lopen(char *name, v4ldevice *vd)
{
    if ((vd->fd = open(name, O_RDWR)) < 0) {
        return -1;
    }
    if (v4lgetcapability(vd))
        return -1;
    return 0;
}

int v4lgrabinit(v4ldevice *vd, int width, int height)
{
    vd->mmap.width = width;
    vd->mmap.height = height;
    vd->mmap.format = vd->picture.palette;
    vd->frame = 0;
    vd->framestat[0] = 0;
    vd->framestat[1] = 0;
    return 0;
}

int v4lmmap(v4ldevice *vd)
{
    if (v4lgetmbuf(vd) < 0)
        return -1;
    if ((vd->map = mmap(0, vd->mbuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, vd->fd, 0)) == MAP_FAILED) {
        return -1;
    }
    return 0;
}

int v4lgrabstart(v4ldevice *vd, int frame)
{
    vd->mmap.frame = frame;
    if (ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap)) < 0) {
        return -1;
    }
    vd->framestat[frame] = 1;
    return 0;
}

int v4lsync(v4ldevice *vd, int frame)
{
    if (ioctl(vd->fd, VIDIOCSYNC, &frame) < 0) {
        return -1;
    }
    vd->framestat[frame] = 0;
    return 0;
}

int v4lcapture(v4ldevice *vd, int frame)
{
    vd->mmap.frame = frame;
    if (ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap)) < 0) {
        return -1;
    }
    vd->framestat[frame] = 1;
    return 0;
}
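The sample above never unmaps the buffer or closes the device; a minimal teardown in the same style (the function name v4lclose is an assumption, not part of the original source) could be:

int v4lclose(v4ldevice *vd)
{
    /* illustrative cleanup: release the mmap'ed capture area and close the device */
    if (vd->map != NULL)
        munmap(vd->map, vd->mbuf.size);
    return close(vd->fd);
}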

 

V4L2 Development Process

 

General operating procedure for a video device (a code sketch of steps 1-3 follows the list):

 

1. Open the device file: int fd = open("/dev/video0", O_RDWR);

2. Query the device's capabilities to see what it can do, for example whether it has video capture or audio input/output. VIDIOC_QUERYCAP, struct v4l2_capability

3. Select the video input; a video device can have several inputs. VIDIOC_S_INPUT, struct v4l2_input

4. Set the video standard and frame format, including PAL/NTSC and the frame width and height.

VIDIOC_S_STD, VIDIOC_S_FMT, v4l2_std_id, struct v4l2_format

5. Request frame buffers from the driver, usually no more than five. VIDIOC_REQBUFS, struct v4l2_requestbuffers

6. Map the requested frame buffers into user space so the captured frames can be processed directly, without copying. mmap

7. Queue all the requested frame buffers so they can receive captured data. VIDIOC_QBUF, struct v4l2_buffer

8. Start video capture. VIDIOC_STREAMON

9. Dequeue a frame buffer that holds captured data and process the raw data. VIDIOC_DQBUF

10. Requeue the buffer at the end of the queue so data can be collected in a loop. VIDIOC_QBUF

11. Stop video capture. VIDIOC_STREAMOFF

12. Close the video device: close(fd);
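The sections below give code for most of these steps; steps 1-3 are not shown there, so here is a minimal sketch, assuming input 0 is the camera (the function name and error handling style are not from the original article):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int open_and_select_input(const char *dev)
{
    int fd = open(dev, O_RDWR);                        /* step 1: open the device file */
    if (fd < 0)
        return -1;

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == -1)        /* step 2: query capabilities */
        return -1;
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE))  /* must support video capture */
        return -1;

    int input = 0;                                     /* step 3: select video input 0 */
    if (ioctl(fd, VIDIOC_S_INPUT, &input) == -1)
        return -1;

    return fd;
}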

 

Common structs (see /usr/include/linux/videodev2.h):

 

struct v4l2_requestbuffers reqbufs; // used to request frame buffers from the driver; contains the number requested
struct v4l2_capability cap;         // the device's capabilities, e.g. whether it is a video capture device
struct v4l2_input input;            // video input
struct v4l2_standard std;           // video standard, e.g. PAL or NTSC
struct v4l2_format fmt;             // frame format, e.g. width and height

struct v4l2_buffer buf;             // represents one frame buffer in the driver
v4l2_std_id stdid;                  // video standard id, e.g. V4L2_STD_PAL_B
struct v4l2_queryctrl query;        // describes a control
struct v4l2_control control;        // a specific control value

 

extern int ioctl(int __fd, unsigned long int __request, ...) __THROW;

__fd: the device file descriptor, for example the camerafd returned after the video device is opened with the open function;

__request: the specific command identifier.

 

During V4L2 development, the following command identifiers are commonly used:

 

VIDIOC_REQBUFS: request frame buffers from the driver (memory allocation)
VIDIOC_QUERYBUF: query a buffer allocated by VIDIOC_REQBUFS to obtain its length and offset for mmap
VIDIOC_QUERYCAP: query the driver's capabilities
VIDIOC_ENUM_FMT: enumerate the video formats supported by the current driver
VIDIOC_S_FMT: set the current driver's video capture format
VIDIOC_G_FMT: read the current driver's video capture format
VIDIOC_TRY_FMT: validate a capture format without changing the driver's state
VIDIOC_CROPCAP: query the driver's cropping capability
VIDIOC_S_CROP: set the cropping rectangle of the video signal
VIDIOC_G_CROP: read the cropping rectangle of the video signal
VIDIOC_QBUF: put a buffer back into the driver's incoming queue
VIDIOC_DQBUF: take a filled buffer out of the driver's outgoing queue
VIDIOC_STREAMON: start video capture/streaming
VIDIOC_STREAMOFF: stop video capture/streaming
VIDIOC_QUERYSTD: check the video standards supported by the current input, such as PAL or NTSC

Some of these ioctl calls are required and some are optional.

 

Check the standards supported by the current video device

Cameras in Europe and most of Asia generally use PAL (720x576), while North America and Japan generally use NTSC (720x480). VIDIOC_QUERYSTD is used to detect the standard:

v4l2_std_id std;
int ret;
do {
    ret = ioctl(fd, VIDIOC_QUERYSTD, &std);
} while (ret == -1 && errno == EAGAIN);
switch (std) {
case V4L2_STD_NTSC:
    //......
case V4L2_STD_PAL:
    //......
}

Set the video capture format

After detecting the standards supported by the video device, you also need to set the video capture format:

struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 720;
fmt.fmt.pix.height = 576;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
if (ioctl(fd, VIDIOC_S_FMT, &fmt) == -1) {
    return -1;
}
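The driver may adjust the requested width, height, or format to values it actually supports. The original article does not show this, but a short sketch that reads back the format actually in effect with VIDIOC_G_FMT could look like this:

/* Illustrative read-back of the negotiated capture format. */
struct v4l2_format actual;
memset(&actual, 0, sizeof(actual));
actual.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (ioctl(fd, VIDIOC_G_FMT, &actual) == -1) {
    return -1;
}
printf("capture format: %ux%u, sizeimage %u bytes\n",
       actual.fmt.pix.width, actual.fmt.pix.height, actual.fmt.pix.sizeimage);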

The v4l2_format struct is defined as follows:

struct v4l2_format
{
    enum v4l2_buf_type type;   // data stream type; for capture this is always V4L2_BUF_TYPE_VIDEO_CAPTURE
    union
    {
        struct v4l2_pix_format pix;
        struct v4l2_window win;
        struct v4l2_vbi_format vbi;
        __u8 raw_data[200];
    } fmt;
};

struct v4l2_pix_format
{
    __u32 width;                     // width; must be a multiple of 16
    __u32 height;                    // height; must be a multiple of 16
    __u32 pixelformat;               // pixel format of the stored video data, e.g. YUV 4:2:2 or RGB
    enum v4l2_field field;
    __u32 bytesperline;
    __u32 sizeimage;
    enum v4l2_colorspace colorspace;
    __u32 priv;
};

Allocate memory

Next, you can allocate buffer memory for video capture:

struct v4l2_requestbuffers req;
memset(&req, 0, sizeof(req));
req.count = 4;                           // number of buffers to request (the original snippet left req uninitialized; 4 is a typical value)
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_MMAP;
if (ioctl(fd, VIDIOC_REQBUFS, &req) == -1) {
    return -1;
}

v4l2_requestbuffers is defined as follows:

struct v4l2_requestbuffers
{
    __u32 count;               // number of buffers to allocate, i.e. how many frames the queue holds
    enum v4l2_buf_type type;   // data stream type; always V4L2_BUF_TYPE_VIDEO_CAPTURE here
    enum v4l2_memory memory;   // V4L2_MEMORY_MMAP or V4L2_MEMORY_USERPTR
    __u32 reserved[2];
};

Query, map, and queue the buffers

VIDIOC_REQBUFS gives us req.count buffers. Next, call the VIDIOC_QUERYBUF command for each buffer to obtain its length and offset, use the mmap function to map it to an address in the application's address space, and finally put the buffer into the capture queue:

 

typedef struct videobuffer {
    void *start;
    size_t length;
} videobuffer;

videobuffer *buffers = calloc(req.count, sizeof(*buffers));
struct v4l2_buffer buf;
unsigned int numbufs;

for (numbufs = 0; numbufs < req.count; numbufs++) {
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = numbufs;
    // query the buffer to obtain its length and offset
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) == -1) {
        return -1;
    }

    buffers[numbufs].length = buf.length;
    // map the buffer into the application's address space
    buffers[numbufs].start = mmap(NULL, buf.length,
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED,
                                  fd, buf.m.offset);

    if (buffers[numbufs].start == MAP_FAILED) {
        return -1;
    }

    // put the buffer into the capture queue
    if (ioctl(fd, VIDIOC_QBUF, &buf) == -1) {
        return -1;
    }
}

Video collection methods

The operating system generally divides memory into user space and kernel space, managed by applications and by the operating system respectively. An application can directly access addresses in user space, while kernel space holds code and data for the kernel and cannot be accessed by applications directly. The data captured by V4L2 initially resides in kernel space, which means the user cannot access that memory directly and some mechanism is needed to make it available.

There are three video collection methods: read/write, memory mapping, and user pointers.

The read/write method continuously copies data between kernel space and user space, which occupies a large amount of user memory and reduces efficiency.

Memory mapping: an efficient method that maps the device's memory into the application's address space so the application can work on the device memory directly. The mmap calls above use this method.

User pointer mode: the buffers are allocated by the application itself. In this case, set the memory field of v4l2_requestbuffers to V4L2_MEMORY_USERPTR (a sketch follows).
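The original article only shows the memory mapping path; the following user pointer setup is an illustration based on the V4L2 API, not code from the article. The buffer count and the use of fmt.fmt.pix.sizeimage (filled in by the driver after VIDIOC_S_FMT/VIDIOC_G_FMT) are assumptions:

/* Illustrative user-pointer setup; buffer count and size handling are assumptions. */
struct v4l2_requestbuffers req;
memset(&req, 0, sizeof(req));
req.count = 4;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_USERPTR;           // buffers are allocated by the application
if (ioctl(fd, VIDIOC_REQBUFS, &req) == -1)
    return -1;

// queue one application-allocated buffer
void *userbuf = malloc(fmt.fmt.pix.sizeimage);
struct v4l2_buffer buf;
memset(&buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_USERPTR;
buf.index = 0;
buf.m.userptr = (unsigned long)userbuf;     // pointer to the application's own buffer
buf.length = fmt.fmt.pix.sizeimage;
if (ioctl(fd, VIDIOC_QBUF, &buf) == -1)
    return -1;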

Process the captured data

V4L2 keeps a queue of req.count frame buffers that is used in FIFO order: when the application dequeues a buffer, the queue hands over the oldest captured frame while the driver keeps filling the remaining buffers with new data. (Capture must have been started with VIDIOC_STREAMON before the first dequeue.) This cycle uses two ioctl commands, VIDIOC_DQBUF and VIDIOC_QBUF:

struct v4l2_buffer buf;
memset(&buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = 0;
// dequeue a buffer that holds captured data
if (ioctl(camerafd, VIDIOC_DQBUF, &buf) == -1)
{
    return -1;
}
//............ video processing algorithm goes here
// requeue the buffer so it can be filled again
if (ioctl(camerafd, VIDIOC_QBUF, &buf) == -1) {
    return -1;
}

Close the video device

Use the close function to close the video device:

close(camerafd);
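In a complete program, capture would be stopped and the mapped buffers released before closing. That cleanup is not shown in the original article; the following sketch assumes the req and buffers variables created above:

/* Illustrative cleanup: stop streaming, unmap the buffers, close the device. */
enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(camerafd, VIDIOC_STREAMOFF, &type);        // stop video capture
for (unsigned int i = 0; i < req.count; i++)
    munmap(buffers[i].start, buffers[i].length); // unmap each capture buffer
free(buffers);
close(camerafd);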

 
