First, let's look at the custom data structure for the video device. It is defined in spcav4l.h:
struct vdin {
    int fd;
    char *videodevice;
    struct video_mmap vmmap;        /* memory-mapped capture request */
    struct video_capability videocap;
    int mmapsize;
    struct video_mbuf videombuf;
    struct video_picture videopict;
    struct video_window videowin;
    struct video_channel videochan;
    struct video_param videoparam;
    int cameratype;
    char *cameraname;
    char bridge[9];
    int sizenative;                 /* sizes available in JPEG mode */
    int sizeothers;                 /* sizes available for the other palettes */
    int palette;                    /* available palettes */
    int norme;                      /* video norm, for the spca506 USB video grabber */
    int channel;                    /* input channel, for the spca506 USB video grabber */
    int grabmethod;
    unsigned char *pframebuffer;    /* pointer to the data after mmap */
    unsigned char *ptframe[4];      /* pointers to the converted frames to be sent over the network */
    int framelock[4];               /* marks whether each frame buffer is occupied */
    pthread_mutex_t grabmutex;
    int framesizein;                /* determined during init */
    volatile int frame_cour;        /* index of the frame to be sent to the network */
    int bppin;                      /* color depth */
    int hdrwidth;
    int hdrheight;
    int formatin;                   /* palette */
    int signalquit;
};
The numeric value of each palette is defined in include/linux/videodev.h in the Linux kernel source tree.
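For reference, here is a minimal excerpt of the V4L1 palette constants that this program uses. The first values are copied from the stock V4L1 header; VIDEO_PALETTE_JPEG is not in the stock header but is a driver-side extension added by spca5xx, so its value here is an assumption:

```c
/* Excerpt of the V4L1 palette ids from <linux/videodev.h>. */
#define VIDEO_PALETTE_GREY      1   /* linear greyscale */
#define VIDEO_PALETTE_RGB565    3   /* 16-bit 5:6:5 RGB */
#define VIDEO_PALETTE_RGB24     4   /* 24-bit RGB */
#define VIDEO_PALETTE_RGB32     5   /* 32-bit RGB */
#define VIDEO_PALETTE_YUV420P  15   /* YUV 4:2:0 planar */
/* Not in the stock header: added by the spca5xx driver (value assumed). */
#define VIDEO_PALETTE_JPEG     21
```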
Starting from the main function, let's trace the int format variable.
1.1 Initialize format
int format = VIDEO_PALETTE_YUV420P;
1.2 Select the user-requested format from the command-line arguments
if (strcmp(argv[i], "-F") == 0) {
    if (i + 1 >= argc) {
        printf("No parameter specified with -F, aborting.\n");
        exit(1);
    }
    mode = strdup(argv[i + 1]);
    if (strncmp(mode, "R32", 3) == 0) {
        format = VIDEO_PALETTE_RGB32;
    } else if (strncmp(mode, "R24", 3) == 0) {
        format = VIDEO_PALETTE_RGB24;
    } else if (strncmp(mode, "R16", 3) == 0) {
        format = VIDEO_PALETTE_RGB565;
    } else if (strncmp(mode, "YUV", 3) == 0) {
        format = VIDEO_PALETTE_YUV420P;
    } else if (strncmp(mode, "jpg", 3) == 0) {
        format = VIDEO_PALETTE_JPEG;
    } else {
        format = VIDEO_PALETTE_YUV420P;
    }
}
2. Initialize the device
init_videoin(&videoin, videodevice, width, height, format, grabmethod);
2.1
vd->formatin = format;
vd->bppin = getdepth(vd->formatin);  /* derive vd->bppin (color depth) from the format */
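The body of getdepth() is not shown in this article, so here is a sketch of what such a lookup does, using the standard bits-per-pixel of each format (the exact values and default case in spcav4l.c may differ). The palette ids are redefined locally so the sketch is self-contained:

```c
/* V4L1 palette ids (values from <linux/videodev.h>). */
enum {
    VIDEO_PALETTE_RGB565  = 3,
    VIDEO_PALETTE_RGB24   = 4,
    VIDEO_PALETTE_RGB32   = 5,
    VIDEO_PALETTE_YUV420P = 15,
};

/* Sketch of getdepth(): bits per pixel for each capture palette.
 * YUV 4:2:0 averages 12 bits per pixel (8 luma + 4 shared chroma). */
static int getdepth(int format)
{
    switch (format) {
    case VIDEO_PALETTE_YUV420P: return 12;
    case VIDEO_PALETTE_RGB565:  return 16;
    case VIDEO_PALETTE_RGB24:   return 24;
    case VIDEO_PALETTE_RGB32:   return 32;
    default:                    return 8;   /* assumed fallback, e.g. greyscale */
    }
}
```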
2.2 Enter init_v4l(vd)
2.2.1
probepalette(vd): for each of the five candidate palettes, write it into the video_picture structure with the set ioctl, then read it back with the get ioctl and compare the two values. If the value read back matches the value written, that palette is available.
probesize(vd): same idea; try each of the seven width*height combinations against the video_window structure to check which sizes are available.
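This set-then-get probing idea can be simulated without hardware. In the sketch below, a stub stands in for the VIDIOCSPICT/VIDIOCGPICT ioctl pair: the fake "driver" silently falls back to YUV420P for palettes it does not support, so a palette counts as available only when the value read back equals the value written. All names here are illustrative, not the real spcav4l functions:

```c
/* Illustrative palette ids (VIDEO_PALETTE_JPEG value is a driver
 * extension, assumed here). */
enum { PAL_RGB24 = 4, PAL_YUV420P = 15, PAL_JPEG = 21 };

/* Stub for the driver side: this fake device supports only YUV420P
 * and JPEG, and silently falls back to YUV420P otherwise. */
static int driver_palette = PAL_YUV420P;

static void stub_set_pict(int palette)          /* plays VIDIOCSPICT */
{
    if (palette == PAL_YUV420P || palette == PAL_JPEG)
        driver_palette = palette;               /* accepted as-is */
    else
        driver_palette = PAL_YUV420P;           /* silent fallback */
}

static int stub_get_pict(void)                  /* plays VIDIOCGPICT */
{
    return driver_palette;
}

/* The probe: set the palette, read it back, compare. */
static int palette_available(int palette)
{
    stub_set_pict(palette);
    return stub_get_pict() == palette;
}
```

The same compare-after-readback pattern is what probesize() applies to the video_window structure.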
check_palettesize(vd): first convert the size with int needsize = convertsize(vd->hdrwidth, vd->hdrheight). convertsize() maps width*height to one of seven masks: VGA, PAL, SIF, CIF, QPAL, QSIF, QCIF. These seven macros are defined in spcav4l.h:
#define MASQ 1
#define VGA  MASQ
#define PAL  (MASQ << 1)
#define SIF  (MASQ << 2)
#define CIF  (MASQ << 3)
#define QPAL (MASQ << 4)
#define QSIF (MASQ << 5)
#define QCIF (MASQ << 6)
so needsize ends up as one of these seven values.
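A sketch of what convertsize() does, mapping a width/height pair to one of the seven masks. The dimensions used below are the standard ones for each picture format; the real lookup lives in spcav4l.c and may differ in details:

```c
/* Size masks, as defined in spcav4l.h. */
#define MASQ 1
#define VGA  MASQ
#define PAL  (MASQ << 1)
#define SIF  (MASQ << 2)
#define CIF  (MASQ << 3)
#define QPAL (MASQ << 4)
#define QSIF (MASQ << 5)
#define QCIF (MASQ << 6)

/* Sketch of convertsize(): width x height -> size mask.
 * Dimensions are the standard ones for each format (assumed). */
static int convertsize(int width, int height)
{
    if (width == 640 && height == 480) return VGA;
    if (width == 768 && height == 576) return PAL;
    if (width == 352 && height == 240) return SIF;
    if (width == 352 && height == 288) return CIF;
    if (width == 384 && height == 288) return QPAL;
    if (width == 176 && height == 120) return QSIF;
    if (width == 176 && height == 144) return QCIF;
    return 0;   /* unsupported size */
}
```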
Then int needpalette = 0; needpalette = checkpalette(vd);. Inside checkpalette(vd), convertpalette(vd->formatin) checks whether the palette is available: based on vd->formatin it returns one of the JPEG, YUV420P, RGB24, RGB565, or RGB32 mask bits:
#define JPG     MASQ        /* JPEG, i.e. 1 */
#define YUV420P (MASQ << 1)
#define RGB24   (MASQ << 2)
#define RGB565  (MASQ << 3)
#define RGB32   (MASQ << 4)
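A sketch of the convertpalette() mapping from V4L palette id to internal mask bit, under the same macro layout (the VIDEO_PALETTE_JPEG value is a driver extension, assumed here):

```c
#define MASQ    1
#define JPG     MASQ
#define YUV420P (MASQ << 1)
#define RGB24   (MASQ << 2)
#define RGB565  (MASQ << 3)
#define RGB32   (MASQ << 4)

enum { VIDEO_PALETTE_RGB565 = 3, VIDEO_PALETTE_RGB24 = 4,
       VIDEO_PALETTE_RGB32 = 5, VIDEO_PALETTE_YUV420P = 15,
       VIDEO_PALETTE_JPEG = 21 /* spca5xx extension, value assumed */ };

/* Sketch of convertpalette(): V4L palette id -> internal mask bit. */
static int convertpalette(int formatin)
{
    switch (formatin) {
    case VIDEO_PALETTE_JPEG:    return JPG;
    case VIDEO_PALETTE_YUV420P: return YUV420P;
    case VIDEO_PALETTE_RGB24:   return RGB24;
    case VIDEO_PALETTE_RGB565:  return RGB565;
    case VIDEO_PALETTE_RGB32:   return RGB32;
    default:                    return 0;   /* not handled */
    }
}
```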
Whether the palette is actually usable is then tested according to needpalette. palette = paletteconvert(needpalette) converts the mask back into a V4L palette id; if (palette) is nonzero, the capture request is filled in:
vd->vmmap.height = vd->hdrheight;
vd->vmmap.width = vd->hdrwidth;
vd->vmmap.format = palette;
and a VIDIOCMCAPTURE is issued to test whether capture actually works. If it succeeds, vd->formatin = palette;.
Next, according to needsize and vd->sizeothers, the code checks whether other palette/size combinations are supported: it tests whether the requested palette and size are available, and otherwise returns the next available palette and size, with palettes chosen in preference order JPEG, YUV420P, RGB24, RGB565, RGB32.
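The "next available palette by preference" step can be sketched as a scan of the availability bitmask in the order JPEG, YUV420P, RGB24, RGB565, RGB32. The function name is illustrative; only the mask layout is taken from the macros above:

```c
#define MASQ    1
#define JPG     MASQ
#define YUV420P (MASQ << 1)
#define RGB24   (MASQ << 2)
#define RGB565  (MASQ << 3)
#define RGB32   (MASQ << 4)

/* Sketch: given a bitmask of palettes the driver accepted, return the
 * best one in preference order JPEG > YUV420P > RGB24 > RGB565 > RGB32. */
static int best_palette(int available)
{
    static const int preference[] = { JPG, YUV420P, RGB24, RGB565, RGB32 };
    unsigned i;

    for (i = 0; i < sizeof preference / sizeof preference[0]; i++)
        if (available & preference[i])
            return preference[i];
    return 0;   /* nothing usable */
}
```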
2.2.2
vd->videopict.palette = vd->formatin;       /* fill in the video_picture structure */
vd->videopict.depth = getdepth(vd->formatin);
vd->bppin = getdepth(vd->formatin);
2.2.3
vd->framesizein = (vd->hdrwidth * vd->hdrheight * vd->bppin) >> 3;  /* frame size in bytes */
erreur = setvideopict(vd);
erreur = getvideopict(vd);
if (vd->formatin != vd->videopict.palette ||
    vd->bppin != vd->videopict.depth)
    exit_fatal("Couldn't set video palette, aborting!");
if (erreur < 0)
    exit_fatal("Couldn't set video palette, aborting!");
2.2.4
Start capturing the first two frames of video data. The pointer to the data is pframebuffer.
3. Start the video capture thread with pthread_create(&w1, NULL, (void *) grab, NULL) and enter the grab function.
3.1 vd->vmmap.format = vd->formatin;
VIDIOCSYNC: wait for the frame whose capture was started during init to complete.
3.2 Finish the capture and do the JPEG compression. There is a lot going on here.
jpegsize = convertframe(vd->ptframe[vd->frame_cour] + sizeof(struct frame_t),
                        vd->pframebuffer + vd->videombuf.offsets[vd->vmmap.frame],
                        vd->hdrwidth, vd->hdrheight, vd->formatin, qualite);
Inside int convertframe(unsigned char *dst, unsigned char *src, int width, int height, int formatin, int qualite), a switch (formatin) compresses the data according to the palette and returns the size of the compressed data. If the palette is VIDEO_PALETTE_JPEG, the hardware has already compressed the data and no software compression is needed. Every palette other than VIDEO_PALETTE_JPEG goes through the encoder uint32 encode_image(uint8 *input_ptr, uint8 *output_ptr, uint32 quality_factor, uint32 image_format, uint32 image_width, uint32 image_height), where image_format is the input palette. The data is compressed with Huffman coding; the detailed code is in huffman.c and encode.c.
4. Start the remote transmission thread: pthread_create(&server_th, NULL, (void *) service, &new_sock) spawns one service thread per connection accepted by the blocking accept() loop.
The service thread first reads the client request structure from the connection, read(sock, (unsigned char *) &message, sizeof(struct client_t)), and decides how to transmit according to the message content. It then sends an unoccupied compressed frame, chosen according to framelock and frame_cour.
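The interplay of framelock and frame_cour from the vdin structure can be sketched as follows: the grabber publishes each completed frame by advancing frame_cour through the four ptframe buffers, and a frame is sent only if its slot is not occupied. This is a simplified single-threaded sketch with illustrative function names; in the real code the two sides run in the grab and service threads, and every access below is guarded by vd->grabmutex:

```c
#define NFRAMES 4

static int framelock[NFRAMES];  /* 1 = buffer occupied (being converted or sent) */
static int frame_cour = 0;      /* index of the most recently completed frame */

/* Grabber side: after filling and converting a buffer, publish it and
 * move on to the next slot (locking via grabmutex omitted here). */
static void frame_done(void)
{
    frame_cour = (frame_cour + 1) % NFRAMES;
}

/* Sender side: claim the latest frame unless it is still occupied. */
static int claim_frame(void)
{
    if (framelock[frame_cour])
        return -1;                  /* busy: skip this round */
    framelock[frame_cour] = 1;      /* mark occupied until fully sent */
    return frame_cour;
}

/* Sender side: release the slot once the frame has been transmitted. */
static void release_frame(int idx)
{
    framelock[idx] = 0;
}
```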