I've been studying USB cameras recently, and along the way I wrote the JPEG-encoding functions below. Most of them I found online and modified slightly, but tracking them down took a lot of time. There are plenty of JPEG-encoding tutorials on the Internet and the general process is always the same; I haven't studied it in depth, so I won't cover it here. Following on from my previous Camera articles, here I mainly compress the captured YUV data to JPEG and pack the video stream as MJPEG. The camera delivers data in YUV422 (16 bits per pixel): when initializing the camera you set the pixel format, and we set V4L2_PIX_FMT_YUYV, whose actual layout is YUV422 (you can look this up online). The specific process here is to convert YUV422 to RGB888 and then RGB888 to JPEG. You can also convert YUV to JPEG directly; I didn't get that working for this format, but I include the related code below. Now straight to the code.
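To make the YUYV layout concrete, here is a small illustrative sketch (not from the article): in V4L2_PIX_FMT_YUYV each 4-byte group Y0 U Y1 V encodes two horizontally adjacent pixels that share one U and one V sample. The `yuyv_pixel` helper is a hypothetical name I introduce just for this illustration.

```c
#include <assert.h>

/* One decoded sample triple for a single pixel. */
struct yuv { unsigned char y, u, v; };

/* Fetch pixel x of a row stored as YUYV; row points at the row start.
 * Every 4 bytes hold two pixels: Y0 U Y1 V. */
struct yuv yuyv_pixel(const unsigned char *row, int x)
{
    const unsigned char *pair = row + (x / 2) * 4; /* 4 bytes per 2 pixels */
    struct yuv p;
    p.y = pair[(x & 1) ? 2 : 0]; /* odd pixel takes Y1, even takes Y0 */
    p.u = pair[1];               /* U is shared by both pixels */
    p.v = pair[3];               /* V is shared by both pixels */
    return p;
}
```

This shared-chroma layout is why the conversion code below advances `pu` and `pv` only every second column.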
First, convert YUV422 to RGB888:
static void YUV422toRGB888(int width, int height, unsigned char *src, unsigned char *dst)
{
    int line, column;
    unsigned char *py, *pu, *pv;
    unsigned char *tmp = dst;

    /* In this format each four bytes is two pixels. Each four bytes is two Y's,
     * a Cb and a Cr. Each Y goes to one of the pixels, and the Cb and Cr belong
     * to both pixels. */
    py = src;
    pu = src + 1;
    pv = src + 3;

#define CLIP(x) ((x) >= 0xFF ? 0xFF : ((x) <= 0x00 ? 0x00 : (x)))

    for (line = 0; line < height; ++line) {
        for (column = 0; column < width; ++column) {
            *tmp++ = CLIP((double)*py + 1.402 * ((double)*pv - 128.0));
            *tmp++ = CLIP((double)*py - 0.344 * ((double)*pu - 128.0) - 0.714 * ((double)*pv - 128.0));
            *tmp++ = CLIP((double)*py + 1.772 * ((double)*pu - 128.0));

            /* advance py every pixel */
            py += 2;
            /* advance pu, pv every second pixel */
            if ((column & 1) == 1) {
                pu += 4;
                pv += 4;
            }
        }
    }
}
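A quick sanity check on the per-pixel math: the snippet below applies the same BT.601-style formulas as the loop body above to a single pixel, so the result is easy to verify by hand (a mid-gray YUV triple should map to mid-gray RGB). The `yuv_to_rgb` helper is my own name for this illustration, not part of the article's code.

```c
#include <assert.h>

/* Same clamp as in YUV422toRGB888 above. */
#define CLIP(x) ((x) >= 0xFF ? 0xFF : ((x) <= 0x00 ? 0x00 : (x)))

/* Convert one YUV sample triple to RGB using the article's coefficients. */
void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = (unsigned char)CLIP((double)y + 1.402 * ((double)v - 128.0));
    *g = (unsigned char)CLIP((double)y - 0.344 * ((double)u - 128.0)
                                       - 0.714 * ((double)v - 128.0));
    *b = (unsigned char)CLIP((double)y + 1.772 * ((double)u - 128.0));
}
```

With U = V = 128 the chroma terms vanish, so gray stays gray and the clamp handles the Y extremes.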
Then convert RGB888 to JPEG:
static int jpeg_mem_copy(unsigned char *img, unsigned char *dest)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPROW row_pointer[1];
    unsigned char *pbuf = NULL;
    unsigned long jpglen = 0; /* jpeg_mem_dest expects an unsigned long * */

    /* create the jpeg compressor */
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    /* jpeg_stdio_dest(&cinfo, fp); -- write to memory instead of a file */
    jpeg_mem_dest(&cinfo, &pbuf, &jpglen);

    /* set image parameters */
    cinfo.image_width = mwidth;
    cinfo.image_height = mheight;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;

    /* set jpeg compression parameters to default, then adjust the quality */
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 80, TRUE);

    jpeg_start_compress(&cinfo, TRUE);

    /* feed the RGB data one scanline at a time */
    while (cinfo.next_scanline < cinfo.image_height) {
        row_pointer[0] = &img[cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    /* finish compression and release the compressor */
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);

    memcpy(dest, pbuf, jpglen);
    /* LOGD("++++++++++++++++len is %lu\n", jpglen); */
    if (pbuf)
        free(pbuf);
    return jpglen;
}
Here I am using the latest libjpeg release, 9a, which already provides the jpeg_mem_dest function. There are also plenty of guides online on porting libjpeg.
The following is the interface I expose to the upper (Java) layer:
JNIEXPORT jint JNICALL Java_com_hclydao_usbcamera_Fimcgzsd_writefile(JNIEnv *env, jclass obj, jbyteArray yuvdata, jbyteArray filename)
{
    jbyte *ydata = (jbyte *)(*env)->GetByteArrayElements(env, yuvdata, 0);
    jbyte *filedir = (jbyte *)(*env)->GetByteArrayElements(env, filename, 0);
    FILE *outfile;

    if ((outfile = fopen((const char *)filedir, "wb")) == NULL) {
        LOGE("++++++++++++open %s failed\n", filedir);
        return -1;
    }

    /* yuv422_to_jpeg(ydata, mwidth, mheight, outfile, 80); */
    unsigned char *src = (unsigned char *)ydata;
    unsigned char *dst = malloc(mwidth * mheight * 3);
    unsigned char *jpgdata = malloc(mwidth * mheight * 3);

    YUV422toRGB888(mwidth, mheight, src, dst);
    int size = jpeg_mem_copy(dst, jpgdata);
    fwrite(jpgdata, size, 1, outfile);

    if (dst)
        free(dst);
    if (jpgdata)
        free(jpgdata);
    fclose(outfile);

    (*env)->ReleaseByteArrayElements(env, yuvdata, ydata, 0);
    (*env)->ReleaseByteArrayElements(env, filename, filedir, 0);
    return 0;
}
The arguments are the captured YUV data and the path of the JPEG file to save. Some of the parameters are global variables I declared earlier; see the previous articles for details.
The following are the video-stream interfaces:
FILE *video_file;

/* open the output file for the mjpeg stream */
JNIEXPORT jint JNICALL Java_com_hclydao_usbcamera_Fimcgzsd_videoopen(JNIEnv *env, jclass obj, jbyteArray filename)
{
    jbyte *filedir = (jbyte *)(*env)->GetByteArrayElements(env, filename, 0);

    if ((video_file = fopen((const char *)filedir, "wb")) == NULL) {
        LOGE("++++++++++++open %s failed\n", filedir);
        return -1;
    }
    (*env)->ReleaseByteArrayElements(env, filename, filedir, 0);
    return 0;
}

/* compress one frame and append it to the stream */
JNIEXPORT jint JNICALL Java_com_hclydao_usbcamera_Fimcgzsd_videostart(JNIEnv *env, jclass obj, jbyteArray yuvdata)
{
    jbyte *ydata = (jbyte *)(*env)->GetByteArrayElements(env, yuvdata, 0);
    unsigned char *src = (unsigned char *)ydata;
    unsigned char *dst = malloc(mwidth * mheight * 3);
    unsigned char *jpgdata = malloc(mwidth * mheight * 3);

    YUV422toRGB888(mwidth, mheight, src, dst);
    int size = jpeg_mem_copy(dst, jpgdata);
    fwrite(jpgdata, size, 1, video_file);
    /* fwrite(dst, mwidth * mheight * 3, 1, video_file); */

    if (dst)
        free(dst);
    if (jpgdata)
        free(jpgdata);
    (*env)->ReleaseByteArrayElements(env, yuvdata, ydata, 0);
    return 0;
}

JNIEXPORT jint JNICALL Java_com_hclydao_usbcamera_Fimcgzsd_videoclose(JNIEnv *env, jclass obj)
{
    fclose(video_file);
    return 0;
}
The compressed frames are simply written one after another into the same file; note that such a file cannot be played directly even after converting it to AVI with Format Factory.
The following is a function that converts YUV to JPEG directly, but it expects YUV420P input. I tried adapting it to YUV422 many times and the saved images always came out wrong; it seems I still need to study the differences between these formats.
/* put_jpeg_yuv420p_memory converts an input image in the YUV420P format into
 * a jpeg image and puts it in a memory buffer.
 * Inputs:
 * - input_image is the image in YUV420P format.
 * - width and height are the dimensions of the image
 * Output:
 * - dest_image is a pointer to the jpeg image buffer
 * Returns buffer size of jpeg image
 */
static int put_jpeg_yuv420p_memory(unsigned char *dest_image, unsigned char *input_image, int width, int height)
{
    int i, j;
    JSAMPROW y[16], cb[16], cr[16]; /* y[2][5] = luma sample of row 2, column 5 (one plane) */
    JSAMPARRAY data[3];             /* data[0][2][5] = component 0 sample of row 2, column 5 */
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    unsigned char *pbuf = NULL;
    unsigned long jpglen = 0; /* jpeg_mem_dest expects an unsigned long * */

    data[0] = y;
    data[1] = cb;
    data[2] = cr;

    cinfo.err = jpeg_std_error(&jerr); /* errors get written to stderr */
    jpeg_create_compress(&cinfo);
    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;
    jpeg_set_defaults(&cinfo);
    jpeg_set_colorspace(&cinfo, JCS_YCbCr);

    cinfo.raw_data_in = TRUE;            /* supply downsampled data directly */
    cinfo.do_fancy_downsampling = FALSE; /* fix segfaults with v7 */
    cinfo.comp_info[0].h_samp_factor = 2;
    cinfo.comp_info[0].v_samp_factor = 2;
    cinfo.comp_info[1].h_samp_factor = 1;
    cinfo.comp_info[1].v_samp_factor = 1;
    cinfo.comp_info[2].h_samp_factor = 1;
    cinfo.comp_info[2].v_samp_factor = 1;

    jpeg_set_quality(&cinfo, 80, TRUE);
    cinfo.dct_method = JDCT_FASTEST;

    jpeg_mem_dest(&cinfo, &pbuf, &jpglen); /* data written to memory */
    jpeg_start_compress(&cinfo, TRUE);

    /* note: this loop assumes height is a multiple of 16 */
    for (j = 0; j < height; j += 16) {
        for (i = 0; i < 16; i++) {
            y[i] = input_image + width * (i + j);
            if (i % 2 == 0) {
                cb[i / 2] = input_image + width * height + width / 2 * ((i + j) / 2);
                cr[i / 2] = input_image + width * height + width * height / 4 + width / 2 * ((i + j) / 2);
            }
        }
        jpeg_write_raw_data(&cinfo, data, 16);
    }

    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    memcpy(dest_image, pbuf, jpglen);
    if (pbuf)
        free(pbuf);
    return jpglen;
}
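On the format differences: YUYV is a packed 4:2:2 layout at 2 bytes per pixel, while YUV420P is planar at 1.5 bytes per pixel with the Y, Cb, and Cr planes stored back to back, which is why feeding raw YUYV into the function above produces garbage. A small sketch of the size and offset arithmetic (the helper names are mine, not from the article):

```c
#include <assert.h>
#include <stddef.h>

/* Packed YUYV (4:2:2): every pixel has a Y, every pixel pair shares U and V. */
size_t yuyv_size(int w, int h)    { return (size_t)w * h * 2; }

/* Planar YUV420P: full-size Y plane, then quarter-size Cb and Cr planes. */
size_t yuv420p_size(int w, int h) { return (size_t)w * h * 3 / 2; }

/* Plane offsets within a YUV420P buffer -- the same arithmetic the
 * cb[]/cr[] pointer setup in put_jpeg_yuv420p_memory relies on. */
size_t cb_offset(int w, int h) { return (size_t)w * h; }
size_t cr_offset(int w, int h) { return (size_t)w * h + (size_t)w * h / 4; }
```

So converting YUV422 input for this function would mean repacking and vertically downsampling the chroma into these planes first, not just reinterpreting the buffer.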
I have been a bit scattered recently, so I have not studied this further, and I keep wondering whether it is worth digging deeper. Next I want to look at compressing to H.264 with FFmpeg.