There are some mismatches between the image formats used on Windows and the image formats on Android mobile devices. In simple terms:
The data coming from the camera is YUV, the data displayed on the mobile device is RGBA, but the data processed in the C++ program is RGB. So you need to convert the data. The specific steps are as follows:
0. Preparation prior to use.
Using the camera requires adding the camera permissions to the AndroidManifest.xml file first:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-feature android:name="android.hardware.camera"/>
<uses-feature android:name="android.hardware.camera.autofocus"/>
Some small tips:
Since Android 2.3 (API level 9) you can use Camera.open(int) to open a specific camera.
Since API level 9 you can also use Camera.getCameraInfo()
to check whether a camera faces the front or the back of the device, and to get the orientation of the image.
The camera is a device resource shared by all applications. It should be released promptly when your application is not using it, normally in Activity.onPause().
If it is not released in time, subsequent camera requests (from your own app or from other apps) will fail and can make those apps quit.
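As a rough sketch of these tips, assuming a single mCamera field (the openFrontCamera helper name is just for illustration):

private Camera mCamera;

// Illustrative helper: find and open the front-facing camera (API level 9+).
private Camera openFrontCamera() {
    Camera.CameraInfo info = new Camera.CameraInfo();
    for (int id = 0; id < Camera.getNumberOfCameras(); id++) {
        Camera.getCameraInfo(id, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            return Camera.open(id); // info.orientation gives the image rotation
        }
    }
    return Camera.open(); // fall back to the default back-facing camera
}

@Override
protected void onPause() {
    super.onPause();
    if (mCamera != null) {
        mCamera.setPreviewCallback(null); // stop receiving frames
        mCamera.stopPreview();
        mCamera.release(); // the camera is shared, give it back promptly
        mCamera = null;
    }
}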
1. Access to data:
You need to turn on the camera first:
Camera mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
p.setPreviewFormat(ImageFormat.NV21); /* NV21 is the default (and universally supported) preview format, so this can also be left unset. */
mCamera.setParameters(p);
// A preview surface (setPreviewDisplay() or setPreviewTexture()) normally has to be set before startPreview().
mCamera.startPreview();
The camera provides a callback interface for this, used as follows (pay attention to the data format in onPreviewFrame):
mCamera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // your actions
    }
});
In this callback we can obtain the current frame of data and preprocess it (compression, encryption, special effects and so on), but the data in the byte[] buffer is in YUV format, generally YUV420sp (NV21), while the SurfaceView, GLSurfaceView, TextureView and other controls that Android provides only support rendering in RGB format. So we need an algorithm to decode it, that is, to convert the data.
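For reference, an NV21 (YUV420sp) frame is a full-resolution Y plane followed by an interleaved V/U plane at quarter resolution, so the buffer delivered to onPreviewFrame holds width * height * 3 / 2 bytes. A quick sanity check inside the callback, just as a sketch:

// Inside onPreviewFrame(byte[] data, Camera camera):
Camera.Size size = camera.getParameters().getPreviewSize();
int expected = size.width * size.height * 3 / 2; // Y plane + interleaved V/U plane
if (data.length < expected) {
    Log.w("Preview", "Unexpected NV21 buffer size: " + data.length + " < " + expected);
}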
2. Data conversion (there are three ways, the third one using JNI):
The first way:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    try {
        YuvImage image = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        if (image != null) {
            ByteArrayOutputStream stream = new ByteArrayOutputStream();
            image.compressToJpeg(new Rect(0, 0, size.width, size.height), 80 /* JPEG quality, 0-100 */, stream);
            Bitmap bmp = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
            stream.close();
        }
    } catch (Exception ex) {
        Log.e("Sys", "Error: " + ex.getMessage());
    }
}
In fact, there is no need to compress the data after obtaining it; compression reduces the frame rate:
An example of transmitting video of the same size:

| Scheme | Compression ratio | Compression / transmission mode | Real-time performance | Average bandwidth | Transmission distance |
| --- | --- | --- | --- | --- | --- |
| Send the raw YUV420 data from the camera's callback function | 0 | No compression, transferred frame by frame | High (20~30 fps) | Very high (6.5 Mbps), which is horrible | Wired or wireless, short distance |
| H264 hard-encode the YUV420 with MediaRecorder and send it | High (95%) | Inter-frame compression, video stream transmission | High (fps) | Low (30~70 Kbps) | Can be long distance |
| Call a local H264 encoding library (JNI) to encode each YUV420 frame before sending | High (97%) | Inter-frame compression, transferred frame by frame | Low (2 fps) | Low (Kbps) | Can be long distance |
| Compress each frame with the gzip library (a rather odd practice) | Higher (70%~80%) | Intra-frame compression, transferred frame by frame | Low (5 fps) | Higher (Kbps) | Can be long distance |
| Compress each frame as JPEG before transmission | Average (about 60%) | Intra-frame compression, transferred frame by frame | High (fps) | High (Kbps) | Can be long distance (bandwidth permitting) |
BitmapFactory.decodeByteArray is also said to be slow.
You can instead use the stream just to get the raw bytes:
byte[] tmp = stream.toByteArray();
and then deal with them in other ways.
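A minimal sketch of that idea: compress the NV21 frame to JPEG and hand the bytes off, skipping BitmapFactory.decodeByteArray entirely (the sendFrame consumer is hypothetical):

// Inside onPreviewFrame:
Camera.Size size = camera.getParameters().getPreviewSize();
YuvImage image = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
image.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, stream);
byte[] tmp = stream.toByteArray();
// sendFrame(tmp); // hypothetical consumer, e.g. a network or encoder thread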
The second way:
public Bitmap rawByteArray2RGBABitmap2(byte[] data, int width, int height) {
    int frameSize = width * height;
    int[] rgba = new int[frameSize];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            // Extract the YUV components of this pixel from the NV21 buffer
            int y = (0xff & ((int) data[i * width + j]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            // YUV -> RGB conversion
            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            // Clamp to [0, 255]
            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            rgba[i * width + j] = 0xff000000 + (b << 16) + (g << 8) + r;
        }
    }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0, width, 0, 0, width, height);
    return bmp;
}
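For context, a sketch of calling it from the preview callback (how the resulting Bitmap is drawn is up to you):

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    Bitmap bmp = rawByteArray2RGBABitmap2(data, size.width, size.height);
    // draw bmp onto a SurfaceView / ImageView as needed
}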
The third way (JNI + OpenCV):
#include <jni.h>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;

// General form: JNIEXPORT void JNICALL Java_<package>_<class>_<function>(JNIEnv* env, jobject, <args>)
JNIEXPORT void JNICALL Java_com_dvt_peddetec_PedDetec_YUV2RGB(JNIEnv* env, jobject,
        jint width, jint height, jbyteArray yuv, jintArray bgr)
{
    jbyte* _yuv = env->GetByteArrayElements(yuv, 0);
    jint* _bgr = env->GetIntArrayElements(bgr, 0);

    // NV21 input: a full-resolution Y plane plus a half-height interleaved V/U plane
    Mat mYuv(height + height / 2, width, CV_8UC1, (uchar*) _yuv);
    // The Java int[] holds 4 bytes per pixel, so wrap it as a 4-channel image
    Mat mBgra(height, width, CV_8UC4, (uchar*) _bgr);

    Mat mBgr(height, width, CV_8UC3);
    cvtColor(mYuv, mBgr, CV_YUV420sp2BGR);
    cvtColor(mBgr, mBgra, CV_BGR2BGRA); // for display

    env->ReleaseByteArrayElements(yuv, _yuv, 0);
    env->ReleaseIntArrayElements(bgr, _bgr, 0);
}
Correction: in cvtColor(mYuv, mBgr, CV_YUV420sp2BGR), the conversion code to use is CV_YUV420sp2BGR.
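For completeness, a sketch of the matching Java side, assuming the class is com.dvt.peddetec.PedDetec and the native library is named "peddetec" (both inferred from the JNI function name above, so adjust them to your project):

package com.dvt.peddetec;

import android.graphics.Bitmap;

public class PedDetec {
    static {
        System.loadLibrary("peddetec"); // assumed library name, must match your NDK module
    }

    // Matches Java_com_dvt_peddetec_PedDetec_YUV2RGB on the native side.
    public native void YUV2RGB(int width, int height, byte[] yuv, int[] bgra);

    public Bitmap convert(byte[] nv21, int width, int height) {
        int[] bgra = new int[width * height];
        YUV2RGB(width, height, nv21, bgra);
        // On little-endian devices, the BGRA bytes written natively line up with
        // the ARGB ints that Bitmap.setPixels expects.
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.setPixels(bgra, 0, width, 0, 0, width, height);
        return bmp;
    }
}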