The concept of ZSL
ZSL (Zero Shutter Lag), known in Chinese as zero-delay capture, is a technique that reduces shutter lag so that pressing the shutter and getting the picture back feel instantaneous.
Single Shot
When preview starts, the sensor and VFE produce both preview and snapshot frames, and the latest snapshot frames are kept in a buffer. When the shutter is triggered, the system calculates the actual capture time, finds the corresponding frame in the buffer, and returns that frame to the user, which is why the lag is "zero".
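The timestamp matching described above can be sketched in plain Java. This is a hypothetical illustration, not AOSP code: given frames keyed by sensor timestamp, it picks the cached frame closest to the moment the user pressed the shutter, compensating for the measured shutter lag.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ZslFrameSelector {
    // Return the timestamp of the cached frame nearest to
    // (trigger time - shutter lag); null if the buffer is empty.
    public static Long selectFrame(NavigableMap<Long, byte[]> buffer,
                                   long triggerTimeNs, long shutterLagNs) {
        long targetNs = triggerTimeNs - shutterLagNs;
        Long floor = buffer.floorKey(targetNs);   // newest frame at or before target
        Long ceil = buffer.ceilingKey(targetNs);  // oldest frame at or after target
        if (floor == null) return ceil;
        if (ceil == null) return floor;
        // choose whichever cached frame is nearer to the target instant
        return (targetNs - floor <= ceil - targetNs) ? floor : ceil;
    }

    public static void main(String[] args) {
        NavigableMap<Long, byte[]> buffer = new TreeMap<>();
        for (long t = 0; t <= 495; t += 33) buffer.put(t, new byte[0]); // ~30 fps
        // shutter pressed at t=400 (example units), measured lag 50 -> target 350
        System.out.println(selectFrame(buffer, 400, 50)); // prints 363
    }
}
```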
The system calculates the shutter-lag time and then takes the matching buffered frame as the real-time frame data.

The implementation mechanism of ZSL
An ZSL implementation needs a few pieces:
1. A SurfaceView for preview
2. A queue that caches snapshot data
3. A way to take one frame from the queue as the capture output
4. YUV-to-JPEG transcoding of the capture output
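Requirement 2 above is essentially a fixed-capacity ring buffer that always holds the newest snapshot frames. A minimal sketch, with illustrative names that are not from the AOSP sources:

```java
import java.util.ArrayDeque;

public class SnapshotCache {
    private final int capacity;
    // each entry models one snapshot frame, tracked here just by timestamp
    private final ArrayDeque<long[]> frames = new ArrayDeque<>();

    public SnapshotCache(int capacity) { this.capacity = capacity; }

    public void offer(long timestampNs) {
        if (frames.size() == capacity) {
            frames.pollFirst(); // drop the oldest frame to make room
        }
        frames.addLast(new long[]{timestampNs});
    }

    public long oldest() { return frames.peekFirst()[0]; }
    public long newest() { return frames.peekLast()[0]; }
    public int size() { return frames.size(); }

    public static void main(String[] args) {
        SnapshotCache cache = new SnapshotCache(3);
        for (long t = 1; t <= 5; t++) cache.offer(t);
        // only the 3 newest frames survive
        System.out.println(cache.size() + " " + cache.oldest() + " " + cache.newest());
        // prints "3 3 5"
    }
}
```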
First, note that the ZSL implementations on Android 4.4 and Android 5.0 differ.
On Android 4.4, steps 2 and 3 are implemented in the HAL layer. The HAL maintains the cache queue; when it receives the take_picture command it takes one frame from the cache, transcodes it, and then, just like the normal capture flow, notifies the application layer of the captured data via android.hardware.Camera.PictureCallback.
On Android 5.0, steps 2 and 3 are implemented in the application layer. When preview starts, the application passes two surfaces down to the HAL: the HAL fills one with preview data and the other with snapshot data. The application layer continuously reads snapshot data from its surface to maintain a cache queue, and when the user executes take_picture it reads the cached queue's data as the capture result.
The application layer in Android 5.0 already contains a class that implements ZSL:
src/com/android/camera/one/v2/OneCameraZslImpl.java
This path is not invoked by default, because the default HAL does not implement the required support; each vendor has to provide its own implementation before it can be enabled.
For now we will not look at how the application layer invokes OneCameraZslImpl.java; instead we go straight to how OneCameraZslImpl uses Camera API 2.0 to implement ZSL capture.

ZSL preview with Camera API 2.0
The startPreview code can be found in OneCameraZslImpl.java, as follows:
@Override
public void startPreview(Surface previewSurface, CaptureReadyCallback listener) {
    mPreviewSurface = previewSurface;
    setupAsync(mPreviewSurface, listener);
}
setupAsync is passed two arguments: the first, mPreviewSurface, is the surface used for preview; the second is a callback that fires when the pipeline is ready for capture.
* The Android 5.0 camera application's framework is full of this kind of indirection, which almost seems designed to make the code hard to read.
The setupAsync method invokes setup asynchronously, starting the preview:
private void setupAsync(final Surface previewSurface,
        final CaptureReadyCallback listener) {
    mCameraHandler.post(new Runnable() {
        @Override
        public void run() {
            setup(previewSurface, listener);
        }
    });
}
Now look at the setup method, which is the key point where the application layer interacts with the HAL and starts caching capture-queue data.
private void setup(Surface previewSurface, final CaptureReadyCallback listener) {
    ......
    List<Surface> outputSurfaces = new ArrayList<Surface>(2);
    outputSurfaces.add(previewSurface); // surface used for preview
    outputSurfaces.add(mCaptureImageReader.getSurface()); // surface used for capture
    // mDevice is the framework's CameraDeviceImpl.java object,
    // the object through which the app layer talks to the HAL layer
    mDevice.createCaptureSession(outputSurfaces,
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                    ......
                }

                @Override
                public void onConfigured(CameraCaptureSession session) {
                    ...... // preview started successfully
                }
Look directly at createCaptureSession in the framework: it creates a new CameraCaptureSessionImpl and, on success, invokes the listener's onConfigured callback, handing the new session back to the application. The createCaptureSession method that creates the session:
frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
@Override
public void createCaptureSession(List<Surface> outputs,
        CameraCaptureSession.StateCallback callback, Handler handler)
        throws CameraAccessException {
    .........
    // create a new session
    CameraCaptureSessionImpl newSession =
            new CameraCaptureSessionImpl(mNextSessionId++,
                    outputs, callback, handler, this, mDeviceHandler,
                    configureSuccess);
    ......
}
Looking at the application's onConfigured method, it calls sendRepeatingCaptureRequest:
mCaptureSession = session;
.......
// start processing of the cache queue
boolean success = sendRepeatingCaptureRequest();
if (success) {
    mReadyStateManager.setInput(ReadyStateRequirement.CAPTURE_NOT_IN_PROGRESS,
            true);
    mReadyStateManager.notifyListeners();
    listener.onReadyForCapture();
} else {
    listener.onSetupFailed();
}
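The ReadyStateManager used here tracks several boolean requirements and only reports "ready for capture" when all of them hold at once. The following is a simplified model of that logic, an assumption about its behavior rather than the AOSP class itself:

```java
import java.util.EnumMap;
import java.util.Map;

public class ReadyStateModel {
    // illustrative requirement names, loosely modeled on the AOSP ones
    enum Requirement { READY_FOR_CAPTURE, CAPTURE_NOT_IN_PROGRESS }

    private final Map<Requirement, Boolean> inputs =
            new EnumMap<>(Requirement.class);

    public ReadyStateModel() {
        for (Requirement r : Requirement.values()) inputs.put(r, false);
    }

    public void setInput(Requirement r, boolean value) { inputs.put(r, value); }

    public boolean getReadyState() {
        // AND over all inputs: any false requirement blocks capture
        return !inputs.containsValue(false);
    }

    public static void main(String[] args) {
        ReadyStateModel m = new ReadyStateModel();
        m.setInput(Requirement.CAPTURE_NOT_IN_PROGRESS, true);
        System.out.println(m.getReadyState()); // false: still waiting on READY_FOR_CAPTURE
        m.setInput(Requirement.READY_FOR_CAPTURE, true);
        System.out.println(m.getReadyState()); // true: all requirements satisfied
    }
}
```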
The sendRepeatingCaptureRequest method uses CameraDeviceImpl to create a capture request and sets it as a repeating request, which keeps the cache filling.
private boolean sendRepeatingCaptureRequest() {
    builder = mDevice.
            createCaptureRequest(CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG);
    // TEMPLATE_ZERO_SHUTTER_LAG is very important: the HAL uses this
    // parameter to know it should start producing data in the ZSL format
    builder.addTarget(mPreviewSurface);
    builder.addTarget(mCaptureImageReader.getSurface());
    // notify the HAL to execute a repeating request
    mCaptureSession.setRepeatingRequest(builder.build(), mCaptureManager,
            mCameraHandler);
    return true;
}
ZSL capture with Camera API 2.0
As described above, executing the capture command actually just fetches data from the cache queue; the ZSL cache is built on ImageReader, a utility class provided by the framework.
Look at how the ImageReader is instantiated:
mCaptureImageReader = ImageReader.newInstance(pictureSize.getWidth(),
        pictureSize.getHeight(),
        sCaptureImageFormat, MAX_CAPTURE_IMAGES);
sCaptureImageFormat is defined as follows:
/**
 * Set to ImageFormat.JPEG to use the hardware encoder, or
 * ImageFormat.YUV_420_888 to use the software encoder. No other image
 * formats are supported.
 */
private static final int sCaptureImageFormat = ImageFormat.YUV_420_888;
sCaptureImageFormat can be set to JPEG or to YUV_420_888. JPEG requires the HAL to transcode, which raises efficiency concerns; YUV_420_888 requires the application layer to transcode, and if the application is allocated too little memory it may even crash outright.
MAX_CAPTURE_IMAGES defines the number of cached frames.
/**
 * The maximum number of images to store in the full-size ZSL ring buffer.
 * <br>
 * TODO: Determine this number dynamically based on available memory and the
 * size of frames.
 */
private static final int MAX_CAPTURE_IMAGES = 10;
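A back-of-the-envelope check shows why this ring buffer is a memory concern: a YUV_420_888 frame needs width * height * 3/2 bytes, so ten buffered 1080p frames already take roughly 30 MB of application heap. The arithmetic:

```java
public class ZslMemoryEstimate {
    // YUV 4:2:0: full-resolution Y plane plus quarter-size U and V planes
    static long yuv420Bytes(int width, int height) {
        return (long) width * height * 3 / 2;
    }

    public static void main(String[] args) {
        int maxCaptureImages = 10; // MAX_CAPTURE_IMAGES above
        long perFrame = yuv420Bytes(1920, 1080);
        long total = perFrame * maxCaptureImages;
        System.out.println(perFrame);              // 3110400 bytes per frame
        System.out.println(total / (1024 * 1024)); // 29 (MB) for the full buffer
    }
}
```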
OneCameraZslImpl.java also contains a takePicture method, as follows:
@Override
public void takePicture(final PhotoCaptureParameters params,
        final CaptureSession session) {
    // stop reading into the cache
    mReadyStateManager.setInput(
            ReadyStateRequirement.CAPTURE_NOT_IN_PROGRESS, false);
    // read a cached picture directly
    boolean capturedPreviousFrame = mCaptureManager.tryCaptureExistingImage(
            new ImageCaptureTask(params, session), zslConstraints);
}
ImageCaptureTask implements the ImageCaptureListener interface and its onImageCaptured method:
@Override
public void onImageCaptured(Image image, TotalCaptureResult captureResult) {
    ......
    mReadyStateManager.setInput(
            ReadyStateRequirement.CAPTURE_NOT_IN_PROGRESS, true);
    mSession.startEmpty();
    savePicture(image, mParams, mSession);
    ......
    mParams.callback.onPictureTaken(mSession);
}
Now let us see how the ImageCaptureManager class fetches the picture; the method is as follows:
public boolean tryCaptureExistingImage(final ImageCaptureListener onImageCaptured,
        final List<CapturedImageConstraint> constraints) {
    ......
    // create a Selector that picks the image to capture
    selector = new Selector<ImageCaptureManager.CapturedImage>() {
        @Override
        public boolean select(CapturedImage e) {
            ......
    };
    // this is where the capture data is actually fetched
    final Pair<Long, CapturedImage> toCapture =
            mCapturedImageBuffer.tryPinGreatestSelected(selector);
    return tryExecuteCaptureOrRelease(toCapture, onImageCaptured);
}
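Conceptually, tryPinGreatestSelected scans the cached frames from newest to oldest and returns the first one the selector accepts, for example a frame whose exposure had converged. The following is a hypothetical model of that selection, not the AOSP implementation (which also pins the frame against reuse):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class GreatestSelected {
    // frames: timestamp -> whether the frame passes the constraints
    public static Long selectNewestAcceptable(NavigableMap<Long, Boolean> frames) {
        for (Map.Entry<Long, Boolean> e : frames.descendingMap().entrySet()) {
            if (e.getValue()) return e.getKey(); // newest frame passing the constraint
        }
        return null; // no cached frame satisfied the constraints
    }

    public static void main(String[] args) {
        NavigableMap<Long, Boolean> frames = new TreeMap<>();
        frames.put(100L, true);
        frames.put(133L, true);
        frames.put(166L, false); // e.g. exposure not converged on the newest frame
        System.out.println(selectNewestAcceptable(frames)); // prints 133
    }
}
```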
The tryExecuteCaptureOrRelease method calls back into ImageCaptureTask's onImageCaptured, which in turn calls savePicture to save the data. The savePicture method:
private void savePicture(Image image, final PhotoCaptureParameters captureParams,
        CaptureSession session) {
    // TODO: Add more EXIF tags here.
    ......
    session.saveAndFinish(acquireJpegBytes(image), width, height, rotation, exif,
            new OnMediaSavedListener() {
                @Override
                public void onMediaSaved(Uri uri) {
                    captureParams.callback.onPictureSaved(uri);
                }
            });
}
Because the image is generated by the application, the application is responsible for the file's header and tag information; savePicture fills in the picture's GPS, orientation, height, and width.
When saving the data, the YUV has to be converted to JPEG; Google ships a native .so library for this. The code lives in the camera app's jni directory. In the camera's Android.mk:
LOCAL_JNI_SHARED_LIBRARIES := libjni_tinyplanet libjni_jpegutil
Then look at the Android.mk under jni:
# jpegutil
include $(CLEAR_VARS)
LOCAL_CFLAGS := -std=c++11
LOCAL_NDK_STL_VARIANT := c++_static
LOCAL_LDFLAGS := -llog -ldl
LOCAL_SDK_VERSION := 9
LOCAL_MODULE := libjni_jpegutil
LOCAL_SRC_FILES := jpegutil.cpp jpegutilnative.cpp

Problem points of Camera API 2.0
It is undeniable that Camera API 2.0 gives the application more room to operate, and more flexibility for later real-time rendering. However, the following problems may exist:
1. It takes up too much application-layer memory. ZSL needs to keep 10 picture buffers in the application layer, which easily causes resource problems; this is why Google's code forcibly limits the ZSL picture size to no more than 1080p.
2. YUV-to-JPEG transcoding needs CPU computing power; if the CPU is not powerful enough, capture becomes slow. That is why Google also offers the option of caching data that the HAL has already processed into JPEG.
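To make the CPU cost in point 2 concrete, here is a sketch of the per-pixel arithmetic a software YUV-to-JPEG path must run for every pixel before JPEG encoding even begins (BT.601 full-range YUV to RGB). This is a hypothetical helper for illustration, not code from libjni_jpegutil:

```java
public class YuvToRgb {
    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    // BT.601 full-range conversion of one YUV pixel to RGB
    static int[] toRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[]{clamp(r), clamp(g), clamp(b)};
    }

    public static void main(String[] args) {
        // neutral grey: chroma offsets cancel, so R = G = B = Y
        int[] grey = toRgb(128, 128, 128);
        System.out.println(grey[0] + "," + grey[1] + "," + grey[2]); // 128,128,128
    }
}
```

Multiplying this by the roughly 3.1 million pixels of a 1080p frame shows why a weak CPU makes the software path noticeably slow.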