Playing with Android Camera Development (5): Real-Time Face Detection and Face-Rectangle Drawing Using Google's Built-In Algorithm (first published online, complete demo attached)


This article shows how to use Google's built-in FaceDetectionListener to detect faces and draw a rectangle around each detected face. The code is based on PlayCameraV1.0.0, with changes to the Camera open and preview flow: previously these ran in a separate thread, but this time they are moved into the SurfaceView lifecycle callbacks, where the camera is opened and the preview is started.

First, a confession: I released a static-image face detection demo last year and promised a real-time preview version within a week, and only now am I getting around to it. Two issues came up while building this demo. The first is that the detected face rect uses the preview's driver coordinate system, which is a transformed one: the center of the preview is (0, 0), the top-left corner is (-1000, -1000), and the bottom-right corner is (1000, 1000). No matter how large the preview SurfaceView is, the detected rect is always expressed in this coordinate system, whereas an Android view's default coordinate system has its origin (0, 0) at the top-left corner, with x running horizontally and y vertically. The rect therefore has to be transformed. The second pain point is that face detection can only be started after the camera is open; once you take a picture or stop the preview, it has to be started again, and the restart needs a small delay or it simply has no effect.

It is also worth explaining that there are two ways to detect faces in the preview in real time using Google's built-in algorithm. One is to take the YUV data from onPreviewFrame in PreviewCallback, convert it to RGB and then to a Bitmap, and run the static-image face detection flow, i.e. use the FaceDetector class. The other is to implement the FaceDetectionListener interface directly, so the detected Face[] faces arrive in onFaceDetection(); all you have to manage is when to start and when to stop detection, and both are standard Android APIs. The second approach is clearly the better choice; since Android 4.0 the camera app in the Android source has used this interface for face detection. The source code follows.
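For reference, here is a minimal, untested sketch of the first approach (it is not used in this demo, and the class name PreviewFaceDetect is made up for illustration). It assumes the preview format is NV21, the default, and keep in mind that FaceDetector only accepts RGB_565 bitmaps with an even width:

package org.yanzi.mode;

import java.io.ByteArrayOutputStream;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.media.FaceDetector;
import android.util.Log;

// Sketch of the first approach: run the static-image FaceDetector on each preview frame.
public class PreviewFaceDetect implements Camera.PreviewCallback {
    private static final String TAG = "YanZi";
    private static final int MAX_FACES = 5;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // YUV -> JPEG -> RGB_565 Bitmap. Doing this on every frame is expensive.
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.RGB_565; // FaceDetector requires RGB_565
        Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size(), opts);

        FaceDetector detector = new FaceDetector(bmp.getWidth(), bmp.getHeight(), MAX_FACES);
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        int found = detector.findFaces(bmp, faces);
        Log.i(TAG, "FaceDetector found " + found + " face(s)");
    }
}

The per-frame JPEG conversion alone makes this route much heavier than the listener-based one, which is another reason the second approach is preferred.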

1. GoogleFaceDetect.java

Since the next article will cover face detection with OpenCV via JNI, I created a new package, org.yanzi.mode, to hold everything related to images. The new file GoogleFaceDetect.java implements FaceDetectionListener; a Handler is passed in through the constructor so the detected face data can be sent to the Activity, which relays it and refreshes the UI.

package org.yanzi.mode;

import org.yanzi.util.EventUtil;

import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.Face;
import android.hardware.Camera.FaceDetectionListener;
import android.os.Handler;
import android.os.Message;
import android.util.Log;

public class GoogleFaceDetect implements FaceDetectionListener {

    private static final String TAG = "YanZi";
    private Context mContext;
    private Handler mHander;

    public GoogleFaceDetect(Context c, Handler handler) {
        mContext = c;
        mHander = handler;
    }

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        Log.i(TAG, "onFaceDetection...");
        if (faces != null) {
            Message m = mHander.obtainMessage();
            m.what = EventUtil.UPDATE_FACE_RECT;
            m.obj = faces;
            m.sendToTarget();
        }
    }

/*
    private Rect getPropUIFaceRect(Rect r) {
        Log.i(TAG, "face rect = " + r.flattenToString());
        Matrix m = new Matrix();
        boolean mirror = false;
        m.setScale(mirror ? -1 : 1, 1);
        Point p = DisplayUtil.getScreenMetrics(mContext);
        int uiWidth = p.x;
        int uiHeight = p.y;
        m.postScale(uiWidth / 2000f, uiHeight / 2000f);
        int leftNew = (r.left + 1000) * uiWidth / 2000;
        int topNew = (r.top + 1000) * uiHeight / 2000;
        int rightNew = (r.right + 1000) * uiWidth / 2000;
        int bottomNew = (r.bottom + 1000) * uiHeight / 2000;
        return new Rect(leftNew, topNew, rightNew, bottomNew);
    }
*/
}

The commented-out part above is my initial attempt at writing the matrix transform myself; after a fair bit of effort the transformed coordinates were still wrong, and I only solved it by referring to the Camera app source in Android 4.0. The transform now lives in FaceView.

2. FaceView.java

This class extends ImageView. It takes the rect out of each Face in the Face[] data, transforms it, and redraws it on the UI.

package org.yanzi.ui;

import org.yanzi.camera.CameraInterface;
import org.yanzi.playcamera.R;
import org.yanzi.util.Util;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Paint.Style;
import android.graphics.RectF;
import android.graphics.drawable.Drawable;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.Face;
import android.util.AttributeSet;
import android.widget.ImageView;

public class FaceView extends ImageView {
    private static final String TAG = "YanZi";
    private Context mContext;
    private Paint mLinePaint;
    private Face[] mFaces;
    private Matrix mMatrix = new Matrix();
    private RectF mRect = new RectF();
    private Drawable mFaceIndicator = null;

    public FaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        initPaint();
        mContext = context;
        mFaceIndicator = getResources().getDrawable(R.drawable.ic_face_find_2);
    }

    public void setFaces(Face[] faces) {
        this.mFaces = faces;
        invalidate();
    }

    public void clearFaces() {
        mFaces = null;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mFaces == null || mFaces.length < 1) {
            return;
        }
        boolean isMirror = false;
        int id = CameraInterface.getInstance().getCameraId();
        if (id == CameraInfo.CAMERA_FACING_BACK) {
            isMirror = false; // the back camera does not need mirroring
        } else if (id == CameraInfo.CAMERA_FACING_FRONT) {
            isMirror = true;  // the front camera needs mirroring
        }
        Util.prepareMatrix(mMatrix, isMirror, 90, getWidth(), getHeight());
        canvas.save();
        mMatrix.postRotate(0); // Matrix.postRotate is clockwise by default
        canvas.rotate(-0);     // Canvas.rotate() is counter-clockwise by default
        for (int i = 0; i < mFaces.length; i++) {
            mRect.set(mFaces[i].rect);
            mMatrix.mapRect(mRect);
            mFaceIndicator.setBounds(Math.round(mRect.left), Math.round(mRect.top),
                    Math.round(mRect.right), Math.round(mRect.bottom));
            mFaceIndicator.draw(canvas);
            // canvas.drawRect(mRect, mLinePaint); // alternative: draw a plain rectangle with Paint
        }
        canvas.restore();
        super.onDraw(canvas);
    }

    private void initPaint() {
        mLinePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // int color = Color.rgb(0, 150, 255);
        int color = Color.rgb(98, 212, 68);
        mLinePaint.setColor(color);
        mLinePaint.setStyle(Style.STROKE);
        mLinePaint.setStrokeWidth(5f);
        mLinePaint.setAlpha(180);
    }
}
There are two points worth noting.

1. The Rect transformation. Util.prepareMatrix(mMatrix, isMirror, 90, getWidth(), getHeight()) builds the transform that reconciles the face-detection coordinate system with the coordinate system actually used for drawing. The third parameter is 90 because both the front and back cameras are configured with mCamera.setDisplayOrientation(90).

In the Matrix.postRotate and Canvas.rotate calls that follow I pass 0, so this demo only produces correct face coordinates when the phone is held at one of the four standard orientations: 0, 90, 180, or 270 degrees. For other orientations, the angle obtained from an OrientationEventListener would need to be passed in. To keep things simple I did not do that here; see my earlier article for how OrientationEventListener is used, and a follow-up demo will cover it. A rough sketch of the wiring appears below.

Finally, mMatrix.mapRect(mRect) transforms mRect into the face Rect in UI coordinates.
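As a rough idea of what that future demo might do (setOrientation here is a hypothetical method on FaceView, not part of this code), the Activity could listen for orientation changes and hand the rounded angle to the view, which would then use it for the rotation instead of the hard-coded 0:

// Hypothetical sketch: forward the device orientation to the FaceView so the rotation
// passed to prepareMatrix / Canvas.rotate can follow the device instead of being 0.
OrientationEventListener mOrientationListener =
        new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
    @Override
    public void onOrientationChanged(int orientation) {
        if (orientation == ORIENTATION_UNKNOWN) {
            return;
        }
        // Snap to the nearest multiple of 90 degrees.
        int rounded = ((orientation + 45) / 90 * 90) % 360;
        faceView.setOrientation(rounded); // hypothetical setter; FaceView would use it in onDraw()
    }
};
mOrientationListener.enable();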

The code of Util.prepareMatrix() is as follows:

package org.yanzi.util;

import android.graphics.Matrix;

public class Util {
    public static void prepareMatrix(Matrix matrix, boolean mirror, int displayOrientation,
            int viewWidth, int viewHeight) {
        // Need mirror for front camera.
        matrix.setScale(mirror ? -1 : 1, 1);
        // This is the value for android.hardware.Camera.setDisplayOrientation.
        matrix.postRotate(displayOrientation);
        // Camera driver coordinates range from (-1000, -1000) to (1000, 1000).
        // UI coordinates range from (0, 0) to (width, height).
        matrix.postScale(viewWidth / 2000f, viewHeight / 2000f);
        matrix.postTranslate(viewWidth / 2f, viewHeight / 2f);
    }
}
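To make the transform concrete, here is a small worked example (the 720x960 view size is just an assumed value; the demo uses whatever getWidth()/getHeight() return):

// Assumed view size 720x960, back camera (no mirror), displayOrientation = 90.
Matrix matrix = new Matrix();
Util.prepareMatrix(matrix, false, 90, 720, 960);

// Driver coordinates run from (-1000, -1000) at the top-left to (1000, 1000) at the
// bottom-right, so this rect covers the quadrant between the center and the top-left corner.
RectF faceRect = new RectF(-500, -500, 0, 0);
matrix.mapRect(faceRect);
// faceRect is now approximately (360, 240, 540, 480) in view coordinates.
Log.i("YanZi", "mapped rect = " + faceRect.toShortString());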

2. How to draw the face rect once it is in actual UI coordinates. Previously I always drew directly with a Paint, but it can also be done with Drawable.draw(canvas). The advantage of the latter is that it paints an image onto the canvas, while Paint is more convenient for basic shapes such as rectangles and circles. The FaceView code above includes both approaches for reference.

3. When to open the camera, and when to start the preview?

This time the two steps are placed in two of the SurfaceView lifecycle callbacks, because running them in a separate thread still caused problems. On some phones the SurfaceView is created slowly, so the SurfaceHolder is not ready yet when the camera has already reached startPreview, which results in a black screen.

@Override
public void surfaceCreated(SurfaceHolder holder) {
    Log.i(TAG, "surfaceCreated...");
    CameraInterface.getInstance().doOpenCamera(null, CameraInfo.CAMERA_FACING_BACK);
}

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    Log.i(TAG, "surfaceChanged...");
    CameraInterface.getInstance().doStartPreview(mSurfaceHolder, 1.333f);
}
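For completeness, the teardown can go into the third lifecycle callback; this is only a minimal sketch of one option (the demo may also release the camera elsewhere, such as the Activity's onPause), reusing the doStopCamera() helper that the camera-switch code later in this article relies on:

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    Log.i(TAG, "surfaceDestroyed...");
    // Stop preview and release the camera when the surface goes away.
    CameraInterface.getInstance().doStopCamera();
}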

4. When to register and start face detection?

Face detection can only be started after the camera has finished startPreview. For now this article simply starts face detection 1.5 s after onCreate; by then the preview has essentially started. Later I plan to pass the Handler into the SurfaceView so that, once the preview has started, it can notify the Activity through the Handler. A rough sketch of that idea follows.
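As a sketch of that planned change (how the Handler gets into the SurfaceView, for example through a setHandler() method, is hypothetical and not in the current demo), surfaceChanged could fire the same message right after the preview starts, replacing the fixed 1.5 s delay:

// Hypothetical improvement inside the SurfaceView: notify the Activity as soon as preview
// has started, instead of relying on a fixed 1.5 s delay in onCreate().
private Handler mMainHandler; // handed in from the Activity, e.g. via a hypothetical setHandler()

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    Log.i(TAG, "surfaceChanged...");
    CameraInterface.getInstance().doStartPreview(mSurfaceHolder, 1.333f);
    if (mMainHandler != null) {
        // Tell the Activity that preview is up so it can call startGoogleFaceDetect().
        mMainHandler.sendEmptyMessage(EventUtil.CAMERA_HAS_STARTED_PREVIEW);
    }
}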

The custom MainHandler:
private class MainHandler extends Handler {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
        case EventUtil.UPDATE_FACE_RECT:
            Face[] faces = (Face[]) msg.obj;
            faceView.setFaces(faces);
            break;
        case EventUtil.CAMERA_HAS_STARTED_PREVIEW:
            startGoogleFaceDetect();
            break;
        }
        super.handleMessage(msg);
    }
}

In onCreate:

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera);
    initUI();
    initViewParams();
    mMainHandler = new MainHandler();
    googleFaceDetect = new GoogleFaceDetect(getApplicationContext(), mMainHandler);
    shutterBtn.setOnClickListener(new BtnListeners());
    switchBtn.setOnClickListener(new BtnListeners());
    mMainHandler.sendEmptyMessageDelayed(EventUtil.CAMERA_HAS_STARTED_PREVIEW, 1500);
}
Two important methods are written here, one to start detection and one to stop it:

private void startGoogleFaceDetect() {
    Camera.Parameters params = CameraInterface.getInstance().getCameraParams();
    if (params.getMaxNumDetectedFaces() > 0) {
        if (faceView != null) {
            faceView.clearFaces();
            faceView.setVisibility(View.VISIBLE);
        }
        CameraInterface.getInstance().getCameraDevice().setFaceDetectionListener(googleFaceDetect);
        CameraInterface.getInstance().getCameraDevice().startFaceDetection();
    }
}

private void stopGoogleFaceDetect() {
    Camera.Parameters params = CameraInterface.getInstance().getCameraParams();
    if (params.getMaxNumDetectedFaces() > 0) {
        CameraInterface.getInstance().getCameraDevice().setFaceDetectionListener(null);
        CameraInterface.getInstance().getCameraDevice().stopFaceDetection();
        faceView.clearFaces();
    }
}

5. How does face detection stay in sync with taking pictures and switching between the front and back cameras?

First, take a look at the official comment on startFaceDetection():

    /**
     * Starts the face detection. This should be called after preview is started.
     * The camera will notify {@link FaceDetectionListener} of the detected
     * faces in the preview frame. The detected faces may be the same as the
     * previous ones. Applications should call {@link #stopFaceDetection} to
     * stop the face detection. This method is supported if {@link
     * Parameters#getMaxNumDetectedFaces()} returns a number larger than 0.
     * If the face detection has started, apps should not call this again.
     *
     * <p>When the face detection is running, {@link Parameters#setWhiteBalance(String)},
     * {@link Parameters#setFocusAreas(List)}, and {@link Parameters#setMeteringAreas(List)}
     * have no effect. The camera uses the detected faces to do auto-white balance,
     * auto exposure, and autofocus.
     *
     * <p>If the apps call {@link #autoFocus(AutoFocusCallback)}, the camera
     * will stop sending face callbacks. The last face callback indicates the
     * areas used to do autofocus. After focus completes, face detection will
     * resume sending face callbacks. If the apps call {@link
     * #cancelAutoFocus()}, the face callbacks will also resume.</p>
     *
     * <p>After calling {@link #takePicture(Camera.ShutterCallback, Camera.PictureCallback,
     * Camera.PictureCallback)} or {@link #stopPreview()}, and then resuming
     * preview with {@link #startPreview()}, the apps should call this method
     * again to resume face detection.</p>
     *
     * @throws IllegalArgumentException if the face detection is unsupported.
     * @throws RuntimeException if the method fails or the face detection is
     *         already running.
     * @see FaceDetectionListener
     * @see #stopFaceDetection()
     * @see Parameters#getMaxNumDetectedFaces()
     */
I am sure everyone can follow this, so I will not translate it sentence by sentence. The key points: after calling takePicture or stopPreview, face detection must be started again to resume; before taking a picture there is no need to stop it manually, and in my tests stopping manually actually causes problems. Also, after takePicture (internally the camera does a stopPreview and startPreview), startFaceDetection() cannot be called immediately; doing so has no effect, so a delay has to be added.

private void takePicture() {
    CameraInterface.getInstance().doTakePicture();
    mMainHandler.sendEmptyMessageDelayed(EventUtil.CAMERA_HAS_STARTED_PREVIEW, 1500);
}

The second issue is camera switching: after the switch the Camera instance changes, so stopFaceDetection() must be called, preceded by setFaceDetectionListener(null) to clear the listener. Once the new camera has been switched in and the preview restarted, face detection is started again.

private void switchCamera() {
    stopGoogleFaceDetect();
    int newId = (CameraInterface.getInstance().getCameraId() + 1) % 2;
    CameraInterface.getInstance().doStopCamera();
    CameraInterface.getInstance().doOpenCamera(null, newId);
    CameraInterface.getInstance().doStartPreview(surfaceView.getSurfaceHolder(), previewRate);
    startGoogleFaceDetect();
}

The rest of the code changes little, so I will not paste it all here; see the source if you want the details. Screenshots below:

The image below shows the preview UI; the shutter and switch icons have been replaced with the native Android 4.4 ones, since the originals were really ugly.

The next image shows the detection result with the camera pointed at a TV show:

And one more, pointed at a picture on a computer screen:

Download link: http://download.csdn.net/detail/yanzi1225627/7674929

This article is original work; when reposting, please credit the author: yanzi1225627

