Frequently Asked Questions in Android Video Recording


This article covers some common problems encountered during Android video recording and playback, including:

    • Video recording process
    • Video preview and SurfaceHolder
    • Video sharpness and file size
    • Video file rotation

1. Video recording process
Taking WeChat as an example, recording starts when the record button is pressed (and held), and ends when the button is released or the maximum recording time is reached. The process can be described by the following figure.

1.1. Start recording
Based on the process above and the project's coding conventions, the initialization can be done in onCreate():

Initialization consists of three parts: view, data, and listeners. When initializing the view, add the camera preview, add the countdown text component, and set the initial visibility of the UI components. When initializing data, read the initial values from the launching Intent. When initializing listeners, attach listeners to the record button, the save/cancel buttons, and the video preview surface.
Once initialization succeeds, wait for the user to press the record button. The record button listener needs to handle recording, timing, and updating the UI components:

if (isRecording) {
  mMediaRecorder.stop();
  releaseMediaRecorder();
  mCamera.lock();
  isRecording = false;
}
if (startRecordVideo()) {
  startTimeVideoRecord();
  isRecording = true;
}

First, check the current recording state: if recording, stop the recording, release the MediaRecorder resources, lock the camera, and clear the recording flag. Then call startRecordVideo(), whose boolean return value indicates whether startup succeeded; on success, start the recording timer and set the recording flag. startRecordVideo() covers MediaRecorder configuration, preparation, and start.

In code:

private boolean startRecordVideo() {
  configureMediaRecorder();
  if (!prepareConfiguredMediaRecorder()) {
    return false;
  }
  mMediaRecorder.start();
  return true;
}

1.2. End recording
According to the flow diagram above, recording ends when the record button is released or the time limit is reached. The end-recording method needs to release the MediaRecorder, start looping playback of the recorded video, update the UI, and so on.

In code:

private void stopRecordVideo() {

    releaseMediaRecorder();
    // Process the recorded video file
    if (currentRecordProgress < MIN_RECORD_TIME) {
      Toast.makeText(VideoInputActivity.this, "Recording time is too short", Toast.LENGTH_SHORT).show();
    } else {
      startVideoPlay();
      isPlaying = true;
      setUIDisplayAfterVideoRecordFinish();
    }
    currentRecordProgress = 0;
    updateProgressBar(currentRecordProgress);
    releaseTimer();
    // Reset state
    isRecording = false;
  }

2. Video preview and SurfaceHolder
The video preview uses SurfaceView. Compared with an ordinary View, SurfaceView draws its content on a separate thread. The advantage is that updating the preview does not block the UI thread; the disadvantage is that it introduces event-synchronization issues, since UI events must be delivered across threads and synchronized.

In this implementation, the custom preview control inherits from SurfaceView. First, SurfaceView's getHolder() method returns a SurfaceHolder, to which a SurfaceHolder.Callback must be added; second, override the surfaceCreated, surfaceChanged, and surfaceDestroyed implementations.
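The three callbacks can be sketched as follows, a minimal outline using the legacy android.hardware.Camera API; the field names (mCamera, TAG) and the release strategy are illustrative assumptions, not taken from the original project:

```java
// Inside the custom SurfaceView subclass implementing SurfaceHolder.Callback.
public void surfaceCreated(SurfaceHolder holder) {
    try {
        // Bind the camera preview to this surface and start drawing frames.
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();
    } catch (IOException e) {
        Log.e(TAG, "Error setting camera preview", e);
    }
}

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    if (holder.getSurface() == null) {
        return; // preview surface does not exist yet
    }
    // Stop, reconfigure (e.g. apply the optimal preview size), then restart.
    try {
        mCamera.stopPreview();
    } catch (Exception e) {
        // Ignore: tried to stop a non-existent preview.
    }
    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();
    } catch (IOException e) {
        Log.e(TAG, "Error restarting camera preview", e);
    }
}

public void surfaceDestroyed(SurfaceHolder holder) {
    // The camera itself is assumed to be released elsewhere
    // (e.g. in the Activity's onPause), so nothing to do here.
}
```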

2.1. Constructor
The constructor initializes the fields and adds the callback described above.

public CameraPreview(Context context, Camera camera) {
    super(context);

    mCamera = camera;
    mSupportedPreviewSizes = mCamera.getParameters().getSupportedPreviewSizes();
    mHolder = getHolder();
    mHolder.addCallback(this);
    mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
  }

A note on mSupportedPreviewSizes: the preview sizes a camera supports are determined by the camera hardware itself, so you must first query which sizes it supports.

2.2. Preview size selection
As Google's official camera sample shows, the criteria for selecting a preview size are: (1) the absolute difference between the aspect ratio of a camera-supported preview size and the SurfaceView's aspect ratio is less than 0.1; and (2) among the sizes passing (1), choose the one whose height is closest to the SurfaceView's height. The official code implementing these two criteria is attached here:

 public Camera.Size getOptimalPreviewSize(List<Camera.Size> sizes, int w, int h) {
    final double ASPECT_TOLERANCE = 0.1;
    double targetRatio = (double) w / h;
    if (sizes == null) {
      return null;
    }

    Camera.Size optimalSize = null;
    double minDiff = Double.MAX_VALUE;
    int targetHeight = h;

    // Try to find a size that matches the aspect ratio and is close in height
    for (Camera.Size size : sizes) {
      double ratio = (double) size.width / size.height;
      if (Math.abs(ratio - targetRatio) > ASPECT_TOLERANCE) continue;
      if (Math.abs(size.height - targetHeight) < minDiff) {
        optimalSize = size;
        minDiff = Math.abs(size.height - targetHeight);
      }
    }

    // Cannot find one matching the aspect ratio, so ignore that requirement
    if (optimalSize == null) {
      minDiff = Double.MAX_VALUE;
      for (Camera.Size size : sizes) {
        if (Math.abs(size.height - targetHeight) < minDiff) {
          optimalSize = size;
          minDiff = Math.abs(size.height - targetHeight);
        }
      }
    }
    return optimalSize;
  }

When loading the preview, you need to consider both the sizes the camera supports (getSupportedPreviewSizes) and the dimensions of the SurfaceView showing the preview (layout_width/layout_height); in the preview phase, the relationship between the two directly affects clarity and image stretching. The camera sizes vary with device hardware, but under the default orientation (landscape), width > height. Taking the HTC 609d as an example, its camera supports more than ten resolutions, such as 1280x720 (16:9), 640x480 (4:3), and 480x320 (3:2), while its screen resolution is 960x540 (16:9). Two conclusions follow easily: (1) when the camera preview size is smaller than the SurfaceView size, the preview is not sharp; (2) when the camera preview's aspect ratio differs from the SurfaceView's, the preview picture is stretched.
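These two conclusions can be checked numerically. A small sketch using the HTC 609d figures above; the helper class and method names are hypothetical, and the 0.1 tolerance is the same one the official sample uses:

```java
public class PreviewFitCheck {
    static final double ASPECT_TOLERANCE = 0.1;

    // True if the preview will look stretched:
    // aspect ratios differ by more than the tolerance.
    static boolean willStretch(int previewW, int previewH, int surfaceW, int surfaceH) {
        double previewRatio = (double) previewW / previewH;
        double surfaceRatio = (double) surfaceW / surfaceH;
        return Math.abs(previewRatio - surfaceRatio) > ASPECT_TOLERANCE;
    }

    // True if the preview will look soft:
    // fewer preview pixels than surface pixels on either axis.
    static boolean willBeBlurry(int previewW, int previewH, int surfaceW, int surfaceH) {
        return previewW < surfaceW || previewH < surfaceH;
    }

    public static void main(String[] args) {
        // HTC 609d screen: 960x540 (16:9)
        System.out.println(willStretch(1280, 720, 960, 540)); // 16:9 vs 16:9 -> false
        System.out.println(willStretch(640, 480, 960, 540));  // 4:3 vs 16:9 -> true
        System.out.println(willBeBlurry(640, 480, 960, 540)); // 640 < 960 -> true
    }
}
```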

The getOptimalPreviewSize code works fine when the phone is locked to landscape. When the phone is in portrait, to obtain the best preview size you need to pass the SurfaceView's dimensions in landscape order when calling the method:

if (mSupportedPreviewSizes != null) {
  mPreviewSize = getOptimalPreviewSize(mSupportedPreviewSizes,
            Math.max(width, height), Math.min(width, height));
}

After obtaining the preview size that matches the current SurfaceView, set it through Camera.Parameters:

Camera.Parameters mParams = mCamera.getParameters();
mParams.setPreviewSize(mPreviewSize.width, mPreviewSize.height);
mCamera.setDisplayOrientation(90);

List<String> focusModes = mParams.getSupportedFocusModes();
if (focusModes.contains("continuous-video")) {
  mParams.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
}
mCamera.setParameters(mParams);

3. Video sharpness and file size
Section 1 introduced startRecordVideo(), which configures the MediaRecorder, prepares it, and starts it. Configuring the MediaRecorder is the heart of video recording: you need to understand what each configuration parameter does and tune it flexibly for your business scenario. Here is a workable configuration, adapted from Google's official sample, followed by an explanation.

private void configureMediaRecorder() {
    // BEGIN_INCLUDE(configure_media_recorder)
    mMediaRecorder = new MediaRecorder();

    // Step 1: unlock and set camera to MediaRecorder
    mCamera.unlock();
    mMediaRecorder.setCamera(mCamera);
    mMediaRecorder.setOrientationHint(90);

    // Step 2: set sources
    mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

    // Step 3: set output format and encoding parameters
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    /* Fixed video size: 640 * 480 */
    mMediaRecorder.setVideoSize(640, 480);
    /* Encoding bit rate: 1 * 1024 * 1024 */
    mMediaRecorder.setVideoEncodingBitRate(1 * 1024 * 1024);
    mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);

    // Step 4: set output file
    mMediaRecorder.setMaxFileSize(maxFileSizeInBytes);
    mMediaRecorder.setOutputFile(videoFilePath);
    // END_INCLUDE(configure_media_recorder)

    // Set MediaRecorder error listener
    mMediaRecorder.setOnErrorListener(this);
 }

Step 1:
1. setCamera allows quick switching between preview and recording without reloading the camera object. In some stock camera apps on Android phones you can notice a brief stutter when switching between preview and recording.
2. setOrientationHint(90) is used when recording in portrait orientation: it marks the video file to be rotated 90 degrees clockwise during playback; without it, the picture appears rotated 90 degrees when played. More importantly, even with this hint set, the picture will still appear rotated in some players (for example, after importing the video from the phone to a PC, or embedding it in an H5 page). Why? Note the documentation for setOrientationHint: some video players may choose to ignore the composition matrix in a video during playback. So how do you get the correct orientation in every player? Hold that thought; Section 4 addresses it.
Step 2:
1. setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION): compared with MIC, VOICE_RECOGNITION applies tuning oriented toward speech recognition, provided the system supports it.
2. setVideoSource is naturally VideoSource.CAMERA. Note that both sources must be set before the encoders are configured.
Step 3:
1. setOutputFormat must be called after Step 2 and before prepare(). OutputFormat.MPEG_4 is used here.
2. setVideoSize needs to weigh several factors, mainly three: the recording sizes MediaRecorder supports, the resulting file size, and compatibility across Android models. 640x480 is used here (WeChat's small video uses 320x240); files come out between 500 and 1000 KB, and more than 99% of models on the market support this recording size.
3. setVideoEncodingBitRate relates to video sharpness; setting it is a trade-off between clarity and file size. Too high and the file is hard to transmit; too low and the video is blurry and hard to recognize. Adjust it flexibly for the actual business scenario.
4. setVideoEncoder uses H264. Among MPEG4, H263, and H264, H264 achieves the highest compression in practice and is recommended.
5. setAudioEncoder uses AudioEncoder.AAC, chosen mainly for universality and compatibility.
Step 4:
setMaxFileSize limits the size of the recording file and, of course, also caps the maximum recording time.
setOutputFile specifies the output video path.
setOnErrorListener specifies the error listener.
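The trade-off behind setVideoEncodingBitRate, and the way setMaxFileSize indirectly caps recording time, can both be estimated with simple arithmetic: a stream encoded at B bits per second produces about B*t/8 bytes in t seconds. A back-of-the-envelope sketch; the class and method names are hypothetical, the 64 kbit/s audio rate is an assumption, and real files also carry container overhead:

```java
public class RecordingBudget {

    // Approximate output size in bytes for the given bit rates and duration.
    static long estimatedSizeBytes(int videoBitRate, int audioBitRate, int seconds) {
        return (long) (videoBitRate + audioBitRate) * seconds / 8;
    }

    // Approximate seconds of recording that fit within maxFileSizeInBytes.
    static int estimatedMaxSeconds(long maxFileSizeInBytes, int videoBitRate, int audioBitRate) {
        return (int) (maxFileSizeInBytes * 8 / (videoBitRate + audioBitRate));
    }

    public static void main(String[] args) {
        int videoBitRate = 1 * 1024 * 1024; // as in the configuration above
        int audioBitRate = 64 * 1024;       // assumed AAC audio bit rate
        // A 10-second clip: about 1.3 MB
        System.out.println(estimatedSizeBytes(videoBitRate, audioBitRate, 10));
        // Seconds until an 8 MB setMaxFileSize limit is hit: about 60
        System.out.println(estimatedMaxSeconds(8L * 1024 * 1024, videoBitRate, audioBitRate));
    }
}
```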
After completing the configuration above, prepare the MediaRecorder, and start recording once preparation succeeds.

private boolean prepareConfiguredMediaRecorder() {
    // Step 5: prepare configured MediaRecorder
    try {
      mMediaRecorder.prepare();
    } catch (Exception e) {
      releaseMediaRecorder();
      return false;
    }
    return true;
  }

4. Video file rotation
Step 1 in Section 3 mentioned video file rotation. Because some players ignore the orientation parameter recorded with the video, you can try rotating the video itself through a third-party library such as OpenCV or FastCV: in the camera object's Camera.PreviewCallback, each frame's byte[] data is intercepted, processed, and then output. This approach must consider processing efficiency, so the NDK is generally used and the key processing is done in C/C++. The logic of the FastCV processing method is pasted below.

public void onPreviewFrame(byte[] data, Camera c) {
     // Increment FPS counter for camera.
     Util.cameraFrameTick();

     // Perform processing on the camera preview data.
     update(data, mDesiredWidth, mDesiredHeight);

     // Simple IIR filter in time.
     mProcessTime = Util.getFastCVProcessTime();

     if (c != null)
     {
       // Using the buffer approach requires addCallbackBuffer on each callback frame.
       c.addCallbackBuffer(mPreviewBuffer);
       c.setPreviewCallbackWithBuffer(this);
     }

     // Mark dirty for render.
     requestRender();
   }

Here update is a native method, implemented in the corresponding JNI file, which calls the corresponding API in libfastcv.a. This touches the basics of NDK programming: (1) set up the development environment; (2) write the Java and C/C++ code; (3) compile the C/C++ files into a .so library; (4) rebuild the project and generate the APK. Since NDK is not the focus of this chapter, it is not expanded further here.

Besides the method above, the author explored another line of thinking. Processing each frame's image data as it arrives, as above, can be understood as online processing; processing the file after recording completes can be understood as offline processing. The third-party library mp4parser supports video manipulation (such as splitting) on Android, and can rotate video. As for how well it works, interested readers can try it for themselves; consider it a cliffhanger.

private boolean rotateVideoFileWithClockwiseDegree(String sourceFilePath, int degree) {
    if (!isFileAndDegreeValid(sourceFilePath, degree)) {
      return false;
    }
    rotateVideoFile(sourceFilePath, degree);
    return true;
  }

After the input parameters are validated, the rotation is performed.

private boolean isFileAndDegreeValid(String sourceFilePath, int degree) {
    if (sourceFilePath == null || !sourceFilePath.endsWith(".mp4")
                 || !new File(sourceFilePath).exists()) {
      return false;
    }
    if (degree == 0 || degree % 90 != 0) {
      return false;
    }
    return true;
  }

private void rotateVideoFile(String sourceFilePath, int degree) {
    List<TrackBox> trackBoxes = getTrackBoxesOfVideoFileByPath(sourceFilePath);
    Movie rotatedMovie = getRotatedMovieOfTrackBox(trackBoxes);
    writeMovieToModifiedFile(rotatedMovie);
  }

Rotating video through mp4parser involves three main steps:

(1) obtain the TrackBoxes of the video file;

(2) build the rotated Movie object from the TrackBoxes;

(3) write the Movie object to a file.

private List<TrackBox> getTrackBoxesOfVideoFileByPath(String sourceFilePath) {
    IsoFile isoFile = null;
    List<TrackBox> trackBoxes = null;
    try {
      isoFile = new IsoFile(sourceFilePath);
      trackBoxes = isoFile.getMovieBox().getBoxes(TrackBox.class);
      isoFile.close();
    } catch (IOException e) {
      e.printStackTrace();
    }
    return trackBoxes;
  }

private Movie getRotatedMovieOfTrackBox(List<TrackBox> trackBoxes) {
    Movie rotatedMovie = new Movie();
    // Rotate each track
    for (TrackBox trackBox : trackBoxes) {
      trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_90);
      rotatedMovie.addTrack(new Mp4TrackImpl(trackBox));
    }
    return rotatedMovie;
  }

private void writeMovieToModifiedFile(Movie movie) {
    Container container = new DefaultMp4Builder().build(movie);
    File modifiedVideoFile = new File(videoFilePath.replace(".mp4", "_mod.mp4"));
    FileOutputStream fos;
    try {
      fos = new FileOutputStream(modifiedVideoFile);
      WritableByteChannel bb = Channels.newChannel(fos);
      container.writeContainer(bb);
      // Close file stream
      fos.close();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
  

This article has walked through common problems in Android video recording; I hope it helps your learning.
