MediaMuxer and MediaCodec use cases in Android: audio + video


MediaMuxer and MediaCodec are relatively young members of the Android multimedia stack: they were only introduced in Jelly Bean 4.1 (MediaCodec) and 4.3 (MediaMuxer). MediaMuxer mixes audio and video to generate a multimedia file. Its drawback, at this point, is that it supports only one audio track plus one video track, and only MP4 output; but since it is new, later versions should improve considerably. MediaCodec encodes and decodes audio and video, and has one particularly powerful trick: it can encode the contents of a Surface directly, which is how the screen-recording feature in KitKat 4.4 is implemented.

Note the relationships and differences between these two and some other multimedia-related classes. MediaExtractor demuxes audio and video out of a container file; MediaMuxer is the reverse process. MediaFormat describes the format of multimedia data. MediaRecorder records and compression-encodes video, producing encoded files such as MP4 or 3GPP; it is mainly used to record the camera preview. MediaPlayer plays compressed, encoded audio and video files. AudioRecord records raw PCM data, and AudioTrack plays PCM data back. PCM is the raw audio sample data; it can be played with VLC, but since raw samples have no file header, the channel count, sample rate, and so on must be supplied by hand, e.g.:
vlc --demux=rawaud --rawaud-channels 2 --rawaud-samplerate 44100 audio.pcm
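Because raw PCM carries no header, the byte rate and duration follow purely from the parameters you pass; the same arithmetic matters later when choosing buffer sizes. A minimal illustration (class and method names are mine, not from any Android API):

```java
class PcmMath {
    // Bytes per second of raw PCM = sampleRate * channels * bytesPerSample.
    static long byteRate(int sampleRate, int channels, int bytesPerSample) {
        return (long) sampleRate * channels * bytesPerSample;
    }

    // Duration in milliseconds of a raw PCM blob of the given size.
    static long durationMs(long sizeBytes, int sampleRate, int channels, int bytesPerSample) {
        return sizeBytes * 1000 / byteRate(sampleRate, channels, bytesPerSample);
    }

    public static void main(String[] args) {
        // 44.1 kHz stereo 16-bit, as in the VLC command above.
        System.out.println(byteRate(44100, 2, 2));              // 176400 bytes/s
        System.out.println(durationMs(1_764_000, 44100, 2, 2)); // 10000 ms
    }
}
```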

Back to MediaMuxer and MediaCodec. Their reference documentation is at http://developer.android.com/reference/android/media/MediaMuxer.html and http://developer.android.com/reference/android/media/MediaCodec.html, which include usage outlines. This combination enables many features: editing audio/video files (together with MediaExtractor), drawing on a Surface with OpenGL and saving the result as an MP4 file, screen recording, and a camera-app-style video recording function (although MediaRecorder is better suited to that last one).

Here is an example of an admittedly pointless feature: draw a video onto a Surface, record audio from the microphone, then mux the audio and video into an MP4 file. The program itself is useless, but it demonstrates the basic usage of MediaMuxer and MediaCodec. It is based mainly on two test programs: SoftInputSurfaceActivity in Grafika, which generates video, and HWEncoderExperiments, which generates audio; here they are combined to produce both. The basic framework and flow are as follows:


First comes the recording thread, based mainly on HWEncoderExperiments. It receives sampled data from the microphone through the AudioRecord class and then hands it to the encoder:

AudioRecord audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        SAMPLE_RATE, CHANNEL_CONFIG, AUDIO_FORMAT, BUFFER_SIZE);
...
audioRecorder.startRecording();
while (isRecording) {
    byte[] thisBuffer = new byte[FRAME_BUFFER_SIZE];
    readResult = audioRecorder.read(thisBuffer, 0, FRAME_BUFFER_SIZE);  // read raw audio data
    // ...
    presentationTimeStamp = System.nanoTime() / 1000;  // nanoseconds -> microseconds
    audioEncoder.offerAudioEncoder(thisBuffer.clone(), presentationTimeStamp);  // feed to audio encoder
}
It is also possible to register an AudioRecord callback (via setRecordPositionUpdateListener()) to trigger the reading of audio data. offerAudioEncoder() then places the audio sample data into a MediaCodec input buffer for encoding:

ByteBuffer[] inputBuffers = mAudioEncoder.getInputBuffers();
int inputBufferIndex = mAudioEncoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0) {
    ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
    inputBuffer.clear();
    inputBuffer.put(thisBuffer);
    ...
    mAudioEncoder.queueInputBuffer(inputBufferIndex, 0, thisBuffer.length, presentationTimeStamp, 0);
}
Next, following Grafika's SoftInputSurfaceActivity with audio processing added, the main loop breaks down broadly into four parts:

try {
    // Part 1
    prepareEncoder(outputFile);
    ...
    // Part 2
    for (int i = 0; i < NUM_FRAMES; i++) {
        generateFrame(i);
        drainVideoEncoder(false);
        drainAudioEncoder(false);
    }
    // Part 3
    ...
    drainVideoEncoder(true);
    drainAudioEncoder(true);
} catch (IOException ioe) {
    throw new RuntimeException(ioe);
} finally {
    // Part 4
    releaseEncoder();
}
Part 1 is the preparation work: in addition to the video MediaCodec, the audio MediaCodec is initialized here as well:

MediaFormat audioFormat = new MediaFormat();
audioFormat.setString(MediaFormat.KEY_MIME, AUDIO_MIME_TYPE);  // the format needs a MIME type before configure()
audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
mAudioEncoder = MediaCodec.createEncoderByType(AUDIO_MIME_TYPE);
mAudioEncoder.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mAudioEncoder.start();
Part 2 is the main loop. The app draws directly onto the Surface; because this Surface was obtained from MediaCodec with createInputSurface(), there is no need to call queueInputBuffer() explicitly when drawing is finished: the frames go to the encoder automatically. drainVideoEncoder() and drainAudioEncoder() pull the encoded video and audio out of the output buffers (via dequeueOutputBuffer()), and MediaMuxer then mixes them (via writeSampleData()). Note that audio and video are synchronized through the PTS (presentation time stamp, which determines when a frame of audio or video is displayed or played). The audio timestamp must be captured when AudioRecord reads data from the mic and placed into the corresponding BufferInfo; for video, since it is drawn on a Surface, the BufferInfo returned by dequeueOutputBuffer() can be used directly. Finally, the encoded data is handed to MediaMuxer for multiplexing.
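One common way to keep the audio PTS in consistent microsecond units (and immune to jitter in the read loop) is to derive each buffer's timestamp from the running sample count instead of the wall clock. This is a sketch under my own naming, not code from Grafika or HWEncoderExperiments:

```java
class AudioPtsCounter {
    private final int sampleRate;
    private long totalSamples; // samples handed to the encoder so far

    AudioPtsCounter(int sampleRate) {
        this.sampleRate = sampleRate;
    }

    // Returns the PTS (in microseconds, matching the System.nanoTime()/1000
    // unit used on the video side) for a buffer of `samples` new samples,
    // then advances the running count.
    long nextPtsUs(int samples) {
        long ptsUs = totalSamples * 1_000_000L / sampleRate;
        totalSamples += samples;
        return ptsUs;
    }

    public static void main(String[] args) {
        AudioPtsCounter pts = new AudioPtsCounter(44100);
        System.out.println(pts.nextPtsUs(44100)); // 0: first buffer starts at t = 0
        System.out.println(pts.nextPtsUs(44100)); // 1000000: exactly one second later
    }
}
```

With 44100 Hz mono audio, the first buffer gets PTS 0 and the next buffer gets PTS 1,000,000, regardless of how late the read loop actually ran.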

Note that the muxer must wait until both the audio track and the video track have been added before it can start. MediaCodec returns an INFO_OUTPUT_FORMAT_CHANGED result on an early call to dequeueOutputBuffer(); at that point we fetch the MediaCodec's output format and register it with MediaMuxer via addTrack(). Then we check whether both the audio track and the video track are ready, and if so, start the muxer.
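The "start only after every track is registered" rule can be isolated into a small guard; this is an illustrative structure of my own, not Grafika's code:

```java
class MuxerStartGate {
    private final int totalTracks;
    private int tracksAdded;
    private boolean started;

    MuxerStartGate(int totalTracks) {
        this.totalTracks = totalTracks;
    }

    // Call once for each INFO_OUTPUT_FORMAT_CHANGED / addTrack(); returns true
    // exactly once, at the moment it becomes safe to call MediaMuxer.start().
    boolean trackAdded() {
        tracksAdded++;
        if (!started && tracksAdded == totalTracks) {
            started = true;
            return true;
        }
        return false;
    }

    boolean isStarted() {
        return started;
    }

    public static void main(String[] args) {
        MuxerStartGate gate = new MuxerStartGate(2);
        System.out.println(gate.trackAdded()); // false: only video added so far
        System.out.println(gate.trackAdded()); // true: audio added, start the muxer now
    }
}
```

Until isStarted() is true, the drain loops must not call writeSampleData(): MediaMuxer throws IllegalStateException if it receives samples before start().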

In summary, the main logic of drainVideoEncoder() is roughly as follows; drainAudioEncoder() is similar, just with the audio MediaCodec in place of the video one.
while (true) {
    int encoderStatus = mVideoEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
    if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
        ...
    } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        encoderOutputBuffers = mVideoEncoder.getOutputBuffers();
    } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat newFormat = mVideoEncoder.getOutputFormat();
        mVideoTrackIndex = mMuxer.addTrack(newFormat);
        mNumTracksAdded++;
        if (mNumTracksAdded == TOTAL_NUM_TRACKS) {
            mMuxer.start();
        }
    } else if (encoderStatus < 0) {
        ...
    } else {
        ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
        ...
        if (mBufferInfo.size != 0) {
            mMuxer.writeSampleData(mVideoTrackIndex, encodedData, mBufferInfo);
        }
        mVideoEncoder.releaseOutputBuffer(encoderStatus, false);
        if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            break;
        }
    }
}
Part 3 ends the recording and sends the EOS flag, so that drainVideoEncoder() and drainAudioEncoder() can exit their inner loops when they see EOS. Part 4 is the cleanup work: release the audio and video MediaCodec objects, the MediaCodec input Surface, and the MediaMuxer.

A few final notes:
1. Add the recording permission to AndroidManifest.xml, otherwise creating the AudioRecord object will fail:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
2. Audio and video are synchronized through PTS, so the two must use consistent time units.
3. MediaMuxer must be used in the order: constructor, addTrack(), start(), writeSampleData(), stop(). When there are both audio and video tracks, the writeSampleData() calls for the two tracks are interleaved between start() and stop().

Code references:
Grafika: https://github.com/google/grafika
bigflake: http://bigflake.com/mediacodec/
HWEncoderExperiments: https://github.com/onlyinamerica/hwencoderexperiments/tree/audioonly/hwencoderexperiments/src/main/java/net/openwatch/hwencoderexperiments
Android CTS media tests: http://androidxref.com/4.4.2_r2/xref/cts/tests/tests/media/src/android/media/cts/
http://androidxref.com/4.4.2_r2/xref/pdk/apps/TestingCamera2/src/com/android/testingcamera2/CameraRecordingStream.java
