The project for this article (named "audio-visual"; note: some players cannot play the output, and the duration is not displayed) lives at the address below. Stars are welcome:
https://github.com/979451341/Audio-and-video-learning-materials
1. MediaMuxer overview
MediaMuxer packages an encoded video stream and an encoded audio stream into an MP4 container; in plain terms, it merges audio and video into a single MP4 file. MediaMuxer supports only one video track and one audio track, so if you have multiple audio tracks you must mix them down into one audio track before using MediaMuxer to wrap them in the MP4 container.
MediaMuxer muxer = new MediaMuxer("temp.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
2. Video recording process
First, a diagram of the overall flow, because trying to keep all of this straight in prose alone is dizzying.
The data collected by the camera is first displayed in the SurfaceView:
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);

@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
    Log.w("MainActivity", "enter surfaceCreated method");
    // Current setup: once the surface is created, open the camera and start the preview
    camera = Camera.open();
    try {
        camera.setPreviewDisplay(surfaceHolder);
        camera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Then video recording starts, and two threads are spawned to handle the audio and video data separately:
private void initMuxer() {
    muxerDatas = new Vector<>();
    fileSwapHelper = new FileUtils();
    audioThread = new AudioEncoderThread(new WeakReference<MediaMuxerThread>(this));
    videoThread = new VideoEncoderThread(1920, 1080, new WeakReference<MediaMuxerThread>(this));
    audioThread.start();
    videoThread.start();
    try {
        readyStart();
    } catch (IOException e) {
        Log.e(TAG, "initMuxer exception: " + e.toString());
    }
}
Both tracks are added to the MediaMuxer, and the encoded samples are then written to it:
mediaMuxer.writeSampleData(track, data.byteBuf, data.bufferInfo);
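The hand-off between the encoder threads and the muxer thread above is a classic producer/consumer queue. Here is a minimal, pure-Java sketch of that pattern, using a `BlockingQueue` in place of the `Vector` from the source and plain byte arrays standing in for `MuxerData`; the class and method names are illustrative, not from the project:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MuxerQueueDemo {
    // Shared queue standing in for the muxerDatas Vector in the source.
    static final BlockingQueue<byte[]> muxerDatas = new ArrayBlockingQueue<>(16);

    // Drains 'count' frames, as the muxer thread would before each writeSampleData call.
    static int drain(int count) throws InterruptedException {
        int drained = 0;
        while (drained < count) {
            byte[] frame = muxerDatas.take(); // blocks until an encoder thread offers data
            drained += 1;                     // real code: mediaMuxer.writeSampleData(track, ...)
        }
        return drained;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread encoder = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                try {
                    muxerDatas.put(new byte[]{(byte) i}); // stand-in for one encoded sample
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        encoder.start();
        System.out.println("drained " + drain(3) + " frames"); // prints: drained 3 frames
        encoder.join();
    }
}
```

A `BlockingQueue` avoids the hand-rolled wait/notify coordination that the project's `Vector`-based version needs, which is why most of its code is bookkeeping.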
Let's take a look at how the video data is handled.
MediaCodec initialization and configuration:
mediaFormat = MediaFormat.createVideoFormat(MIME_TYPE, this.mWidth, this.mHeight);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
Start MediaCodec:
mMediaCodec = MediaCodec.createByCodecName(mCodecInfo.getName());
mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
Then the video data from the camera preview behind the SurfaceView is fed in:
@Override
public void onPreviewFrame(byte[] bytes, Camera camera) {
    MediaMuxerThread.addVideoFrameData(bytes);
}
This data is passed through MediaMuxerThread on to the video encoder thread:
public void add(byte[] data) {
    if (frameBytes != null && isMuxerReady) {
        frameBytes.add(data);
    }
}
Then the data is taken out of frameBytes in a loop:
if (!frameBytes.isEmpty()) {
    byte[] bytes = this.frameBytes.remove(0);
    Log.e("ang-->", "encoding video data: " + bytes.length);
    try {
        encodeFrame(bytes);
    } catch (Exception e) {
        Log.e(TAG, "encoding video data failed");
        e.printStackTrace();
    }
}
The removed data is converted before encoding; in other words, mFrameData holds the frame that actually gets encoded:
// Convert the raw NV21 data to the semi-planar YUV420 layout the encoder expects
NV21toI420SemiPlanar(input, mFrameData, this.mWidth, this.mHeight);

private static void NV21toI420SemiPlanar(byte[] nv21bytes, byte[] i420bytes, int width, int height) {
    System.arraycopy(nv21bytes, 0, i420bytes, 0, width * height);
    for (int i = width * height; i < nv21bytes.length; i += 2) {
        i420bytes[i] = nv21bytes[i + 1];
        i420bytes[i + 1] = nv21bytes[i];
    }
}
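Despite the "I420" in its name, the method above actually produces NV12 (the Y plane followed by interleaved U/V pairs), which matches the COLOR_FormatYUV420SemiPlanar configured earlier: it copies the Y plane and then swaps each V/U byte pair of NV21 into U/V order. A tiny self-contained sketch of the same swap on a hypothetical 2x2 frame (class and method names here are illustrative):

```java
import java.util.Arrays;

public class Nv21Demo {
    // Same logic as NV21toI420SemiPlanar in the source: copy the Y plane,
    // then swap each interleaved V/U pair into U/V order (NV21 -> NV12).
    static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
        byte[] out = new byte[nv21.length];
        System.arraycopy(nv21, 0, out, 0, width * height);      // Y plane unchanged
        for (int i = width * height; i < nv21.length; i += 2) {
            out[i] = nv21[i + 1];                                // U comes first in NV12
            out[i + 1] = nv21[i];                                // then V
        }
        return out;
    }

    public static void main(String[] args) {
        // A hypothetical 2x2 frame: 4 Y bytes, then one V/U pair (V=80, U=90).
        byte[] nv21 = {1, 2, 3, 4, 80, 90};
        System.out.println(Arrays.toString(nv21ToNv12(nv21, 2, 2))); // [1, 2, 3, 4, 90, 80]
    }
}
```

This only reorders chroma bytes; no color conversion happens, which is why the loop is so cheap compared with a true NV21-to-planar-I420 conversion.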
MediaCodec then reads its input from mFrameData:
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, mFrameData.length, System.nanoTime() / 1000, 0);
And the encoded output is taken out and handed to the muxer:
mediaMuxer.addMuxerData(new MediaMuxerThread.MuxerData(MediaMuxerThread.TRACK_VIDEO, outputBuffer, mBufferInfo));
The code may look sprawling, but the vast majority of it is coordination, e.g. deciding whether recording is still in progress. The actual recording boils down to two calls: MediaCodec takes in raw frames via queueInputBuffer, and the encoded output retrieved via dequeueOutputBuffer is handed to MediaMuxer. The audio codec follows the same routine.
The source is at the address in the article header; study it as much as you like. The code does have problems: the duration is not displayed and some players cannot play the file, but playback on a phone should be fine.
Android Audio and Video In-Depth, Part 4: Recording video to MP4 (with source download)