"Share" performance "1" for Android video recording editing effect


Objective

I have only been working with Android for a short time, so if you have a better solution, feel free to point it out.

As everyone knows, Android development is split into a Java layer and a C++ layer, namely the Android SDK and the Android NDK. Ordinary product features only involve the Java layer; the NDK is brought in only for special needs. But what about audio and video development? Before MediaCodec, the Java-layer API support for audio and video stayed at the level of very abstract APIs: they expose only simple parameters and methods, give little control over behavior, no access to intermediate data, no way to build complex features, let alone extend them. Even after MediaCodec shipped, the problem was not fully solved, for two reasons: first, MediaCodec appeared in a fairly recent Android version, so using it is incompatible with older devices and system versions; second, because Android is open source and heavily customized, each vendor implements MediaCodec differently, so the same code behaves one way on device A and another way on device B.

So developers turn to the NDK, but Google provides no audio/video processing APIs there (such as parsing or generating container files, or encoding and decoding frames), which pushes everyone toward open source C/C++ frameworks, the most famous being FFmpeg, x264, mp3lame, and FAAC. Then the next problem appears: FFmpeg originally supported x86 best, while ARM and MIPS support was weaker (in my experience things improved after FFmpeg 2.0). That leaves software encoding and decoding, and anyone whose pipeline has failed to keep up knows what that feels like. For example, to record 640x480 video with software encoding for both audio and video, pure software x264 plus a phone CPU can easily take 50 or even 100 milliseconds per frame: in short, slow, and you still have to budget time for audio compression. To record at 25 fps, encoding a frame cannot take more than 40 milliseconds or the pipeline falls behind; once you account for the rest of the application's work, that really needs to come down to about 30 milliseconds, and then with multi-threaded asynchronous encoding you can just about manage to generate the video file while recording.

It is precisely because of this inconvenience that, after a few months of research, I arrived at a still imperfect solution to share for reference. This article walks through the technical implementation of each part, and the project source code is attached at the end. By the way, my Android development experience is basically 1 (not 0, because I have written HelloWorld before), but I have a good grasp of C/C++ and Java and have built iOS projects in Objective-C, so I don't think Android is all that different: different language, different platform, different APIs, different system mechanisms, but otherwise much the same.


What APIs are available in the NDK?

Start by opening the NDK's include directory and seeing what interfaces it offers. Google is not completely heartless: besides the Linux system-level APIs, there are in fact a few audio- and video-related APIs.

OpenSL ES lets you drive the audio capture and playback devices directly from the C++ layer to record and play sound; it has been supported since API level 9.
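
As a rough illustration of capturing audio from the native layer, here is a minimal OpenSL ES sketch (not taken from the project source; the buffer size, sample format, and function names are my own assumptions). It creates the engine, an audio recorder fed by an Android simple buffer queue, and starts recording; real code needs error handling and the RECORD_AUDIO permission.

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    static SLObjectItf engineObj, recorderObj;
    static SLEngineItf engine;
    static SLRecordItf recorder;
    static SLAndroidSimpleBufferQueueItf recorderQueue;
    static short pcmBuffer[2048];  // one capture buffer (size is an arbitrary choice)

    // Called by OpenSL each time pcmBuffer has been filled with captured PCM.
    static void onPcmCaptured(SLAndroidSimpleBufferQueueItf bq, void* /*ctx*/) {
        // hand pcmBuffer to the writer thread here, then re-enqueue the buffer
        (*bq)->Enqueue(bq, pcmBuffer, sizeof(pcmBuffer));
    }

    bool startNativeRecording() {
        slCreateEngine(&engineObj, 0, nullptr, 0, nullptr, nullptr);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

        // Source: the default audio input device.
        SLDataLocator_IODevice dev = {SL_DATALOCATOR_IODEVICE, SL_IODEVICE_AUDIOINPUT,
                                      SL_DEFAULTDEVICEID_AUDIOINPUT, nullptr};
        SLDataSource src = {&dev, nullptr};

        // Sink: a buffer queue delivering 44.1 kHz mono 16-bit PCM to our callback.
        SLDataLocator_AndroidSimpleBufferQueue bq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
        SLDataFormat_PCM pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_44_1,
                                SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
        SLDataSink sink = {&bq, &pcm};

        const SLInterfaceID ids[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
        const SLboolean req[]     = {SL_BOOLEAN_TRUE};
        if ((*engine)->CreateAudioRecorder(engine, &recorderObj, &src, &sink,
                                           1, ids, req) != SL_RESULT_SUCCESS)
            return false;
        (*recorderObj)->Realize(recorderObj, SL_BOOLEAN_FALSE);
        (*recorderObj)->GetInterface(recorderObj, SL_IID_RECORD, &recorder);
        (*recorderObj)->GetInterface(recorderObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &recorderQueue);

        (*recorderQueue)->RegisterCallback(recorderQueue, onPcmCaptured, nullptr);
        (*recorderQueue)->Enqueue(recorderQueue, pcmBuffer, sizeof(pcmBuffer));
        (*recorder)->SetRecordState(recorder, SL_RECORDSTATE_RECORDING);
        return true;
    }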

EGL lets you create an OpenGL rendering environment at the C++ layer for rendering video frames, and it can also be used for image processing such as cropping, stretching and rotating, and even more advanced filter effects. It has to be said that creating your own OpenGL rendering environment in C++ gives you several orders of magnitude more flexibility, controllability, and extensibility than the Java layer's GLSurfaceView. EGL has likewise been supported since API level 9.
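
A minimal sketch of the EGL setup described here, assuming the ANativeWindow has already been obtained from the Java Surface (for example via ANativeWindow_fromSurface); the function name and the RGBA8888 config choice are mine, not the project's.

    #include <EGL/egl.h>
    #include <android/native_window.h>

    // Creates an OpenGL ES 2.0 context on the given native window and makes it
    // current on the calling thread. Returns false on any failure.
    bool initEgl(ANativeWindow* window, EGLDisplay* outDisplay,
                 EGLSurface* outSurface, EGLContext* outContext) {
        EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        if (display == EGL_NO_DISPLAY || !eglInitialize(display, nullptr, nullptr))
            return false;

        // Ask for a window-renderable RGBA8888 config that supports ES 2.0.
        const EGLint configAttribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
            EGL_NONE
        };
        EGLConfig config;
        EGLint numConfigs = 0;
        if (!eglChooseConfig(display, configAttribs, &config, 1, &numConfigs) || numConfigs < 1)
            return false;

        EGLSurface surface = eglCreateWindowSurface(display, config, window, nullptr);
        const EGLint contextAttribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
        EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
        if (surface == EGL_NO_SURFACE || context == EGL_NO_CONTEXT)
            return false;

        // From here on, all GL calls must stay on this thread;
        // call eglSwapBuffers(display, surface) after drawing each frame.
        if (!eglMakeCurrent(display, surface, surface, context))
            return false;

        *outDisplay = display; *outSurface = surface; *outContext = context;
        return true;
    }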

OpenGL ES: the NDK exposes OpenGL interfaces at the Java layer and, at the NDK layer, more native OpenGL headers; to use GLSL you need OpenGL ES 2.0 or later. Fortunately the NDK supported it very early, with OpenGL ES 2.0 available from API level 5. Fortunately!
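
Since GLSL only comes with ES 2.0+, every shader goes through the same compile-and-check pattern; a small helper like this (my own sketch, not from the project) is typical. The same pattern with glCreateProgram/glLinkProgram/glGetProgramiv applies to linking.

    #include <GLES2/gl2.h>
    #include <android/log.h>

    // Compiles a single vertex or fragment shader and logs errors; returns 0 on failure.
    GLuint compileShader(GLenum type, const char* source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, nullptr);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
            __android_log_print(ANDROID_LOG_ERROR, "GL", "shader compile failed: %s", log);
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }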

OpenMAX AL is a library I ran into while surveying the headers, and it turned out to be rather annoying. Its interface definitions do include, for example, a fairly abstract interface for playing video and one for opening the camera. I would not use the video playback (the decoding and rendering are implemented by myself later), but when I saw the camera interface my heart leapt, only for my MX3 to report that the interface is not implemented. The upshot is that camera data still has to be passed from the Java layer down to the C++ layer. OpenMAX IL, its lower-level sibling, is a genuinely good thing, but Google does not expose it for now.

So image capture has to be started from the Java layer, and the captured frames are then passed to the C++ layer through JNI. The view that displays the rendered image also has to be created at the Java layer as a SurfaceView, whose surface handle is passed down to the C++ layer, where EGL initializes the OpenGL rendering environment. Sound capture and playback, by contrast, have nothing to do with Java and can be handled entirely at the native layer.
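
The Java-to-C++ handoff of camera frames looks roughly like the following sketch. The native method name is my own assumption (it is not necessarily what the project uses), and it assumes the Camera preview callback delivers NV21, which is the Android default.

    #include <jni.h>
    #include <cstdint>
    #include <vector>

    // Hypothetical native method registered on the camera view class.
    // Java side:  private native void nativeOnPreviewFrame(byte[] data, int width, int height);
    // Camera.PreviewCallback.onPreviewFrame() forwards each preview frame here.
    extern "C" JNIEXPORT void JNICALL
    Java_com_android_video_camera_EFCameraView_nativeOnPreviewFrame(
            JNIEnv* env, jobject /*thiz*/, jbyteArray data, jint width, jint height) {
        const jsize len = env->GetArrayLength(data);
        jbyte* bytes = env->GetByteArrayElements(data, nullptr);

        // Copy the NV21 frame out so the Java buffer can be released immediately;
        // the copy is then queued to the encoding thread (hypothetical hook below).
        std::vector<uint8_t> frame(bytes, bytes + len);
        // enqueueVideoFrame(std::move(frame), width, height);

        env->ReleaseByteArrayElements(data, bytes, JNI_ABORT);  // no write-back needed
        (void)width; (void)height;
    }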


Choosing the open source frameworks

FFmpeg: file parsing, image scaling, pixel format conversion, and most of the decoders. I chose version 2.7.5, which has many ARM optimizations and decent decoding speed.

x264: the H.264 encoder. Recent versions also have many ARM optimizations; with multi-threaded encoding, a 640x480 frame can take as little as 3 to 4 milliseconds. (A configuration sketch follows after this list.)

mp3lame: MP3 encoder. The test project does not actually use it (it uses the MP4 combination of H.264 + AAC); I compiled it into the encoder list purely out of habit.

FAAC: AAC encoder. It has not been updated in a long time and its encoding speed is a drag, hence the roundabout design below to solve the audio encoding problem.
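
For reference, here is a minimal sketch of the kind of speed-first x264 configuration the article is talking about. The exact settings (ultrafast preset, zerolatency tune, CRF 26, baseline profile) are illustrative choices of mine, not the project's parameters.

    #include <stdint.h>
    extern "C" {
    #include <x264.h>
    }

    // Opens an x264 encoder tuned for speed; returns nullptr on failure.
    x264_t* openFastEncoder(int width, int height, int fps) {
        x264_param_t param;
        // Fastest preset, low-latency tuning (disables lookahead and B-frames).
        if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0)
            return nullptr;

        param.i_csp     = X264_CSP_I420;   // expects planar YUV 4:2:0 input
        param.i_width   = width;
        param.i_height  = height;
        param.i_fps_num = fps;
        param.i_fps_den = 1;
        param.i_threads = 0;               // 0 = let x264 pick the thread count

        param.rc.i_rc_method   = X264_RC_CRF;  // quality-based rate control
        param.rc.f_rf_constant = 26;

        // Baseline profile keeps decoding cheap on older devices.
        x264_param_apply_profile(&param, "baseline");
        return x264_encoder_open(&param);
    }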


Complete solution diagram


The problem of slow audio encoding

Download fairly recent versions of both x264 and FFmpeg and compile them with optimizations such as ASM and NEON enabled, and codec speed becomes acceptable. FAAC's encoding speed, however, is still a bit slow. My workaround is to write temporary files: while recording, video frames are encoded by calling x264 directly rather than going through FFmpeg (which also makes it easier to tune x264's parameters for speed), and the audio data is written to the file as raw data. The size gap between this temporary file and a proper video file is not so severe that disk write speed becomes a bottleneck, and it also solves the precision problem when dragging the progress bar during editing as well as the problem of extracting key frames, because the temporary file format is my own and I control exactly what goes into it.

One thing worth mentioning is the pair of abstract file I/O interfaces, Reader and Writer: both the implementation that reads and writes real MP4 files and the one that reads and writes the temporary format implement these interfaces, so if you later want to record straight to MP4 you only need to instantiate a different object at initialization. The other speed trick is multi-threaded asynchronous writing: the capture thread hands the data to another thread for encoding and writing, so as long as the average encode-and-write speed keeps up with the frame rate, the requirement is met.
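
The Writer side of that abstraction might look something like this minimal sketch; the interface and method names are my own, and the project's actual definitions will differ.

    #include <cstddef>
    #include <cstdint>

    // Abstract sink for media written while recording.
    struct IVideoWriter {
        virtual ~IVideoWriter() = default;
        virtual bool open(const char* path, int width, int height,
                          int fps, int sampleRate, int channels) = 0;
        // H.264 frames already encoded by x264.
        virtual bool writeVideoFrame(const uint8_t* data, size_t size,
                                     int64_t ptsMs, bool keyFrame) = 0;
        // During recording the audio is stored as raw PCM (the FAAC workaround);
        // an MP4 writer would encode it to AAC here instead.
        virtual bool writeAudioSamples(const int16_t* pcm, size_t sampleCount,
                                       int64_t ptsMs) = 0;
        virtual void close() = 0;
    };

    // One of the two interchangeable implementations; an Mp4Writer would implement
    // the same interface but mux straight into an MP4 file.
    struct TempFileWriter : IVideoWriter {
        bool open(const char*, int, int, int, int, int) override { /* open temp file, write header */ return true; }
        bool writeVideoFrame(const uint8_t*, size_t, int64_t, bool) override { /* append framed H.264 */ return true; }
        bool writeAudioSamples(const int16_t*, size_t, int64_t) override { /* append raw PCM */ return true; }
        void close() override { /* finalize the index so seeking stays accurate */ }
    };

Because the recorder only talks to the interface, switching to direct MP4 recording later is just a matter of constructing a different IVideoWriter implementation at initialization.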


Introducing an OpenGL 2D/3D engine

Once the OpenGL rendering environment has been created with EGL at the C++ layer, you can use any OpenGL-based framework written in C++. I introduced cocos2d-x here to add effects to the video, such as frame-sequence animations and particle effects. cocos2d-x has its own render thread and its own OpenGL environment creation, which has to be stripped out; a small amount of code then makes cocos2d-x render into the EGL environment you created yourself. In addition, cocos2d-x's object lifetime management imitates Objective-C's reference counting and autorelease-pool model. In the project source I simplified its recycling mechanism: frankly, I think its reference-counting emulation is fine (similar in principle to COM, and implementable in a unified base class), but the autorelease pool does not need to copy Objective-C completely; there is no need to push and pop pools on a stack, one global pool is enough. (Purely a personal view.)


The main/secondary threading model

As everyone knows, OpenGL's eglMakeCurrent is thread-sensitive; in fact every OpenGL-related action is, which means texture loading, GLSL compiling and linking, context creation, and glDraw* calls must all happen on the same thread. Android has no equivalent of iOS's main operation queue, so I designed what I call a main/secondary threading model: the main thread is the Android UI thread, responsible for drawing the UI and responding to button actions, and everything else is handed to the secondary thread. That is, a user action's handler does not do the work directly; in the style of MFC, it posts a message plus data to the secondary thread. The secondary thread then runs a message loop and schedules multiple tasks on a single thread. The message loop needs no explanation, it is the MFC model. Single-threaded multitask scheduling may be new to some folks: it is the traditional approach of splitting work into many time slices, so the thread processes only one slice of one task at a time, caches that task's state, and continues from there the next time that task's turn comes around.

For example, the task interface is IMission { bool onMissionStart(); bool onMissionStep(); void onMissionStop(); }. The scheduling thread calls onMissionStart first; if it returns false, it calls onMissionStop and the task ends. If it returns true, the thread keeps calling onMissionStep until that returns false, then calls onMissionStop and the task ends. The actual work is encapsulated in an implementation of the task interface, which is then thrown onto the task list, as sketched below.
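
Put together, the secondary thread's loop might look like the following sketch, which is my own reconstruction of the described pattern rather than the project's code: each pass drains the posted messages, then gives every active task one time slice.

    #include <deque>
    #include <functional>
    #include <list>
    #include <memory>
    #include <mutex>

    // One cooperative task: start once, step repeatedly, stop when done.
    struct IMission {
        virtual ~IMission() = default;
        virtual bool onMissionStart() = 0;  // false => go straight to stop
        virtual bool onMissionStep()  = 0;  // false => task finished
        virtual void onMissionStop()  = 0;
    };

    class SecondaryThreadLoop {
    public:
        // Called from the UI thread: post a message (here just a closure) instead of
        // doing the work directly, MFC-style.
        void post(std::function<void()> msg) {
            std::lock_guard<std::mutex> lock(mutex_);
            messages_.push_back(std::move(msg));
        }
        void addMission(std::shared_ptr<IMission> m) {
            post([this, m] { pending_.push_back(m); });
        }

        // Body of the single secondary thread: one message pass, one slice per task.
        void runOnce() {
            std::deque<std::function<void()>> batch;
            { std::lock_guard<std::mutex> lock(mutex_); batch.swap(messages_); }
            for (auto& msg : batch) msg();

            for (auto it = pending_.begin(); it != pending_.end(); ) {
                if ((*it)->onMissionStart()) active_.push_back(*it);
                else (*it)->onMissionStop();
                it = pending_.erase(it);
            }
            for (auto it = active_.begin(); it != active_.end(); ) {
                if ((*it)->onMissionStep()) { ++it; }
                else { (*it)->onMissionStop(); it = active_.erase(it); }
            }
        }

    private:
        std::mutex mutex_;
        std::deque<std::function<void()>> messages_;             // posted from other threads
        std::list<std::shared_ptr<IMission>> pending_, active_;  // scheduled on this thread only
    };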

Think about it: with this design, all operations run on the same thread, including every OpenGL call, and as a side effect you no longer have to worry about the locks scattered everywhere that multi-threaded concurrency demands, with the performance problems and bugs they bring. And don't doubt its performance: even multiple threads end up time-sliced onto the CPU anyway, and Redis is single-threaded yet plenty fast.


Summary

Use OpenSL ES for audio recording and playback.

Use EGL to create the OpenGL environment at the C++ layer.

Modify cocos2d-x so that it renders into your own OpenGL environment.

Call x264 directly instead of going through FFmpeg, configure its parameters for the fastest encoding mode, and be sure to enable x264's multi-threaded encoding.

Download recent versions of both x264 and FFmpeg and compile them with optimizations such as ASM and NEON enabled. (I cross-compiled on Ubuntu.)

If encoding video and audio directly while recording cannot keep up, write temporary files: encode the images, but store the sound as raw PCM.

Besides the Android main thread, use one secondary thread for scheduling, and put only specific small time-consuming tasks on separate threads; the body of the framework has just two threads, one main and one secondary.


Complete Project Source code

The project was developed against API level 15, but in practice it can go as low as API level 9.

Source Address: http://download.csdn.net/detail/yangyk125/9416064

Demo video: http://www.tudou.com/programs/view/PvY9MMugbRw/

Where the generated videos are written: <SD card>/e4fun/video/*.mp4

A few things to note:

1. The first two private fields of the com.android.video.camera.EFCameraView class define the currently selected camera resolution (width and height); the camera in use must support this resolution.

2. In jni/worker/efrecordworker.cpp, the CreateRecordWorker function defines the basic parameters of the recorded video; configure them freely according to the performance of your test device.

3. In jni/worker/efrecordworker.cpp, the on_create_worker function contains a setAnimationInterval call that sets the OpenGL frame rate; this is not the same as the video frame rate, so set it as appropriate.

If you cannot get it running, add QQ 119508078 to ask.

I work in Chengdu; like-minded folks are welcome to get in touch.


Thanks to a reader of this blog who pointed out places that can be optimized.

1. If you use the FFmpeg open source stack to process audio and video, then for AAC you should use fdk-aac rather than FAAC, which has not been updated in a long time.

2. glReadPixels reads data back inefficiently. I am trying to upgrade to GLES 3.0 to see whether there is a faster way to fetch the rendered image; if you know one, please leave a message, thank you!
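
For what it's worth, the approach most often suggested for this (not verified in this project) is an OpenGL ES 3.0 pixel buffer object: glReadPixels into a bound GL_PIXEL_PACK_BUFFER returns without stalling, and the data is mapped a frame later. A rough sketch, assuming an ES 3.0 context and RGBA readback:

    #include <GLES3/gl3.h>
    #include <cstring>

    // Double-buffered PBO readback: issue the read for frame N, map frame N-1.
    static GLuint pbo[2];
    static int frameIndex = 0;

    void initReadbackPbos(int width, int height) {
        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; ++i) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    // Call once per rendered frame; copies the *previous* frame into outRgba
    // (the very first call maps a PBO that has not been written yet).
    void readbackAsync(int width, int height, unsigned char* outRgba) {
        int writeIdx = frameIndex % 2;        // PBO receiving this frame
        int readIdx  = (frameIndex + 1) % 2;  // PBO holding the previous frame
        ++frameIndex;

        // Start the asynchronous transfer into the write PBO (no CPU copy yet).
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Map the other PBO; by now its transfer has usually completed.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
        void* ptr = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                     width * height * 4, GL_MAP_READ_BIT);
        if (ptr) {
            std::memcpy(outRgba, ptr, (size_t)width * height * 4);
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }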


For audio and video processing on Android, if you want faster encoding and decoding: at the Java layer you cannot get around MediaCodec, and at the C++ layer you can dig further down, for example into OpenMAX IL and the like.

"Share" performance "1" for Android video recording editing effect
