Overview of iOS and Android audio development


Recently my project needed a voice-changing feature, so I learned as I went. The ideas behind the audio processing turned out not to be difficult, but it is still a bit more cumbersome than expected.

While it is still fresh in my mind, I am quickly summing up the approach and recording it here.

This post has three main parts:

1. How the voice changer works
2. Voice changer on Android
3. Voice changer on iOS

1. How the voice changer works

Writing a voice-changing function or library from scratch is not easy, so I adopted the commonly used library SoundTouch.

The library can change the speed, tempo (beats) and pitch of a sound. Pitch is the most important for this purpose: by raising or lowering the pitch of a voice you can turn a male voice into a female one or vice versa (think of Talking Tom Cat).

The idea is to build the whole library as the native layer on each platform; on the application side you then only need the header file SoundTouch.h to use it.

The SoundTouch class provides a number of methods, the most important of which are setPitch and setRate for adjusting the sound parameters; set the values according to your own needs.

However, several processing settings need to be configured before use:

    mSoundTouchInstance->setSetting(SETTING_USE_QUICKSEEK, 0);
    mSoundTouchInstance->setSetting(SETTING_USE_AA_FILTER, !(0));
    mSoundTouchInstance->setSetting(SETTING_AA_FILTER_LENGTH, ...);
    mSoundTouchInstance->setSetting(SETTING_SEQUENCE_MS, ...);
    mSoundTouchInstance->setSetting(SETTING_SEEKWINDOW_MS, ...);
    mSoundTouchInstance->setSetting(SETTING_OVERLAP_MS, 8);

Then set the parameters you need:

    mSoundTouchInstance->setChannels(2);
    mSoundTouchInstance->setSampleRate(8000);
    mSoundTouchInstance->setPitch(2);

A few audio-processing parameters are worth explaining here.

Number of channels (channels): mono or stereo, corresponding to 1 and 2 respectively.

Sample rate (sampleRate): common values range from 8000 to 44100 Hz. On Android, 44100 seems to be supported by all devices, so setting it to 44100 is the safer choice.

Bits per channel (bitsPerChannel): typically set to 16.

Channels per frame (channelsPerFrame): for the PCM data here, this is 1.

There are a few more parameters that may differ between Android and iOS, but the ones above are used everywhere and are the most important to master.
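To tie these parameters together, here is a minimal sketch (my own illustration, not code from the original project) of pushing 16-bit PCM through SoundTouch in C++: set the channels, sample rate and pitch, feed raw samples in with putSamples, and read the processed samples back with receiveSamples. It assumes SoundTouch was built with integer (short) samples, and the helper name pitchShift is mine.

    #include "SoundTouch.h"
    #include <vector>

    using namespace soundtouch;

    // Shift the pitch of a block of interleaved 16-bit PCM by the given
    // number of semitones (positive = higher, negative = lower).
    std::vector<SAMPLETYPE> pitchShift(const std::vector<SAMPLETYPE>& pcm,
                                       int channels, int sampleRate,
                                       float semiTones)
    {
        SoundTouch st;
        st.setChannels(channels);        // 1 = mono, 2 = stereo
        st.setSampleRate(sampleRate);    // e.g. 44100
        st.setPitchSemiTones(semiTones);

        // putSamples expects the number of frames (one frame = one sample per channel).
        st.putSamples(pcm.data(), (unsigned int)(pcm.size() / channels));
        st.flush();                      // push out whatever is still buffered internally

        // Drain the processed samples.
        std::vector<SAMPLETYPE> out;
        SAMPLETYPE buf[4096];
        unsigned int received;
        do {
            received = st.receiveSamples(buf, 4096 / channels);
            out.insert(out.end(), buf, buf + received * channels);
        } while (received != 0);
        return out;
    }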

2. Voice changer on Android

Because the project requires the recording to be played back in real time, the audio stream (in PCM format) has to be read and played directly, using the AudioRecord and AudioTrack APIs.

For the specifics, the official documentation is quite detailed. The general idea is to initialize first:

    // Initialize
    trBufSize = AudioTrack.getMinBufferSize(RECORDER_SAMPLERATE,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, RECORDER_SAMPLERATE,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            trBufSize, AudioTrack.MODE_STREAM);

    reBufSize = AudioRecord.getMinBufferSize(RECORDER_SAMPLERATE,
            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, RECORDER_SAMPLERATE,
            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, reBufSize);

Because different devices support different parameters, it is necessary to write a loop that tries all the possible parameter combinations until one works.

Recording and playback can then each run in its own thread. Normally the recorded data is saved to a file and played back afterwards, which covers ordinary recording needs. The drawback is that for long recordings the file becomes very large, and for real-time playback over the network it clearly will not work. The solution is to push the recorded data into a buffer and have playback take its data directly from that buffer. The buffer can be treated as a circular queue, or in Java it can simply be implemented with a LinkedList.
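As a language-neutral illustration of that producer/consumer buffer (my own sketch, with names of my own choosing, not the article's code), a mutex-protected queue of PCM blocks in C++ could look like this:

    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <vector>

    // A minimal thread-safe FIFO of PCM blocks shared between the recording
    // thread (producer) and the playback thread (consumer).
    class PcmQueue {
    public:
        void push(std::vector<int16_t> block) {
            std::lock_guard<std::mutex> lock(m_);
            q_.push_back(std::move(block));
        }
        // Returns false when no data is available yet.
        bool pop(std::vector<int16_t>& block) {
            std::lock_guard<std::mutex> lock(m_);
            if (q_.empty()) return false;
            block = std::move(q_.front());
            q_.pop_front();
            return true;
        }
        size_t size() {
            std::lock_guard<std::mutex> lock(m_);
            return q_.size();
        }
    private:
        std::deque<std::vector<int16_t>> q_;
        std::mutex m_;
    };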

Next comes the voice-changer part. On Android, a C++ library can only be used through JNI, so a few native functions have to be written.
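The Java code below calls native methods named putSamples, setPitchSemiTones and receiveSamples. As a rough sketch of what the native side of such a wrapper might look like (the package and class names here, and the single global SoundTouch instance, are my own assumptions, not the article's code), one of the JNI functions could be written like this:

    #include <jni.h>
    #include "SoundTouch.h"

    // One shared SoundTouch instance for the illustration; a real app
    // would manage its lifetime and thread-safety more carefully.
    static soundtouch::SoundTouch gSoundTouch;

    // Java side (assumed): package com.example, class SoundTouchJni,
    // declared as: public native void putSamples(byte[] pcm, int offset, int length);
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_SoundTouchJni_putSamples(JNIEnv* env, jobject /*thiz*/,
                                              jbyteArray pcm, jint offset, jint length)
    {
        jbyte* bytes = env->GetByteArrayElements(pcm, nullptr);
        // Interpret the bytes as interleaved 16-bit PCM (2 bytes per sample),
        // assuming mono audio so one sample equals one frame.
        const soundtouch::SAMPLETYPE* samples =
            reinterpret_cast<const soundtouch::SAMPLETYPE*>(bytes + offset);
        gSoundTouch.putSamples(samples, (unsigned int)(length / 2));
        env->ReleaseByteArrayElements(pcm, bytes, JNI_ABORT);   // read-only, no copy back
    }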

    while (isInstancePlaying) {
        if (l < 21) {
            byte[] mByte = new byte[64];
            mAudioRecord.read(mByte, 0, 64);
            SoundTouch.getSoundTouch().putSamples(mByte, 0, INPUT_LENGTH);
            SoundTouch.getSoundTouch().setPitchSemiTones(pitchTone);
            SoundTouch.getSoundTouch().receiveSamples(mByte, INPUT_LENGTH);
            byteArray.add(mByte);
            l = byteArray.size();
        } else {
            mAudioTrack.write(byteArray.getFirst(), 0, 64);
            byteArray.removeFirst();
            l = byteArray.size();
        }
    }

There are three functions in the code: putSamples, setPitchSemiTones and receiveSamples. All three are native methods, each implemented on the native side by the corresponding function of the SoundTouch class. They are fairly simple, and together they are enough to implement the voice changer.

The variable l is the length of the LinkedList (byteArray in the code). While it is below the threshold (21 here), newly recorded and processed blocks are appended to the end of byteArray; otherwise AudioTrack reads the first element of the list, plays it, and then removes it.

Remember to release mAudioTrack and mAudioRecord when playback ends, using their stop() and release() methods.

3. Voice changer on iOS

I had not worked with iOS before, so I ran into a lot of problems, but fortunately they were all solved in the end.

Audio processing on iOS is more troublesome than on Android. The core API is AudioQueue, and you must understand how it works before using it. Unlike Android, both playback and recording on iOS use this one API: it covers what AudioRecord and AudioTrack do on Android, but its internal flow differs between playback and recording.

Core idea:

AudioQueue maintains its own internal queue. The user first creates several buffers (roughly 3 to 6) to hold the audio data. Each time the queue finishes playing or filling a buffer, it hands that buffer to a user-defined callback function so the buffer can be reused. Inside that callback you can add your own logic, such as changing the voice or writing to a file. The official documentation gives a detailed diagram of this flow, which is worth studying carefully.

First, the flow chart for recording: [figure from the official documentation]

Then, the flow chart for playback: [figure from the official documentation]
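As a concrete illustration of the recording side (my own sketch, not code from the original article; the names StartRecording and InputCallback and the 4096-byte buffer size are my choices), setting up an input AudioQueue follows exactly the flow described above: describe the PCM format, create the queue with a callback, allocate and enqueue a few buffers, then start it.

    #include <AudioToolbox/AudioToolbox.h>

    static const int kNumberBuffers = 3;     // roughly 3-6 buffers, as described above

    // Called by the system each time a buffer has been filled with recorded audio.
    static void InputCallback(void* userData, AudioQueueRef queue,
                              AudioQueueBufferRef buffer,
                              const AudioTimeStamp* startTime,
                              UInt32 numPackets,
                              const AudioStreamPacketDescription* packetDescs)
    {
        // buffer->mAudioData holds numPackets frames of 16-bit PCM here;
        // process or store them, then hand the buffer back so it can be reused.
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }

    void StartRecording(void)
    {
        AudioStreamBasicDescription format = {0};
        format.mSampleRate       = 44100;
        format.mFormatID         = kAudioFormatLinearPCM;
        format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                                   kLinearPCMFormatFlagIsPacked;
        format.mChannelsPerFrame = 1;        // mono PCM, as discussed in part 1
        format.mBitsPerChannel   = 16;
        format.mBytesPerFrame    = 2;        // 16 bits * 1 channel
        format.mFramesPerPacket  = 1;
        format.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        AudioQueueNewInput(&format, InputCallback, NULL, NULL, NULL, 0, &queue);

        for (int i = 0; i < kNumberBuffers; ++i) {
            AudioQueueBufferRef buffer;
            AudioQueueAllocateBuffer(queue, 4096, &buffer);
            AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
        }
        AudioQueueStart(queue, NULL);
    }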

How is the voice changer implemented?

The iOS voice changer does not need anything like Android's JNI, because Objective-C can be mixed directly with C++, which makes this part much simpler. The process is as follows:

First instantiate a SoundTouch object in your program and configure it at initialization time (setSetting and the other setters). Then, in the callback function described above, process the recorded data stream and either save it to a file or play it directly. That is the whole idea, but the parameters of the functions involved are fairly fiddly, and if you have not understood the principle above, this part is hard to get through.
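As a rough sketch of that idea (again my own illustration; the names and buffer sizes are assumptions), the input callback from the previous example could push each recorded buffer through a SoundTouch instance before the data is written to a file or handed to the playback side:

    #include <AudioToolbox/AudioToolbox.h>
    #include "SoundTouch.h"

    // Configured at startup with setSampleRate / setChannels / setPitchSemiTones.
    static soundtouch::SoundTouch gSoundTouch;

    static void InputCallback(void* userData, AudioQueueRef queue,
                              AudioQueueBufferRef buffer,
                              const AudioTimeStamp* startTime,
                              UInt32 numPackets,
                              const AudioStreamPacketDescription* packetDescs)
    {
        // For mono 16-bit linear PCM, one packet is one frame is one sample.
        soundtouch::SAMPLETYPE* samples =
            (soundtouch::SAMPLETYPE*)buffer->mAudioData;
        gSoundTouch.putSamples(samples, numPackets);

        // Pull back whatever SoundTouch has ready: write it to a file here,
        // or push it into the queue the playback side reads from.
        soundtouch::SAMPLETYPE out[4096];
        unsigned int n;
        while ((n = gSoundTouch.receiveSamples(out, 4096)) > 0) {
            // ... consume n processed samples ...
        }

        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);   // recycle the buffer
    }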

What about real-time playback?

Following the same idea as on Android, you can write a circular queue to cache the audio data, playing from it while recording into it. The difference is that on iOS these operations have to be placed in the corresponding callback functions. There is also a simpler way: play the recorded PCM data directly inside the recording callback. Since the data arrives block by block and the callback fires every time a buffer is filled, each block can be played right there in the callback.

The above is a brief introduction to recording and real-time playback on the two platforms. There is still quite a lot more to it, and it is worth studying in depth.

  
