iOS audio noise reduction/stitching


1. http://www.leiphone.com/news/201406/record.html

About cell phone recording and noise reduction.

The author of this article is He Shunyi, an engineer at Ketong Xincheng.

Most of us have had this experience: a friend calls from a railway station, a subway, a venue, a KTV, or some other noisy place, and sometimes they are hard to hear while at other times they come through very clearly. Why is that?

We usually assume the signal is unstable and that call quality simply fluctuates. In fact, whether you can hear the other person clearly in such environments depends mainly on the recording and noise reduction capabilities of their phone. This is also an important difference between high-end phones and ordinary ones.

Any difference in a feature ultimately comes down to differences in hardware and software. In this article the author takes some time to share the principles of phone recording and noise reduction, the hardware and algorithms they require, and how different hardware and algorithms change the user experience. I hope it helps a little.

Recording process and hardware

First of all, why emphasize the recording function of a mobile phone?

It's simple: a phone is for making calls. During a call, the speaker's voice must first be recorded before the listener can hear anything. The recording function is therefore fundamental to calling.

Simply put, the recording pipeline involves three stages and two components. The three stages are: sound, analog electrical signal, digital electrical signal. The two components are the microphone and the ADC (analog-to-digital converter). The microphone turns sound into an analog electrical signal, and the ADC turns the analog signal into a digital one. The quality of the microphone and the ADC therefore directly determines the quality of the recording.

Everyone is familiar with microphones, so we will not repeat the basics here; the focus is the ADC.

How do you measure the quality of an ADC? In a nutshell, look at two parameters: the sampling rate and the quantization bit depth. The sampling rate is how many times per second the analog signal is measured, so it represents speed; the bit depth is the precision of each measurement. For both, larger is better.
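As a concrete illustration (my addition, not from the original article), these two parameters, together with the channel count, fix the raw PCM data rate:

    /* Raw PCM data rate = sampling rate x bit depth x channels.
       The values below (44.1 kHz, 16-bit, mono) are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        int sample_rate = 44100;  /* samples per second */
        int bit_depth   = 16;     /* bits per sample    */
        int channels    = 1;      /* mono               */

        long bits_per_second = (long)sample_rate * bit_depth * channels;
        printf("%.1f kbit/s = %.1f KB/s\n",
               bits_per_second / 1000.0, bits_per_second / 8.0 / 1024.0);
        /* prints: 705.6 kbit/s = 86.1 KB/s */
        return 0;
    }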

So how can you find out the sampling rate and bit depth of the ADC in your own phone? There is a way:

First download a free app called "recforg". Run it after installation and open its "Settings" menu. There you will find two submenus, "Sample rate" and "Audio format", which correspond to the ADC's sampling rate and quantization bit depth respectively.

On the author's phone, tapping "Sample rate" opens a list of rates. Three of them are grayed out and cannot be selected: 12 kHz, 24 kHz, and 48 kHz; all the other settings can be chosen. This means the ADC in the author's phone supports five sampling rates, topping out at 44 kHz. The author also tested a friend's Xiaomi 2 and found its highest sampling rate to be 48 kHz, which means the ADC in the Xiaomi 2 is one grade above the one in the author's phone.

In the "Settings" menu screen, click "Audio Format" submenu to enter, you will see

Indicates that the "quantization number" of the ADC of the author's mobile phone is 16 bits.

Simple, right? One note: the author could only find the "recforg" app on Android; there is no iOS version. If you want to inspect the ADC parameters of an iPhone, look for a similar recording app and take your chances.
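One partial alternative on iOS (my addition, not mentioned in the original article) is to query the active hardware sample rate through AVAudioSession. A minimal sketch:

    #import <AVFoundation/AVFoundation.h>

    // Minimal sketch: log the current hardware sample rate on iOS.
    // Note this reports the session's active rate, not every rate the ADC supports.
    void LogHardwareSampleRate(void)
    {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;
        [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
        [session setActive:YES error:&error];
        NSLog(@"Hardware sample rate: %.0f Hz", session.sampleRate);
    }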

Noise reduction principle and algorithm

In the "Recording process and Hardware" section, the hardware required for recording and the impact of hardware performance on the recording quality are discussed.

In a quiet environment, software has little impact on a call. But a phone is a mobile communication device: the calling environment is unpredictable, and it may well be noisy. In that case, the noise reduction algorithm becomes critical to call quality.

How does noise reduction work? Simply put, an algorithm separates the voice from the noise in the captured audio, reinforces the voice, and suppresses the noise, thereby improving call quality. The principle is simple, but the concrete implementations are very complex, so phone makers generally do not write their own noise reduction algorithms; instead they adopt solutions from specialized companies.
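As a toy illustration of "keep the voice, suppress the rest" (my addition; real handset noise suppression works in the frequency domain and typically uses multiple microphones), a frame-based noise gate might look like this:

    #include <stdint.h>
    #include <stddef.h>
    #include <math.h>

    /* Toy noise gate: attenuate frames whose RMS energy falls below a threshold.
       Purely illustrative, not a production noise reduction algorithm. */
    void noise_gate(int16_t *samples, size_t count, size_t frame_len, double rms_threshold)
    {
        for (size_t i = 0; i < count; i += frame_len) {
            size_t n = (i + frame_len <= count) ? frame_len : count - i;
            double sum = 0.0;
            for (size_t j = 0; j < n; j++)
                sum += (double)samples[i + j] * (double)samples[i + j];
            double rms = sqrt(sum / (double)n);
            if (rms < rms_threshold) {
                /* Frame looks like background noise: attenuate by 20 dB. */
                for (size_t j = 0; j < n; j++)
                    samples[i + j] = (int16_t)(samples[i + j] * 0.1);
            }
        }
    }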

Speaking of noise reduction, we have to mention Audience. Audience specializes in mobile audio processing and is a world leader in the field; put plainly, it makes audio noise reduction technology. Apple, Samsung, HTC, Google, LG, Huawei, Sharp, Meizu, and Xiaomi are all Audience customers; a list of the phone models using Audience chips would be very long.

So how do the differences between noise reduction algorithms show up in the user experience?

We can run two experiments in a noisy environment: (a) using the speakerphone, how far can the speaker be from the phone while the other party still hears them clearly? (b) at what maximum distance can the phone still recognize speech correctly? The author has tested several high-end and ordinary phones this way, and the results differ considerably. Try it yourself if you are interested.

The above describes noise reduction in terms of experience. Beyond subjective impressions, is there an objective, intuitive way to show it? There is.

[Figure: frequency response curves of the playback channels of the iPhone 4s and the Xiaomi 2]

The curves show clearly that the iPhone 4s applies noise reduction at both the low end (<80 Hz) and the high end (>1.4 kHz), retaining only the vocal band, while the Xiaomi 2 processes only the high band. In details like this, the Xiaomi still has some room for improvement compared with the iPhone 4s.
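To make "retaining only the vocal band" concrete (my addition; handset DSPs use far more sophisticated tuned filters), here is a sketch of a band-pass built from a first-order high-pass at about 80 Hz feeding a first-order low-pass at about 1.4 kHz:

    #include <math.h>

    /* Toy vocal band-pass: one-pole high-pass (~80 Hz) into a one-pole
       low-pass (~1.4 kHz). Filters the buffer x of n samples in place. */
    void vocal_bandpass(float *x, int n, float sample_rate)
    {
        if (n <= 0) return;
        float dt    = 1.0f / sample_rate;
        float rc_hp = 1.0f / (2.0f * (float)M_PI * 80.0f);
        float rc_lp = 1.0f / (2.0f * (float)M_PI * 1400.0f);
        float a_hp  = rc_hp / (rc_hp + dt);
        float a_lp  = dt / (rc_lp + dt);

        float prev_in = x[0], hp = 0.0f, lp = 0.0f;
        for (int i = 0; i < n; i++) {
            hp = a_hp * (hp + x[i] - prev_in);  /* suppress rumble below ~80 Hz  */
            prev_in = x[i];
            lp += a_lp * (hp - lp);             /* suppress hiss above ~1.4 kHz */
            x[i] = lp;
        }
    }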

2. http://blog.163.com/l1_jun/blog/static/143863882013517105217611/

A tentative discussion of iOS and Android IM voice chat development: local audio processing (part 2). Original link: http://cvito.net/index.php/archives/869

Earlier we covered how to record WAV audio on iOS. The problem is that Android does not support recording in WAV. Some readers will say: are you daft, couldn't you just record in a format Android supports? I say with full responsibility: Apple and Google are at odds, and it is us hard-pressed developers who suffer. The formats Android produces, Apple largely does not support, and in turn the formats Apple produces do not sit well with Android.

Of course, where there is a policy there is a countermeasure. Audio interoperability between iOS and Android cannot stump our great programmers, and there are many ways to solve the problem. They boil down to roughly three approaches; let me walk through them.

The first approach puts the burden on the server: both Android and iOS upload their audio to the server, which converts it and forwards it on. This approach is free of platform limitations, but with a large volume of traffic the server load is heavy, so the demands on the server side are high. It is rumored that some voice IM products interact this way.

The second approach uses the same third-party audio library on both iOS and Android for encoding and decoding, followed by network transmission. The advantage is that there are many audio libraries to choose from, covering a variety of formats for different needs. However, both the iOS and the Android side must integrate the codec, and if the project was not designed for this requirement from the start, the changes on both sides are substantial. It is also rumored that mature voice IM products are built on the open-source Speex library: the format is compact and supports noise reduction, and it is currently a well-regarded approach. A sketch of what encoding with Speex looks like follows below.
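For reference (my addition; the article only names Speex, and this is not code from any particular product), encoding one 20 ms narrowband frame with the Speex C API looks roughly like this:

    #include <speex/speex.h>

    /* Encode one 160-sample (20 ms at 8 kHz) narrowband frame with Speex.
       Returns the number of encoded bytes written to out. A real encoder
       would keep the state and bits alive across frames instead of
       creating and destroying them per call. */
    int encode_speex_frame(short pcm[160], char *out, int out_capacity)
    {
        SpeexBits bits;
        void *state = speex_encoder_init(&speex_nb_mode);
        int quality = 8;  /* roughly 15 kbit/s narrowband */
        speex_encoder_ctl(state, SPEEX_SET_QUALITY, &quality);

        speex_bits_init(&bits);
        speex_encode_int(state, pcm, &bits);
        int written = speex_bits_write(&bits, out, out_capacity);

        speex_bits_destroy(&bits);
        speex_encoder_destroy(state);
        return written;
    }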

I took the third approach. AMR is the default audio format on Android and a very convenient one there; iOS used to support it, but dropped AMR support after 4.3 (the reasons should not need spelling out...). At first glance, then, AMR files simply cannot be processed on iOS. With that notion unconsciously planted, I set about scouring the web for examples and demos, determined to solve the problem on the iOS side as far as possible. Hard work paid off: in the end I managed to convert audio on iOS into AMR files that Android can use. Next, let's talk about how to convert between WAV and AMR files on the iOS side.

First, I recommend downloading the demo from the link below. It is reproduced from the Chinese open source community (OSChina), and I offer my sincere respect to its publisher.

http://www.oschina.net/code/snippet_562429_12400

After downloading the demo and opening the project, drag its four source files and two library files into your own project, then reference AudioToolbox.framework, CoreAudio.framework, and AVFoundation.framework to complete the library import.

Opening the imported header file, you will find a large number of structs. Supposedly struct and union are forbidden in a project with ARC enabled, yet this project builds with ARC on. There is a knowledge point here. I had seen the question of how to use structs under ARC raised on the web, but only learned the answer when my own project started mixing in C++: either set "Compile Sources As" to Objective-C++, or rename the extension of any implementation file that includes the struct/union header to .mm, and you can then use struct and union in the project. Note that the code you write is then no longer pure Objective-C but Objective-C++; mixed-language projects are a topic we will return to later, and interested readers can research it on their own. Back to the point: if you hit a compilation error after importing the files, click Build Settings under the project's target, find the compile setting in question, and modify it accordingly.

Note that if you are starting from a new, empty project, "Compile Sources As" should be left at "According to File Type" rather than set to Objective-C++. I changed it only because my project contains other SDKs that require it; moreover, setting it to Objective-C++ conflicts with the network transmission SDK we will discuss later. So it is best to keep "According to File Type".

Once the files are imported, the conversion functions can be called directly. Unlike a typical SDK, this library is not packaged as classes but as a set of plain functions, so there is no object to construct. The conversion functions are simple to use, but I am happy to save you a little effort, and having helped this far I might as well see it through: below I post each function's name, purpose, parameter description, and a usage example for reference.

Oh yes, don't forget: the first step, as always, is to import the header file.

#import "AmrFileCodec.h"

Let's start with the first function, EncodeWAVEFileToAMRFile. As the name suggests, it converts a WAV file to an AMR file. The parameters are described next, followed by an example call.

The parameter list is not complicated. The four parameters are: 1. the WAV file path; 2. the AMR file path; 3. the number of audio channels, the same channel count mentioned for recorded audio in the previous article; 4. the number of encoding bits (bits per sample), also introduced in the previous article.
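A minimal sketch of a call, assuming the demo's function name and C-string paths (the original post's code screenshot is not reproduced here, so treat the exact signature as an assumption):

    // Hypothetical call; parameter order follows the description above:
    // WAV source path, AMR destination path, channels, bits per sample.
    NSString *wavPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"record.wav"];
    NSString *amrPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"record.amr"];

    EncodeWAVEFileToAMRFile([wavPath fileSystemRepresentation],
                            [amrPath fileSystemRepresentation],
                            1,    // channels (mono)
                            16);  // bits per sample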

The second function, DecodeAMRFileToWAVEFile, is the inverse of the first: it converts AMR to WAV. Here is a code example.

These parameters are even simpler: the first is the path of the source AMR file, the second the path of the destination WAV file; with those two parameters the call is complete. Note that the paths used here are neither the NSString nor the NSURL objects we used with AVFoundation earlier, but const char * C strings, which is easy enough to deal with. The conversion code in the sample is not the most concise, having been thrown together in haste for demonstration; discerning readers can filter accordingly, though impatience is a programming taboo.
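Again a minimal sketch, under the same assumptions as the encode example:

    // Hypothetical call mirroring the encode example above.
    DecodeAMRFileToWAVEFile([amrPath fileSystemRepresentation],
                            [wavPath fileSystemRepresentation]);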

Compared with the setup, using the library is almost laughably simple and requires little effort or mystery, but testing still takes some work. In real tests, WAV recorded on iOS and converted to AMR plays back normally on the Android side, and AMR recorded on Android, brought over to iOS, and converted to WAV plays back without any problem. The method does have a drawback: conversion is slow, and a longer audio file can cause a brief stall while it converts. For implementing voice IM chat, however, it is entirely adequate.

That concludes local audio processing. So far, though, we have only achieved correct audio conversion and playback on the local machine; to complete voice IM chat, the key remaining link is interaction with the server. The details will be introduced in the next article. Watch for the next installment of this iOS and Android IM voice chat development series: asynchronous socket transmission.
