Detecting when the user blows into the mic

If you had told me a few years ago that people would shake their phones or blow into the mic to interact with an app, I would have laughed. But now it's a fact.
Checking for the shake gesture is straightforward; it is all covered by the motion events introduced in 3.0. Detecting a blow into the mic is a little harder. This tutorial builds a simple single-view app that logs a message to the console whenever you blow into the mic.
Source code/GitHub
The source code for this tutorial is available on GitHub. You can clone the repository or download it directly as a zip file.
Overview
The detection breaks down into two parts: (1) getting input from the microphone and (2) "listening" for the sound of a blow.
We will use the AVAudioRecorder class, new in 3.0, to capture the microphone input. AVAudioRecorder lets us stay in Objective-C instead of dropping down to C as the older approaches require.
The noise/sound of blowing into the mic is made up of low-frequency components. We will run the microphone levels through a low-pass filter to strip out the high frequencies; when the filtered level spikes, we know someone is blowing into the mic.
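To make the filtering idea concrete before we get to the real code, here is a minimal sketch of a single-pole low-pass filter; the lowPass helper name is just for illustration and does not appear in the project:
// A minimal sketch of the single-pole low-pass filter used later in the tutorial.
// alpha controls the smoothing: small values smooth heavily and respond slowly,
// larger values respond quickly but smooth less.
static double lowPass(double newSample, double previousResult) {
	const double alpha = 0.05;
	return alpha * newSample + (1.0 - alpha) * previousResult;
}
Each call blends the newest microphone level into the running result, so brief high-frequency spikes barely move it, while a sustained low-frequency blow pushes it up quickly.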
Create a project
Start Xcode and create a view-based iPhone application called MicBlow:
Create a new project using File > New Project... from the Xcode menu
Select View-based Application under iPhone OS > Application, then press Choose...
Name the project MicBlow and press Save
Add the AVFoundation framework
To use the AVAudioRecorder class, we need to add the AVFoundation framework to the project:
Expand Targets in the Groups & Files panel
Control-click or right-click the MicBlow target
Select Add > Existing Frameworks...
Click the + button at the bottom left of Linked Libraries
Select AVFoundation.framework and press Add
AVFoundation.framework now appears under Linked Libraries. Close the window
Next, we import the AVFoundation header in the view controller's interface and add an AVAudioRecorder instance variable:
Expand MicBlow under the Groups & Files panel of the project
Expand the Classes folder
Select MicBlowViewController.h for editing
Update the file; the new lines are the two additional imports and the recorder instance variable:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController
{
	AVAudioRecorder *recorder;
}
@end
Importing the CoreAudioTypes header is really for the next step: it defines constants we will need when configuring the AVAudioRecorder.
Get microphone input
We set everything up in viewDidLoad and start "listening" to the microphone:
Uncomment the boilerplate viewDidLoad method
Update it as follows; everything after the call to [super viewDidLoad] is new:
- (void)viewDidLoad
{
	[super viewDidLoad];

	NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];

	NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
		[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
		[NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
		[NSNumber numberWithInt:1], AVNumberOfChannelsKey,
		[NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
		nil];

	NSError *error;

	recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

	if (recorder) {
		[recorder prepareToRecord];
		recorder.meteringEnabled = YES;
		[recorder record];
	} else
		NSLog(@"%@", [error description]);
}
As its name implies, AVAudioRecorder's primary job is recording audio. Its secondary function is providing audio level metering, which is all we need here, so we point the output at /dev/null. I couldn't find any documentation to back this up, but the consensus is that, as on any Unix, /dev/null discards the data while the audio meters keep working.
Note: if you adapt this code for your own use, remember to send prepareToRecord (or record) to the recorder before setting the meteringEnabled property, or audio metering will not be enabled.
Remember to release the recorder in dealloc. The new line is the [recorder release]:
- (void)dealloc
{
	[recorder release];
	[super dealloc];
}
Audio sampling
We will use a timer to check the audio levels 30 times per second. Add an NSTimer instance variable and the declaration of its callback method to MicBlowViewController.h; the new lines are the levelTimer variable and the levelTimerCallback: declaration:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController {
	AVAudioRecorder *recorder;
	NSTimer *levelTimer;
}

- (void)levelTimerCallback:(NSTimer *)timer;

@end
Update viewDidLoad in the .m file to start the timer; the new lines schedule the NSTimer once recording begins:
- (void)viewDidLoad
{
	[super viewDidLoad];

	NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];

	NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
		[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
		[NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
		[NSNumber numberWithInt:1], AVNumberOfChannelsKey,
		[NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
		nil];

	NSError *error;

	recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

	if (recorder) {
		[recorder prepareToRecord];
		recorder.meteringEnabled = YES;
		[recorder record];
		levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.03 target:self
			selector:@selector(levelTimerCallback:) userInfo:nil repeats:YES];
	} else
		NSLog(@"%@", [error description]);
}
For now, we simply sample the audio levels directly, without any filtering. Add the levelTimerCallback: method to the .m file:
- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];

	NSLog(@"Average input: %f Peak input: %f",
		[recorder averagePowerForChannel:0], [recorder peakPowerForChannel:0]);
}
Sending the updateMeters message refreshes the average and peak power readings. The readings are on a logarithmic (decibel) scale: -160 means complete silence and 0 is the maximum input level.
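For reference, here is a small sketch of how a decibel reading on that scale can be converted to a linear 0-1 value; it is the same pow(10, 0.05 * dB) conversion the filtering code uses later in the tutorial, and the function name is just for illustration:
#include <math.h>

// Convert an AVAudioRecorder meter reading (in dB, -160 to 0) into a linear value.
// -160 dB comes out as roughly 0.00000001 (effectively silence); 0 dB comes out as 1.0.
static double linearLevelFromDecibels(double decibels) {
	return pow(10.0, 0.05 * decibels);
}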
Don't forget to release the timer in dealloc. The new line is the [levelTimer release]:
- (void)dealloc
{
	[levelTimer release];
	[recorder release];
	[super dealloc];
}
Listening for the blow
As mentioned in the overview, we use a low-pass filter to strip out the effect of high-frequency sound on the levels. The algorithm keeps a running result that folds in each new sample, so we need an instance variable to hold it. Update the .h file; the new line is the lowPassResults variable:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController {
	AVAudioRecorder *recorder;
	NSTimer *levelTimer;
	double lowPassResults;
}
Replace the levelTimerCallback: method with an implementation of the filter:
- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];

	const double alpha = 0.05;
	double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
	lowPassResults = alpha * peakPowerForChannel + (1.0 - alpha) * lowPassResults;

	NSLog(@"Average input: %f Peak input: %f Low pass results: %f",
		[recorder averagePowerForChannel:0],
		[recorder peakPowerForChannel:0], lowPassResults);
}
The lowPassResults variable is recomputed every time the timer fires. As a convenience, it is converted to a 0-1 scale, where 0 means complete silence and 1 means maximum loudness.
When the low-pass filtered value crosses a threshold, we decide that someone is blowing into the mic. Picking the threshold takes some tuning: too low and it triggers too easily; too high and the user has to blow hard and for a long time. In this app I use 0.95. The only change from the previous version is that the NSLog is now wrapped in the threshold check:
- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];

	const double alpha = 0.05;
	double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
	lowPassResults = alpha * peakPowerForChannel + (1.0 - alpha) * lowPassResults;

	if (lowPassResults > 0.95)
		NSLog(@"Mic blow detected");
}
That's it! You can now detect when someone blows into the mic.
Thanks and caveats
This approach works well in most, but not all, situations. I wrote this article on a plane, and the engine noise regularly triggered the algorithm. Likewise, a room with enough low-frequency background noise will set it off.
The algorithm was adapted from this Stack Overflow post. That post uses the SCListener library to read the audio levels; SCListener predates AVAudioRecorder and was written to hide the C-level details of getting at the levels. AVAudioRecorder is definitely easier to work with.
Finally, this does work in the simulator, but you will need to find your Mac's microphone. To my surprise, the mic on my first-generation MacBook is a tiny hole to the left of the camera.
