The difference between automatic recording and ordinary recording is that you don't have to press and hold a record button and then release it, the way you do in WeChat. Instead, the app decides where the recording starts and stops based on the volume of the voice, and plays the clip back as soon as the recording ends. The effect is much like Talking Tom Cat.
In the initialization phase of automatic recording, two recorder objects need to be created: one acts as a listener, and the other does the actual recording when it is needed. The specific process is as follows.
Preparatory work
This project is written in Swift, and the member variables are declared as follows:
// Recording device
var recoder: AVAudioRecorder!
// Listener
var monitor: AVAudioRecorder!
// Player
var player: AVAudioPlayer!
// Timer
var timer: NSTimer!
// URL of the recording device
var recordUrl: NSURL!
// URL of the listener
var monitorUrl: NSURL!
Of course, these properties can't just be typed in directly. You first need to add a bridging header file and put #import <AVFoundation/AVFoundation.h> in it. If the import gives you trouble, see this article: How to mix Objective-C and Swift development.
The recording device, the listener, and the timer should be initialized together when the program is started.
Before that, you need to set the quality of the saved audio. Several keys from the framework are used here: AVSampleRateKey, AVFormatIDKey, AVNumberOfChannelsKey and AVEncoderAudioQualityKey, whose values are typically Double or Int. Going through them one by one isn't necessary; roughly, they control the sample rate of the saved sound (think lossless versus ordinary streamed music), the encoding format, the number of channels, and the encoder quality. If you are interested, dig into the header files. I set every parameter to its highest quality, and the file size after recording is still acceptable. (WeChat voice messages presumably use middle-to-low quality, since saving data and being fast matter more there.)
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
var recoderSetting: NSDictionary = NSDictionary(objectsAndKeys: 14400.0, AVSampleRateKey, kAudioFormatAppleIMA4, AVFormatIDKey, 2, AVNumberOfChannelsKey, 0x7f, AVEncoderAudioQualityKey)
One of these values really ought to be the AVAudioQuality.Max constant, but Swift wouldn't accept the enum here, so just look up the constant's raw value and write it in hexadecimal: 0x7f corresponds to AVAudioQuality.Max. The general idea is to put all the key-value pairs into a dictionary first; that dictionary is then used later as a parameter when instantiating the recorder.
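If you would rather not hard-code the hex value, here is a minimal sketch of the same settings built as a Swift dictionary instead. The 14400.0 sample rate is copied from the snippet above; AVAudioQuality.Max.rawValue equals 0x7f, so the result is the same set of key-value pairs.
// Sketch: the same recording settings as a Swift dictionary (values copied from the text above)
let recoderSetting: [NSObject: AnyObject] = [
    AVSampleRateKey: 14400.0,                              // sample rate in Hz
    AVFormatIDKey: Int(kAudioFormatAppleIMA4),             // encoding format
    AVNumberOfChannelsKey: 2,                              // number of channels
    AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue  // 0x7f, the highest quality
]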
The recorder is initialized as follows; the listener is set up in exactly the same way, only with a different URL.
// Instantiate the recording device
var recordPath = NSTemporaryDirectory().stringByAppendingPathComponent("record.caf")
recordUrl = NSURL.fileURLWithPath(recordPath)
recoder = AVAudioRecorder(URL: recordUrl, settings: recoderSetting as [NSObject: AnyObject], error: nil)
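The listener is created the same way; below is a minimal sketch, with monitor.caf as an assumed file name. The one extra step is enabling metering, since the updateMeters() and peakPowerForChannel() calls used later only return meaningful values when meteringEnabled is true.
// Sketch: instantiate the listener, differing only in its URL (monitor.caf is an assumed name)
var monitorPath = NSTemporaryDirectory().stringByAppendingPathComponent("monitor.caf")
monitorUrl = NSURL.fileURLWithPath(monitorPath)
monitor = AVAudioRecorder(URL: monitorUrl, settings: recoderSetting as [NSObject: AnyObject], error: nil)
// Metering must be switched on for the decibel readings used in updateTimer()
monitor.meteringEnabled = true
monitor.prepareToRecord()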
Start recording
The core function is the recording itself. The principle is to monitor the decibel level of the sound and use a threshold to decide when to start and stop recording.
If the sound stays very quiet, do nothing.
If the sound gets loud, check whether recording is already in progress; if not, start recording.
If the sound then goes quiet, check whether recording is in progress; if it is, stop recording.
func updateTimer() {
    // Refresh the metering values
    self.monitor.updateMeters()
    // Get the peak decibel level of channel 0
    var power = self.monitor.peakPowerForChannel(0)
    println("--\(power)--")
    if (power > -30) {
        if (!self.recoder.recording) {
            println("Start recording")
            self.recoder.record()
        }
    } else {
        if (self.recoder.recording) {
            println("End recording")
            self.recoder.stop()
            self.play()
        }
    }
}
The printed output from a test run shows the decibel values being monitored: an extremely quiet room reads around -160, while a noisy environment is generally around -40.
Play sound
Once the recording is complete, the sound can be played back immediately.
func play() {
    timer.invalidate()
    monitor.stop()
    // Delete the listener's recording cache
    monitor.deleteRecording()
    player = AVAudioPlayer(contentsOfURL: recordUrl, error: nil)
    player.delegate = self
    player.play()
}
Stopping the timer, stopping the listener, and deleting the listener's cache are all reflected in this piece of code. It is recommended to set up the delegate, because even if the clip only plays once you will probably want to do some extra work after playback finishes, and the goal of this project is to loop the playback: after a clip has played, the listener should restart the whole process via the timer.
Extended operations
Make the class conform to AVAudioPlayerDelegate and implement the delegate method below; inside it, call the method that restarts the whole process.
func audioPlayerDidFinishPlaying(player: AVAudioPlayer!, successfully flag: Bool) {
    // Restart the timer
    self.setupTimer()
}

func setupTimer() {
    self.monitor.record()
    self.timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: "updateTimer", userInfo: nil, repeats: true)
}
This concludes a complete recording process.
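To tie the flow together, here is a rough sketch of how the pieces could be assembled when the program starts. The view-controller name, the viewDidLoad placement, and the file names are my assumptions; the rest simply restates the code from the sections above, and the remaining methods (updateTimer(), play(), and the delegate callback) are added to the class exactly as shown earlier.
// Sketch: one possible startup wiring (class name and placement are assumed)
import UIKit
import AVFoundation

class RecordViewController: UIViewController, AVAudioPlayerDelegate {
    var recoder: AVAudioRecorder!
    var monitor: AVAudioRecorder!
    var player: AVAudioPlayer!
    var timer: NSTimer!
    var recordUrl: NSURL!
    var monitorUrl: NSURL!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Audio session and quality settings, as in the initialization section
        AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
        let recoderSetting: [NSObject: AnyObject] = [
            AVSampleRateKey: 14400.0,
            AVFormatIDKey: Int(kAudioFormatAppleIMA4),
            AVNumberOfChannelsKey: 2,
            AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue
        ]
        // Recorder and listener, each with its own temporary file
        recordUrl = NSURL.fileURLWithPath(NSTemporaryDirectory().stringByAppendingPathComponent("record.caf"))
        monitorUrl = NSURL.fileURLWithPath(NSTemporaryDirectory().stringByAppendingPathComponent("monitor.caf"))
        recoder = AVAudioRecorder(URL: recordUrl, settings: recoderSetting, error: nil)
        monitor = AVAudioRecorder(URL: monitorUrl, settings: recoderSetting, error: nil)
        monitor.meteringEnabled = true
        // Start the listener and the polling timer
        setupTimer()
    }

    func setupTimer() {
        self.monitor.record()
        self.timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: "updateTimer", userInfo: nil, repeats: true)
    }

    // updateTimer(), play() and audioPlayerDidFinishPlaying(_:successfully:)
    // are added here exactly as shown earlier in the article.
}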
You can also add a special twist: Talking Tom Cat doesn't repeat your words exactly, it processes the sound before saying it back. If you want this kind of feature, you need to configure the player before playback starts and change some of its playback properties to get the effect. Most of these properties require a Bool switch to be turned on before they can be modified. For example, replace the play() call above with the following code:
// Allow the playback rate to be changed
player.enableRate = true
// Set the playback speed
player.rate = 2
player.play()
The valid range for the rate property is 0.5 to 2.0. It seems that other effects, such as changing the pitch, would require third-party libraries.
That covers how to implement automatic recording with Swift; I hope you find it useful.