Speech recognition makes everyday tasks easier: many things can be done with a voice command instead of manual input. There are currently several good open speech recognition platforms, notably the iFlytek platform and the Baidu voice platform. I personally prefer iFlytek, because its strength is recognizing long passages of speech with high accuracy, which is exactly what I need to enter a bank card number correctly. This post is mainly about using the iFlytek voice SDK, so let's take a detailed look at it. Useful starting points: 1. the iFlytek Open Platform; 2. the iFlytek iOS API documentation. Working with iFlytek, Step 1: register an account. This is an ordinary registration flow; just follow the steps in order.
After registering, create a new app (as in the first screenshot), add the "voice dictation" service to it, and under "Download SDK" select the iOS download. 2. Import the iFlytek SDK framework. After unzipping the downloaded SDK there are three folders. The doc folder is, of course, the development documentation. The important ones are the other two: the lib folder, which holds the iFlytek SDK library itself (this is what we want to import), and the sample folder, which contains the iFlytek demo project for iOS. Create a project, copy iFlyMSC.framework from the lib folder into the project directory, and then add the dependent libraries to the project, as shown. 3. Start speech recognition. This recognizer comes with its own UI prompt. First create a controller in the storyboard that jumps to two interfaces: one for speech recognition and one for text-to-speech. Connect the speech recognition view's outlet and button action to FirstViewController; the text-to-speech interface is wired up the same way. 3.1 Import the header files
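One detail the post does not mention: on iOS 10 and later the system refuses microphone access unless the app declares why it needs it. If your deployment target is that recent, add an `NSMicrophoneUsageDescription` entry to Info.plist before testing recognition. A minimal entry (the description string below is just an example; write your own) looks like:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used for speech recognition.</string>
```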
```objc
// In AppDelegate.m
#import "AppDelegate.h"
#import <iflyMSC/IFlySpeechUtility.h>
```
3.2 Log in to the iFlytek server. Before using speech recognition, the user must be authenticated, i.e. logged in to the iFlytek server. This takes two lines of code in application:didFinishLaunchingWithOptions:, which pass your app's AppID and authorize the login. The code is as follows:
```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.

    // Sign in to the iFlytek voice platform with your AppID
    NSString *appid = [NSString stringWithFormat:@"appid=%@", @"570f0a8b"];
    [IFlySpeechUtility createUtility:appid];

    return YES;
}
```

3.3 Write the speech recognition code in FirstViewController.m as follows:
```objc
#import "FirstViewController.h"
// Step 1: import the library headers
// Callback protocol for the iFlytek recognition view
#import <iflyMSC/IFlyRecognizerViewDelegate.h>
// The recognition view with a built-in UI
#import <iflyMSC/IFlyRecognizerView.h>
// Constants defined by the iFlytek speech framework
#import <iflyMSC/IFlySpeechConstant.h>

// Adopt the delegate protocol
@interface FirstViewController () <IFlyRecognizerViewDelegate>

// Speech recognition view object
@property (nonatomic, strong) IFlyRecognizerView *iflyRecognizerView;
// Mutable string that accumulates the recognition result
@property (nonatomic, strong) NSMutableString *result;
// Text view that displays the recognized content
@property (weak, nonatomic) IBOutlet UITextView *showContentTextView;

@end

@implementation FirstViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Create the recognition view, centered in the current view
    self.iflyRecognizerView = [[IFlyRecognizerView alloc] initWithCenter:self.view.center];
    // The delegate must be set so the callbacks are delivered
    self.iflyRecognizerView.delegate = self;
}

#pragma mark - Start recognition

- (IBAction)beginRecognise:(id)sender {
    [self startListening];
}

- (void)startListening {
    // Use the "iat" (dictation) domain, i.e. plain speech-to-text
    [self.iflyRecognizerView setParameter:@"iat" forKey:[IFlySpeechConstant IFLY_DOMAIN]];
    // Set the voice activity detection timeout before speech begins to 6000 ms
    [self.iflyRecognizerView setParameter:@"6000" forKey:[IFlySpeechConstant VAD_BOS]];
    // Set the end-of-speech detection timeout to 700 ms
    [self.iflyRecognizerView setParameter:@"700" forKey:[IFlySpeechConstant VAD_EOS]];
    // Set the sample rate to 8000 Hz
    [self.iflyRecognizerView setParameter:@"8000" forKey:[IFlySpeechConstant SAMPLE_RATE]];
    // Include punctuation in the result
    [self.iflyRecognizerView setParameter:@"1" forKey:[IFlySpeechConstant ASR_PTT]];
    // Return the result as plain text (rather than JSON or XML)
    [self.iflyRecognizerView setParameter:@"plain" forKey:[IFlySpeechConstant RESULT_TYPE]];
    // Cache the recorded audio in the Documents folder as temp.asr
    [self.iflyRecognizerView setParameter:@"temp.asr" forKey:[IFlySpeechConstant ASR_AUDIO_PATH]];
    // Set custom parameters
    [self.iflyRecognizerView setParameter:@"custom" forKey:[IFlySpeechConstant PARAMS]];

    [self.iflyRecognizerView start];
}
```

3.4 Delegate methods. To process the recognition result, implement the onResult:isLast: method of the IFlyRecognizerViewDelegate protocol.
Attention! The method here is onResult:, not onResults: — onResults: is the result callback of the UI-less recognizer, which parses speech without showing an interface prompt.
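For reference, the UI-less recognizer mentioned above is IFlySpeechRecognizer, whose delegate receives onResults:isLast: (note the plural). The sketch below shows the rough shape, assuming the standard iFlyMSC headers and the same parameter keys as above; PlainRecognizerController is a hypothetical class name, so consult the SDK docs for the exact interface before relying on it.

```objc
#import <iflyMSC/IFlySpeechRecognizer.h>
#import <iflyMSC/IFlySpeechRecognizerDelegate.h>
#import <iflyMSC/IFlySpeechConstant.h>

// Hypothetical controller adopting the UI-less recognizer's delegate protocol
@interface PlainRecognizerController () <IFlySpeechRecognizerDelegate>
@end

@implementation PlainRecognizerController

- (void)startListening {
    // The UI-less recognizer is a shared singleton
    IFlySpeechRecognizer *recognizer = [IFlySpeechRecognizer sharedInstance];
    recognizer.delegate = self;
    // Same dictation domain as the view-based recognizer
    [recognizer setParameter:@"iat" forKey:[IFlySpeechConstant IFLY_DOMAIN]];
    [recognizer startListening];
}

// Note the plural: onResults:isLast:, not onResult:isLast:
- (void)onResults:(NSArray *)results isLast:(BOOL)isLast {
    NSLog(@"partial result: %@", results);
}

- (void)onError:(IFlySpeechError *)error {
    NSLog(@"error = %@", error);
}

@end
```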
By default the callback data is JSON. We don't have to puzzle over how to parse it ourselves: iFlytek has already thought about this and officially provides an ISRDataHelper class to do the parsing. The code is as follows:
```objc
#pragma mark - Delegate methods

// Called on success
- (void)onResult:(NSArray *)resultArray isLast:(BOOL)isLast {
    self.result = [[NSMutableString alloc] init];
    NSDictionary *dic = [resultArray objectAtIndex:0];
    for (NSString *key in dic) {
        [self.result appendFormat:@"%@", key];
    }
    NSLog(@"%@---------", _result);
    // Append the new text to whatever the text view already shows
    self.showContentTextView.text = [NSString stringWithFormat:@"%@%@",
                                     self.showContentTextView.text, self.result];
}

// Called on failure
- (void)onError:(IFlySpeechError *)error {
    NSLog(@"error = %@", error);
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
```

4. Text-to-speech (type a sentence and have it read aloud)
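If you switch RESULT_TYPE to "json" instead of "plain", the dictation result follows iFlytek's documented layout, where the text is nested as {"ws":[{"cw":[{"w":"…"}]}]}. As an alternative to ISRDataHelper, a minimal hand-rolled parser using only Foundation could look like the sketch below (parseResult: is a hypothetical helper name, not part of the SDK):

```objc
// Minimal sketch: extract the recognized text from an iFlytek "iat"
// JSON result string using only Foundation.
- (NSString *)parseResult:(NSString *)json {
    NSData *data = [json dataUsingEncoding:NSUTF8StringEncoding];
    NSDictionary *root = [NSJSONSerialization JSONObjectWithData:data
                                                         options:0
                                                           error:nil];
    NSMutableString *text = [NSMutableString string];
    for (NSDictionary *ws in root[@"ws"]) {       // each word segment
        for (NSDictionary *cw in ws[@"cw"]) {     // candidate words
            [text appendString:cw[@"w"] ?: @""];  // candidate text
        }
    }
    return [text copy];
}
```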
The overall process is the same as above; the code is as follows.
```objc
#import "SecondViewController.h"
// Step 1: import the header files
// Callback protocol for speech synthesis
#import <iflyMSC/IFlySpeechSynthesizerDelegate.h>
// The speech synthesizer object
#import <iflyMSC/IFlySpeechSynthesizer.h>
// Constants defined by the iFlytek speech framework
#import <iflyMSC/IFlySpeechConstant.h>

// Adopt the delegate protocol
@interface SecondViewController () <IFlySpeechSynthesizerDelegate>

// Text field holding the text to be spoken
@property (weak, nonatomic) IBOutlet UITextField *input;
// The speech synthesizer object
@property (strong, nonatomic) IFlySpeechSynthesizer *synthesizer;

@end

@implementation SecondViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Create the synthesizer object
    self.synthesizer = [IFlySpeechSynthesizer sharedInstance];
    // Set the synthesizer's delegate
    self.synthesizer.delegate = self;
    // Set the key properties of the synthesizer;
    // 50 is a typical mid-range value for both rate and volume (0-100)
    [self.synthesizer setParameter:@"50" forKey:[IFlySpeechConstant SPEED]];
    [self.synthesizer setParameter:@"50" forKey:[IFlySpeechConstant VOLUME]];
    [self.synthesizer setParameter:@"xiaoyan" forKey:[IFlySpeechConstant VOICE_NAME]];
    [self.synthesizer setParameter:@"8000" forKey:[IFlySpeechConstant SAMPLE_RATE]];
    [self.synthesizer setParameter:@"temp.pcm" forKey:[IFlySpeechConstant TTS_AUDIO_PATH]];
    [self.synthesizer setParameter:@"custom" forKey:[IFlySpeechConstant PARAMS]];
}

#pragma mark - Start speaking

- (IBAction)beginRecgnise:(id)sender {
    [self.synthesizer startSpeaking:_input.text];
}

#pragma mark - Delegate method

- (void)onCompleted:(IFlySpeechError *)error {
}

@end
```

This is a very convenient third-party way to implement speech recognition!
iOS speech recognition (iFlytek)