Today I found an example on the Internet that implements speech recognition. I think it is quite interesting, so I am posting the code here to share with you:
Android uses RecognizerIntent to implement speech recognition, and the code is actually fairly simple. However, if no speech recognition activity can be found, an ActivityNotFoundException is thrown, so we need to catch this exception. In addition, speech recognition cannot be tested on the emulator. Because speech recognition sends the audio to Google's cloud for processing, it cannot recognize anything if the phone's network is not enabled, so make sure the network is turned on. And if the phone does not have the speech recognition feature at all, it simply cannot be used!
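Besides catching the exception, you can also check up front whether the device has any activity that handles RecognizerIntent.ACTION_RECOGNIZE_SPEECH. This is only a sketch of that check and is not part of the original example; it would sit inside the activity (for example, called from onCreate):

// Sketch (not in the original post): check whether any activity can handle
// speech recognition, instead of waiting for ActivityNotFoundException.
private boolean isSpeechRecognitionAvailable() {
    PackageManager pm = getPackageManager();
    List<ResolveInfo> activities = pm.queryIntentActivities(
            new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
    return !activities.isEmpty();
}

If this returns false, you could disable the button or show a message instead of starting the recognition intent.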
The code in RecognizerIntentActivity is as follows:
public class RecognizerIntentActivity extends Activity {

    private Button btnRecognizer;
    private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.reconizer);
        btnRecognizer = (Button) this.findViewById(R.id.btnRecognizer);
        btnRecognizer.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                try {
                    // Start speech recognition through an intent
                    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                    // Use the free-form language model
                    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                    // Prompt shown to the user when recognition starts
                    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Start speaking");
                    // Launch the recognition activity and wait for the result
                    startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
                } catch (ActivityNotFoundException e) {
                    // No speech recognition activity is installed on this device
                    e.printStackTrace();
                    Toast.makeText(getApplicationContext(), "Voice device not found",
                            Toast.LENGTH_LONG).show();
                }
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        // Callback that receives the recognition results returned from Google
        if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String resultString = "";
            for (int i = 0; i < results.size(); i++) {
                resultString += results.get(i);
            }
            Toast.makeText(this, resultString, Toast.LENGTH_LONG).show();
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
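The code above references a layout R.layout.reconizer containing a button with the id btnRecognizer. That layout file is not shown in the original post; a minimal version of it, with names assumed from the Java code, might look like this:

<!-- res/layout/reconizer.xml: minimal layout assumed from the Java code above -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <Button
        android:id="@+id/btnRecognizer"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start voice" />

</LinearLayout>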
The main principle is that the recorded voice is sent to the Google cloud, processed there, matched against the corresponding data, and the result is then sent back to the client.
Do not forget to add the network access permission to the manifest:
<uses-permission android:name="android.permission.INTERNET" />
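For reference, here is a sketch of where that permission line sits in AndroidManifest.xml (the package name and label are placeholders, not from the original post):

<!-- Sketch: placement of the INTERNET permission in AndroidManifest.xml -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.recognizer">

    <uses-permission android:name="android.permission.INTERNET" />

    <application android:label="RecognizerDemo">
        <activity android:name=".RecognizerIntentActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>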
Effect after running:
Click the start voice button and start talking (make sure your phone's network is on):
Then wait for the data to come back from the cloud. Because I am on a 2G card, it took a very long time and did not load; I will try again on the company's Wi-Fi when I am back at the office. Once the cloud data is returned, it is displayed in a Toast.