Android voice interaction based on Baidu Voice (recommended)

Source: Internet
Author: User

The project needed a voice wake-up feature. iFlytek speech recognition was already integrated, so the original plan was to use iFlytek's wake-up as well, but that feature is paid and the trial version is only valid for 35 days. I therefore switched to Baidu Voice, where all functions are free and fairly simple and practical: speech recognition, speech synthesis, and voice wake-up, which together make up a complete voice interaction pipeline.

Demo (animated GIF):

First, the wake-up function: saying a keyword triggers speech recognition, and a successful wake-up is confirmed by a voice prompt generated with Baidu's speech synthesis. Baidu speech recognition then switches automatically between online and offline recognition depending on network state; offline recognition can only match keywords that were imported in advance, and the first offline recognition requires a network connection for authorization. A successful recognition is confirmed by a voice prompt in the same way. The GIF has no sound; the toast text shows the content of each voice prompt.

A note on lifecycle. In the Baidu demo, wake-up listening is started in onResume() and stopped in onPause(). In my app, a UI dialog for speech recognition pops up after a successful wake-up, which pauses the activity and would therefore stop wake-up listening. If recognition succeeds, the dialog disappears, onResume() runs again, and wake-up listening restarts, so the next wake-up still works. If recognition fails, however, the bundled dialog switches to a retry screen, and you would have to tap Retry or Cancel by hand, which defeats the goal of hands-free voice interaction. To resolve this, I moved the call that stops wake-up listening into onStop(), so the app can still be awakened by voice even after a failed recognition.
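The reasoning above can be modeled in a few lines of plain Java (a toy sketch with no Android classes): the recognition dialog pauses the activity but does not stop it, so tearing down wake-up in onStop() instead of onPause() keeps listening alive behind the dialog.

```java
// Toy model of the lifecycle placement: onPause() fires when the recognition
// dialog covers the activity, onStop() only when the activity leaves the screen.
public class WakeUpLifecycle {
    private boolean listening;

    public void onResume() { listening = true; }   // start wake-up listening
    public void onPause()  { /* intentionally no-op: keep listening */ }
    public void onStop()   { listening = false; }  // stop wake-up listening

    public boolean isListening() { return listening; }

    public static void main(String[] args) {
        WakeUpLifecycle a = new WakeUpLifecycle();
        a.onResume();   // activity in front: wake-up on
        a.onPause();    // recognition dialog pops over the activity
        // still listening, so a failed recognition can be re-awakened by voice
        System.out.println(a.isListening()); // prints "true"
    }
}
```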

The detailed integration steps are in the official documentation, or see the following article:

http://www.jb51.net/article/97329.htm

Note: I use both speech recognition and speech synthesis here, so both SDKs are imported into the project. One small catch: normally, besides importing the jar packages, you also copy each SDK's assets and jniLibs folders into the project. I only copied the assets folder of the speech recognition SDK and left its jniLibs folder out, and everything worked. If I copied both the assets and the jniLibs folders of speech recognition and speech synthesis into the project, I got the following error. This kind of UnsatisfiedLinkError usually means the native .so libraries are missing or conflicting, though I never pinned down the exact cause here.

java.lang.UnsatisfiedLinkError: Native method not found: com.baidu.speech.easr.easrNativeJni.WakeUpFree:()I

MainActivity.java:

package com.example.administrator.baiduvoicetest;

import android.content.Intent;
import android.os.Bundle;
import android.os.Environment;
import android.support.v7.app.AppCompatActivity;
import android.text.TextUtils;
import android.util.AndroidRuntimeException;
import android.util.Log;
import android.view.View;
import android.widget.EditText;
import android.widget.TextView;
import android.widget.Toast;

import com.baidu.speech.EventListener;
import com.baidu.speech.EventManager;
import com.baidu.speech.EventManagerFactory;
import com.baidu.tts.auth.AuthInfo;
import com.baidu.tts.client.SpeechError;
import com.baidu.tts.client.SpeechSynthesizer;
import com.baidu.tts.client.SpeechSynthesizerListener;
import com.baidu.tts.client.TtsMode;

import org.json.JSONException;
import org.json.JSONObject;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.HashMap;

public class MainActivity extends AppCompatActivity {
    private TextView txtResult;
    private EditText mInput;
    private EventManager mWpEventManager;
    private SpeechSynthesizer mSpeechSynthesizer;
    private String mSampleDirPath;
    private static final String SAMPLE_DIR_NAME = "baiduTTS";
    private static final String SPEECH_FEMALE_MODEL_NAME = "bd_etts_speech_female.dat";
    private static final String SPEECH_MALE_MODEL_NAME = "bd_etts_speech_male.dat";
    private static final String TEXT_MODEL_NAME = "bd_etts_text.dat";
    private static final String LICENSE_FILE_NAME = "temp_license";
    private static final String ENGLISH_SPEECH_FEMALE_MODEL_NAME = "bd_etts_speech_female_en.dat";
    private static final String ENGLISH_SPEECH_MALE_MODEL_NAME = "bd_etts_speech_male_en.dat";
    private static final String ENGLISH_TEXT_MODEL_NAME = "bd_etts_text_en.dat";
    private static final String TAG = "MainActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        txtResult = (TextView) findViewById(R.id.txtResult);
        txtResult.setText("Please say a wake-up word: 'small hello' or 'Baidu'\n"
                + "Offline grammar recognition (first use requires network authorization)\n"
                + "After recognition starts you can say (offline phrases are defined by the grammar):\n"
                + "1. Call John (offline)\n"
                + "2. Call Dick (offline)\n"
                + "3. Open calculator (offline)\n"
                + "4. What's the weather like tomorrow (needs network)\n"
                + "...\n");
        mInput = (EditText) findViewById(R.id.input);
        mInput.setVisibility(View.GONE);
        initialEnv();
        initialTts();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Steps to enable wake-up:
        // 1) Create the wake-up event manager
        mWpEventManager = EventManagerFactory.create(MainActivity.this, "wp");
        // 2) Register the wake-up event listener
        mWpEventManager.registerListener(new EventListener() {
            @Override
            public void onEvent(String name, String params, byte[] data, int offset, int length) {
                Log.d(TAG, String.format("event: name=%s, params=%s", name, params));
                try {
                    JSONObject json = new JSONObject(params);
                    if ("wp.data".equals(name)) {
                        // Fired on every successful wake-up; the triggering word
                        // is in the "word" field of params
                        String word = json.getString("word");
                        txtResult.append("Wake-up success, wake-up word: " + word + "\r\n");
                        mInput.setText("Wake-up success, please say a command");
                        Toast.makeText(MainActivity.this, mInput.getText(), Toast.LENGTH_LONG).show();
                        speak();
                        // Wait 3 seconds so the synthesized prompt is not picked up
                        // by the recognizer itself
                        try {
                            Thread.sleep(3000);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        Intent intent = new Intent("com.baidu.action.RECOGNIZE_SPEECH");
                        // Offline grammar file (the offline module requires authorization);
                        // it can be generated with the custom semantics tool:
                        // http://yuyin.baidu.com/asr#m5
                        intent.putExtra("grammar", "asset:///baidu_speech_grammardemo.bsg");
                        // intent.putExtra("slot-data", yourSlots);
                        // slot-data: entries to fill into the grammar, e.g. contact names
                        startActivityForResult(intent, 1);
                    } else if ("wp.exit".equals(name)) {
                        txtResult.append("Wake-up stopped: " + params + "\r\n");
                    }
                } catch (JSONException e) {
                    throw new AndroidRuntimeException(e);
                }
            }
        });
        // 3) Tell the wake-up manager to start listening.
        // The wake-up resource file is evaluated and exported at http://yuyin.baidu.com/wake#m4
        HashMap<String, Object> params = new HashMap<String, Object>();
        params.put("kws-file", "assets:///wakeupdemo.bin");
        mWpEventManager.send("wp.start", new JSONObject(params).toString(), null, 0, 0);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            Bundle results = data.getExtras();
            ArrayList<String> results_recognition = results.getStringArrayList("results_recognition");
            txtResult.append("Recognition result (as array): " + results_recognition + "\n");
            // Convert the array-form result to a plain string,
            // e.g. [Call John] becomes Call John
            String str = results_recognition + "";
            String res = str.substring(str.indexOf("[") + 1, str.indexOf("]"));
            txtResult.append(res + "\n");
            mInput.setText("OK, executing " + res);
            speak();
            Toast.makeText(MainActivity.this, mInput.getText(), Toast.LENGTH_LONG).show();
        }
    }

    private void initialTts() {
        this.mSpeechSynthesizer = SpeechSynthesizer.getInstance();
        this.mSpeechSynthesizer.setContext(this);
        this.mSpeechSynthesizer.setSpeechSynthesizerListener(new SpeechSynthesizerListener() {
            @Override public void onSynthesizeStart(String s) {}
            @Override public void onSynthesizeDataArrived(String s, byte[] bytes, int i) {}
            @Override public void onSynthesizeFinish(String s) {}
            @Override public void onSpeechStart(String s) {}
            @Override public void onSpeechProgressChanged(String s, int i) {}
            @Override public void onSpeechFinish(String s) {}
            @Override public void onError(String s, SpeechError speechError) {}
        });
        // Text model file path (used by the offline engine)
        this.mSpeechSynthesizer.setParam(SpeechSynthesizer.PARAM_TTS_TEXT_MODEL_FILE,
                mSampleDirPath + "/" + TEXT_MODEL_NAME);
        // Acoustic model file path (used by the offline engine)
        this.mSpeechSynthesizer.setParam(SpeechSynthesizer.PARAM_TTS_SPEECH_MODEL_FILE,
                mSampleDirPath + "/" + SPEECH_FEMALE_MODEL_NAME);
        // Local license file path; the default path is used if unset. This points at a
        // temporary license file -- if you have a formal offline license under
        // [Application Management] this parameter is unnecessary and the line should be
        // removed (offline engine). If synthesis reports that the temporary license has
        // expired, delete the temporary license file.
        this.mSpeechSynthesizer.setParam(SpeechSynthesizer.PARAM_TTS_LICENCE_FILE,
                mSampleDirPath + "/" + LICENSE_FILE_NAME);
        // Replace with the App ID registered on the voice developer platform (offline authorization)
        this.mSpeechSynthesizer.setAppId("xxx" /* demo App ID; replace with your own */);
        // Replace with the API Key and Secret Key registered on the platform (online authorization)
        this.mSpeechSynthesizer.setApiKey("xxx", "xxx" /* demo keys; replace with your own */);
        // Speaker (online engine): valid values are 0, 1, 2, 3, ... (the server may add
        // more; the documentation is authoritative). 0 = ordinary female, 1 = ordinary
        // male, 2 = special male, 3 = emotional male voice, ...
        this.mSpeechSynthesizer.setParam(SpeechSynthesizer.PARAM_SPEAKER, "0");
        // Use the default MIX-mode synthesis strategy
        this.mSpeechSynthesizer.setParam(SpeechSynthesizer.PARAM_MIX_MODE,
                SpeechSynthesizer.MIX_MODE_DEFAULT);
        // Authorization check (only AuthInfo can verify that authorization succeeded).
        // auth() tests whether online/offline authorization was granted; once verified,
        // this AuthInfo block can be removed (the first call is slow) without affecting
        // normal use -- the SDK re-verifies authorization automatically during synthesis.
        AuthInfo authInfo = this.mSpeechSynthesizer.auth(TtsMode.MIX);
        if (authInfo.isSuccess()) {
            Toast.makeText(this, "auth success", Toast.LENGTH_LONG).show();
        } else {
            String errorMsg = authInfo.getTtsError().getDetailMessage();
            Toast.makeText(this, "auth failed errorMsg=" + errorMsg, Toast.LENGTH_LONG).show();
        }
        // Initialize TTS
        mSpeechSynthesizer.initTts(TtsMode.MIX);
        // Load offline English resources (enables offline English synthesis)
        int result = mSpeechSynthesizer.loadEnglishModel(
                mSampleDirPath + "/" + ENGLISH_TEXT_MODEL_NAME,
                mSampleDirPath + "/" + ENGLISH_SPEECH_FEMALE_MODEL_NAME);
        Toast.makeText(this, "loadEnglishModel result=" + result, Toast.LENGTH_LONG).show();
        // Print engine information and basic model information
        // printEngineInfo();
    }

    private void speak() {
        String text = mInput.getText().toString();
        if (TextUtils.isEmpty(text)) {
            text = "Welcome to the Baidu speech synthesis SDK, brought to you by Baidu Voice.";
            mInput.setText(text);
        }
        int result = this.mSpeechSynthesizer.speak(text);
        if (result < 0) {
            Toast.makeText(this,
                    "error, please look up the error code in the doc or at http://yuyin.baidu.com/docs/tts/122",
                    Toast.LENGTH_LONG).show();
        }
    }

    private void initialEnv() {
        if (mSampleDirPath == null) {
            String sdcardPath = Environment.getExternalStorageDirectory().toString();
            mSampleDirPath = sdcardPath + "/" + SAMPLE_DIR_NAME;
        }
        makeDir(mSampleDirPath);
        copyFromAssetsToSdcard(false, SPEECH_FEMALE_MODEL_NAME, mSampleDirPath + "/" + SPEECH_FEMALE_MODEL_NAME);
        copyFromAssetsToSdcard(false, SPEECH_MALE_MODEL_NAME, mSampleDirPath + "/" + SPEECH_MALE_MODEL_NAME);
        copyFromAssetsToSdcard(false, TEXT_MODEL_NAME, mSampleDirPath + "/" + TEXT_MODEL_NAME);
        copyFromAssetsToSdcard(false, LICENSE_FILE_NAME, mSampleDirPath + "/" + LICENSE_FILE_NAME);
        copyFromAssetsToSdcard(false, "english/" + ENGLISH_SPEECH_FEMALE_MODEL_NAME,
                mSampleDirPath + "/" + ENGLISH_SPEECH_FEMALE_MODEL_NAME);
        copyFromAssetsToSdcard(false, "english/" + ENGLISH_SPEECH_MALE_MODEL_NAME,
                mSampleDirPath + "/" + ENGLISH_SPEECH_MALE_MODEL_NAME);
        copyFromAssetsToSdcard(false, "english/" + ENGLISH_TEXT_MODEL_NAME,
                mSampleDirPath + "/" + ENGLISH_TEXT_MODEL_NAME);
    }

    private void makeDir(String dirPath) {
        File file = new File(dirPath);
        if (!file.exists()) {
            file.mkdirs();
        }
    }

    /**
     * Copies the resource files required by the sample to the SD card.
     * (The license file here is a temporary one; please register for a formal license.)
     *
     * @param isCover whether to overwrite an existing target file
     * @param source  asset path
     * @param dest    destination path
     */
    private void copyFromAssetsToSdcard(boolean isCover, String source, String dest) {
        File file = new File(dest);
        if (isCover || (!isCover && !file.exists())) {
            InputStream is = null;
            FileOutputStream fos = null;
            try {
                is = getResources().getAssets().open(source);
                String path = dest;
                fos = new FileOutputStream(path);
                byte[] buffer = new byte[1024];
                int size = 0;
                while ((size = is.read(buffer, 0, 1024)) >= 0) {
                    fos.write(buffer, 0, size);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (fos != null) {
                    try {
                        fos.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                try {
                    if (is != null) {
                        is.close();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
        // Stop wake-up listening
        mWpEventManager.send("wp.stop", null, null, 0, 0);
    }
}
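One fragile spot above: onActivityResult turns the result list into a string and slices between "[" and "]", which breaks if the recognized text itself contains brackets and keeps the commas when the recognizer returns several candidates. A safer sketch (plain Java, not tied to the Baidu SDK; the helper name is my own) is to take the first candidate directly from the list:

```java
import java.util.Arrays;
import java.util.List;

public class RecognitionResult {
    // Returns the top recognition candidate, or "" when nothing was recognized.
    public static String firstCandidate(List<String> results) {
        return (results == null || results.isEmpty()) ? "" : results.get(0);
    }

    public static void main(String[] args) {
        List<String> results = Arrays.asList("Call John", "Call Dick");
        System.out.println(firstCandidate(results)); // prints "Call John"
    }
}
```

In onActivityResult this would replace the substring() lines with `String res = firstCandidate(results_recognition);`.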

Note: the source code above is adapted from the official demo, with some unused methods removed to reduce the amount of code.

activity_main.xml: just one TextView and one EditText, very simple. The TextView displays results; the EditText holds the text for speech synthesis.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context="com.example.administrator.baiduvoicetest.MainActivity">

    <TextView
        android:id="@+id/txtResult"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:textSize="18dp"
        android:padding="8dp" />

    <EditText
        android:id="@+id/input"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:hint="input" />

</LinearLayout>

AndroidManifest.xml: add the permissions and an activity that serves as the UI for speech recognition.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.administrator.baiduvoicetest">

    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.WRITE_SETTINGS" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <!-- begin: Baidu speech SDK -->
        <!-- Offline setup guide:
             1. Register an application on the Baidu voice open platform: http://yuyin.baidu.com/app
             2. On the application's "Request Offline Authorization" page, fill in the package name
             3. Fill in the corresponding APP_ID in AndroidManifest.xml (or set the appId parameter in code)
             4. Download and integrate the resources appropriate for your scenario,
                see http://yuyin.baidu.com/docs/asr/131 and http://yuyin.baidu.com/asr/download
             Note that the offline functionality only "enhances" the online functionality
             and cannot be used standalone indefinitely (especially on first use). -->
        <!-- Fill in your real APP_ID / API_KEY / SECRET_KEY -->
        <meta-data
            android:name="com.baidu.speech.APP_ID"
            android:value="8888274" />
        <meta-data
            android:name="com.baidu.speech.API_KEY"
            android:value="fofognjferg3utzc4fddnxhm" />
        <meta-data
            android:name="com.baidu.speech.SECRET_KEY"
            android:value="63830985f5b05d2863f13ad07c7feaa3" />

        <service
            android:name="com.baidu.speech.VoiceRecognitionService"
            android:exported="false" />

        <activity
            android:name="com.baidu.voicerecognition.android.ui.BaiduASRDigitalDialog"
            android:configChanges="orientation|keyboardHidden|screenLayout"
            android:theme="@android:style/Theme.Dialog"
            android:exported="false"
            android:screenOrientation="portrait">
            <intent-filter>
                <action android:name="com.baidu.action.RECOGNIZE_SPEECH" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </activity>
        <!-- end: Baidu speech SDK -->

        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
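One caveat not covered by the original article: on Android 6.0+ (API 23), RECORD_AUDIO and WRITE_EXTERNAL_STORAGE are "dangerous" permissions, so declaring them in the manifest is not enough; they must also be requested at runtime or recording and model copying will silently fail. A minimal fragment (an assumption on my part, not from the source project; it belongs in onCreate() and uses the standard support-library helpers):

```java
// Request the dangerous permissions at runtime (API 23+); the result arrives
// in onRequestPermissionsResult(), which a real app should also handle.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.RECORD_AUDIO,
                    Manifest.permission.WRITE_EXTERNAL_STORAGE}, 0);
}
```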

That is my walkthrough of Android voice interaction based on Baidu Voice. I hope it helps you; if you have any questions, leave me a message and I will reply as soon as possible. Thanks as well to everyone for supporting the site!
