Foreword: I recently used iFlytek speech recognition in a project, so I put together a simple tutorial for anyone who wants to use it.
iFlytek speech usage steps (with Android Studio): speech to text.
1. First, register an account on the iFlytek Open Platform (http://www.xfyun.cn/):
2. Log in after registering:
3. Click to select "My Voice Cloud":
4. Click "Create New App" in the left sidebar:
5. After creating the app, copy its AppID:
6. Download the SDK: click "SDK Download Center" in the left sidebar:
Select the features and platforms you need:
7. Click "Download SDK", save the file, and extract it.
//-----------------------------------------------------------
8. Create a demo with Android Studio.
The layout file contents:
<Button
    android:id="@+id/btn_click"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Click to start iFlytek speech recognition" />

<EditText
    android:id="@+id/result"
    android:layout_below="@id/btn_click"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:hint="The recognized text will be shown here" />
Effect:
9. Copy all the .jar packages from the libs folder of the extracted SDK into the project's libs folder, then refresh Gradle (Eclipse users can copy them directly into libs and add them to the build path):
9.1 Create a new folder named jniLibs under the main folder, and copy into it every folder from the SDK's libs directory except the .jar files (these folders contain the .so files).
9.2 Copy the assets folder into main:
The effect is as follows:
Add the permissions to the manifest file (AndroidManifest.xml):
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
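Note that on Android 6.0 (API 23) and above, dangerous permissions such as RECORD_AUDIO must also be requested at runtime; declaring them in the manifest alone is not enough. A minimal sketch of such a check inside the activity (the method name and request code are placeholders of my choosing, not part of the iFlytek SDK):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

// Inside the activity class:
// An arbitrary request code used to identify this permission request.
private static final int REQUEST_RECORD_AUDIO = 1;

// Ask for the microphone permission if it has not been granted yet.
private void ensureAudioPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO},
                REQUEST_RECORD_AUDIO);
    }
}
```

You could call a method like this in onCreate before starting recognition, so the dialog does not fail silently on newer devices.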
In MainActivity, add the following code.
Note that you need to paste your own AppID into the code:
package zhaoq_qiang.xunfeidemo;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

import com.iflytek.cloud.RecognizerResult;
import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechError;
import com.iflytek.cloud.SpeechUtility;
import com.iflytek.cloud.ui.RecognizerDialog;
import com.iflytek.cloud.ui.RecognizerDialogListener;

import org.json.JSONArray;
import org.json.JSONObject;
import org.json.JSONTokener;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

    private Button btn_click;
    private EditText mResultText;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        btn_click = (Button) findViewById(R.id.btn_click);
        mResultText = (EditText) findViewById(R.id.result);
        // Replace the value after "=" with the AppID you applied for
        SpeechUtility.createUtility(this, SpeechConstant.APPID + "=YourAppID");
        btn_click.setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        btnVoice();
    }

    // TODO start talking:
    private void btnVoice() {
        RecognizerDialog dialog = new RecognizerDialog(this, null);
        dialog.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
        dialog.setParameter(SpeechConstant.ACCENT, "mandarin");
        dialog.setListener(new RecognizerDialogListener() {
            @Override
            public void onResult(RecognizerResult recognizerResult, boolean b) {
                printResult(recognizerResult);
            }

            @Override
            public void onError(SpeechError speechError) {
            }
        });
        dialog.show();
        Toast.makeText(this, "Please start talking", Toast.LENGTH_SHORT).show();
    }

    // Callback result:
    private void printResult(RecognizerResult results) {
        String text = parseIatResult(results.getResultString());
        // Append the recognized text to the EditText
        mResultText.append(text);
    }

    public static String parseIatResult(String json) {
        StringBuffer ret = new StringBuffer();
        try {
            JSONTokener tokener = new JSONTokener(json);
            JSONObject joResult = new JSONObject(tokener);
            JSONArray words = joResult.getJSONArray("ws");
            for (int i = 0; i < words.length(); i++) {
                // For each word, use the first candidate result by default
                JSONArray items = words.getJSONObject(i).getJSONArray("cw");
                JSONObject obj = items.getJSONObject(0);
                ret.append(obj.getString("w"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return ret.toString();
    }
}
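The result string handed to the callback is JSON: a "ws" array of word slots, each holding a "cw" array of candidates, and parseIatResult concatenates the first candidate's "w" field. To see the shape of that structure outside Android, here is a minimal plain-Java sketch that pulls out every "w" value with a regex (the sample string is hypothetical, and a regex is only an illustration, not a substitute for the org.json parsing used in the activity):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IatResultSketch {
    // Concatenate every "w" value in order of appearance, mimicking
    // parseIatResult's first-candidate behavior for this flat sample.
    public static String parseWords(String json) {
        StringBuilder ret = new StringBuilder();
        Matcher m = Pattern.compile("\"w\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            ret.append(m.group(1));
        }
        return ret.toString();
    }

    public static void main(String[] args) {
        // Hypothetical result string shaped like an iFlytek iat response
        String sample = "{\"ws\":[{\"cw\":[{\"w\":\"hello\"}]},"
                + "{\"cw\":[{\"w\":\" world\"}]}]}";
        System.out.println(parseWords(sample)); // hello world
    }
}
```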
Run the program:
The effect:
Demo: https://github.com/229457269/xunfeivoicedemo
Follow-up: the steps for publishing to an app market are not covered here. Once developers finish the project, package it, and pass review, they can publish the app and reach users through the market.