Android Studio: Quickly Integrating the iFlytek SDK for Text Reading


Today, let's learn how to implement a text-reading (text-to-speech) function in Android Studio by integrating the iFlytek SDK. Let's take it step by step:

Step one: Understand TTS voice services

TTS is short for Text To Speech, that is, "from text to speech". It applies both linguistics and psychoacoustics: with the support of a built-in chip and a specially designed neural network, it converts text intelligently into a natural speech stream.

TTS technology converts text in real time; a short passage can be converted in a matter of seconds. Thanks to its intelligent voice controller, the rhythm of the synthesized speech is smooth, so the message sounds natural to the listener, without the flat, jerky quality of typical machine output. The user hears a crisp voice with a consistent, fluent tone.

The speech service is divided into online synthesis and local synthesis. Local synthesis requires downloading a language pack, much like Google TTS; however, Google's TTS is unavailable on some phones or does not support Chinese. Here we use online synthesis, which needs a reasonably fast network connection; otherwise you will hear prompts about a slow network or paused playback.
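For comparison, here is a minimal sketch of local synthesis using Android's built-in `android.speech.tts.TextToSpeech` API (not part of the iFlytek SDK). The class name and the spoken text are illustrative; as noted above, the required language pack may be missing on some devices.

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

import java.util.Locale;

// Sketch: the platform's built-in TTS engine. Unlike iFlytek online synthesis,
// this depends on a locally installed language pack, which some devices lack
// for Chinese.
public class LocalTtsActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech mLocalTts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The engine initializes asynchronously; onInit is called when ready
        mLocalTts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = mLocalTts.setLanguage(Locale.SIMPLIFIED_CHINESE);
            if (result != TextToSpeech.LANG_MISSING_DATA
                    && result != TextToSpeech.LANG_NOT_SUPPORTED) {
                // QUEUE_FLUSH drops anything still queued and speaks immediately
                mLocalTts.speak("Hello", TextToSpeech.QUEUE_FLUSH, null, "utterance-1");
            }
        }
    }

    @Override
    protected void onDestroy() {
        mLocalTts.shutdown(); // release the engine when done
        super.onDestroy();
    }
}
```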

Step two: Understand the main objects and methods
    // Speech synthesis object
    private SpeechSynthesizer mTts;
    // Speech dictation object
    private SpeechRecognizer mIat;

    // Initialize TTS
    mTts = SpeechSynthesizer.createSynthesizer(IatDemo.this, mTtsInitListener);
    // The main call: "text" is the string to be read aloud
    mTts.startSpeaking(text, mTtsListener);

    // Dictation object parameter settings
    // Set the dictation engine
    mIat.setParameter(SpeechConstant.ENGINE_TYPE, mEngineType);
    // Set the result format to JSON
    mIat.setParameter(SpeechConstant.RESULT_TYPE, "json");
    String lag = mSharedPreferences.getString("iat_language_preference", "mandarin");
    if (lag.equals("en_us")) {
        // Set the language
        mIat.setParameter(SpeechConstant.LANGUAGE, "en_us");
    } else {
        // Set the language
        mIat.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
        // Set the accent
        mIat.setParameter(SpeechConstant.ACCENT, lag);
    }
    // Voice front endpoint (VAD begin-of-speech): how long the user may stay
    // silent before it counts as a timeout
    mIat.setParameter(SpeechConstant.VAD_BOS,
            mSharedPreferences.getString("iat_vadbos_preference", "4000"));
    // Voice rear endpoint (VAD end-of-speech): how long the user must stop
    // talking before input is considered finished and recording stops
    mIat.setParameter(SpeechConstant.VAD_EOS,
            mSharedPreferences.getString("iat_vadeos_preference", "1000"));
    // Punctuation: "0" returns results without punctuation, "1" with punctuation
    mIat.setParameter(SpeechConstant.ASR_PTT,
            mSharedPreferences.getString("iat_punc_preference", "1"));
    // Audio save path; PCM and WAV formats are supported. Saving to the SD card
    // requires the WRITE_EXTERNAL_STORAGE permission.
    // NOTE: the AUDIO_FORMAT parameter requires an updated SDK to take effect
    mIat.setParameter(SpeechConstant.AUDIO_FORMAT, "wav");
    mIat.setParameter(SpeechConstant.ASR_AUDIO_PATH,
            Environment.getExternalStorageDirectory() + "/msc/iat.wav");
    // Dynamic correction: "1" returns partial results during dictation, "0"
    // returns only the final result after dictation ends
    // NOTE: currently effective for online dictation only
    mIat.setParameter(SpeechConstant.ASR_DWA,
            mSharedPreferences.getString("iat_dwa_preference", "0"));
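Pulling just the synthesis side out of the mix above, a minimal TTS-only flow looks roughly like this. This is a sketch, not the article's exact code: `VOICE_NAME`, `SPEED`, and `VOLUME` are standard `SpeechConstant` keys, but the specific values shown are assumptions taken from common demo defaults.

```java
// Sketch of a minimal TTS-only flow: create the synthesizer, set a few common
// parameters, then speak. Assumes the iFlytek SDK has already been initialized
// with your app ID.
mTts = SpeechSynthesizer.createSynthesizer(context, new InitListener() {
    @Override
    public void onInit(int code) {
        if (code != ErrorCode.SUCCESS) {
            Log.e(TAG, "TTS init failed, code = " + code);
        }
    }
});
mTts.setParameter(SpeechConstant.ENGINE_TYPE, SpeechConstant.TYPE_CLOUD);
mTts.setParameter(SpeechConstant.VOICE_NAME, "xiaoyan"); // speaker
mTts.setParameter(SpeechConstant.SPEED, "50");           // speaking rate (assumed default)
mTts.setParameter(SpeechConstant.VOLUME, "80");          // volume (assumed default)
int code = mTts.startSpeaking("Hello, world", mTtsListener);
if (code != ErrorCode.SUCCESS) {
    Log.e(TAG, "startSpeaking failed, code = " + code);
}
```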
Step three: Implement the function

package com.jerehedu.administrator.mysounddemo;

import android.annotation.SuppressLint;
import android.app.Activity;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.Window;
import android.widget.EditText;
import android.widget.RadioGroup;
import android.widget.Toast;

import com.iflytek.cloud.ErrorCode;
import com.iflytek.cloud.InitListener;
import com.iflytek.cloud.RecognizerListener;
import com.iflytek.cloud.RecognizerResult;
import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechError;
import com.iflytek.cloud.SpeechRecognizer;
import com.iflytek.cloud.SpeechSynthesizer;
import com.iflytek.cloud.SynthesizerListener;
import com.iflytek.cloud.ui.RecognizerDialog;
import com.iflytek.cloud.ui.RecognizerDialogListener;
import com.iflytek.sunflower.FlowerCollector;

import org.json.JSONException;
import org.json.JSONObject;

import java.util.HashMap;
import java.util.LinkedHashMap;

public class IatDemo extends Activity implements OnClickListener {
    private static String TAG = IatDemo.class.getSimpleName();
    // Speech synthesis object
    private SpeechSynthesizer mTts;
    // Default voice
    private String voicer = "xiaoyan";
    // Buffering progress
    private int mPercentForBuffering = 0;
    // Playback progress
    private int mPercentForPlaying = 0;
    // Cloud/local radio button
    private RadioGroup mRadioGroup;
    // Speech dictation object
    private SpeechRecognizer mIat;
    // Speech dictation UI
    private RecognizerDialog mIatDialog;
    // Store dictation results in a HashMap
    private HashMap<String, String> mIatResults = new LinkedHashMap<String, String>();
    private EditText mResultText;
    private Toast mToast;
    private SharedPreferences mSharedPreferences;
    // Engine type
    private String mEngineType = SpeechConstant.TYPE_CLOUD;
    // Language-pack installation helper class
    ApkInstaller mInstaller;

    @SuppressLint("ShowToast")
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        setContentView(R.layout.iatdemo);
        initLayout();
        // Initialize the recognizer without UI; with a SpeechRecognizer object you
        // build your own interface from the callback messages
        mIat = SpeechRecognizer.createRecognizer(IatDemo.this, mInitListener);
        // Initialize the dictation dialog; if you only use UI dictation, there is
        // no need to create a SpeechRecognizer.
        // When using UI dictation, place the layout file and image resources
        // according to notice.txt in the SDK directory.
        mIatDialog = new RecognizerDialog(IatDemo.this, mInitListener);
        mSharedPreferences = getSharedPreferences("com.jredu.setting", Activity.MODE_PRIVATE);
        mToast = Toast.makeText(this, "", Toast.LENGTH_SHORT);
        mResultText = ((EditText) findViewById(R.id.iat_text));
        mInstaller = new ApkInstaller(IatDemo.this);
        mTts = SpeechSynthesizer.createSynthesizer(IatDemo.this, mTtsInitListener);
    }

    /**
     * Initialize the layout.
     */
    private void initLayout() {
        findViewById(R.id.iat_recognize).setOnClickListener(IatDemo.this);
        findViewById(R.id.read).setOnClickListener(IatDemo.this);
        // Choose cloud or local
        mEngineType = SpeechConstant.TYPE_CLOUD;
    }

    int ret = 0; // Function call return value

    @Override
    public void onClick(View view) {
        switch (view.getId()) {
            // Start dictation.
            // How to tell one round of dictation has ended: onResult with
            // isLast == true, or onError
            case R.id.iat_recognize:
                mResultText.setText(null); // Clear the displayed content
                mIatResults.clear();
                // Set parameters
                setParam();
                boolean isShowDialog = mSharedPreferences.getBoolean(
                        getString(R.string.pref_key_iat_show), true);
                if (isShowDialog) {
                    // Show the dictation dialog
                    mIatDialog.setListener(mRecognizerDialogListener);
                    mIatDialog.show();
                    showTip(getString(R.string.text_begin));
                } else {
                    // Do not show the dictation dialog
                    ret = mIat.startListening(mRecognizerListener);
                    if (ret != ErrorCode.SUCCESS) {
                        showTip("Dictation failed, error code: " + ret);
                    } else {
                        showTip(getString(R.string.text_begin));
                    }
                }
                break;
            // Start synthesis. When the onCompleted callback is received, synthesis
            // has finished and the audio has been generated; only the PCM format is
            // supported for the synthesized audio.
            case R.id.read:
                String text = ((EditText) findViewById(R.id.tts_text)).getText().toString();
                Log.d("==", text);
                // Set parameters
                setParam();
                int code = mTts.startSpeaking(text, mTtsListener);
                Log.d("======", "" + code);
                // /**
                //  * Interface that only saves audio without playing it; if you call
                //  * it, comment out the startSpeaking call above.
                //  * text: text to synthesize; uri: full path for the saved audio;
                //  * listener: callback interface
                //  */
                // String path = Environment.getExternalStorageDirectory() + "/tts.pcm";
                // int code = mTts.synthesizeToUri(text, path, mTtsListener);
                if (code != ErrorCode.SUCCESS) {
                    if (code == ErrorCode.ERROR_COMPONENT_NOT_INSTALLED) {
                        // Not installed; jump to the installation prompt page
                        mInstaller.install();
                    } else {
                        showTip("Speech synthesis failed, error code: " + code);
                    }
                }
                break;
            // Audio stream recognition
            default:
                break;
        }
    }

    /**
     * TTS initialization listener.
     */
    private InitListener mTtsInitListener = new InitListener() {
        @Override
        public void onInit(int code) {
            Log.d(TAG, "InitListener init() code = " + code);
            if (code != ErrorCode.SUCCESS) {
                showTip("Initialization failed, error code: " + code);
            } else {
                // Initialization succeeded; startSpeaking can now be called.
                // Note: some developers call startSpeaking right after creating the
                // synthesizer in onCreate; the correct practice is to move that
                // startSpeaking call here.
            }
        }
    };

    /**
     * Recognizer initialization listener.
     */
    private InitListener mInitListener = new InitListener() {
        @Override
        public void onInit(int code) {
            Log.d(TAG, "SpeechRecognizer init() code = " + code);
            if (code != ErrorCode.SUCCESS) {
                showTip("Initialization failed, error code: " + code);
            }
        }
    };

    /**
     * Synthesis callback listener.
     */
    private SynthesizerListener mTtsListener = new SynthesizerListener() {
        @Override
        public void onSpeakBegin() {
            showTip("Playback started");
        }

        @Override
        public void onSpeakPaused() {
            showTip("Playback paused");
        }

        @Override
        public void onSpeakResumed() {
            showTip("Playback resumed");
        }

        @Override
        public void onBufferProgress(int percent, int beginPos, int endPos, String info) {
            // Synthesis progress
            mPercentForBuffering = percent;
            showTip(String.format(getString(R.string.tts_toast_format),
                    mPercentForBuffering, mPercentForPlaying));
        }

        @Override
        public void onSpeakProgress(int percent, int beginPos, int endPos) {
            // Playback progress
            mPercentForPlaying = percent;
            showTip(String.format(getString(R.string.tts_toast_format),
                    mPercentForBuffering, mPercentForPlaying));
        }

        @Override
        public void onCompleted(SpeechError error) {
            if (error == null) {
                showTip("Playback complete");
            } else {
                showTip(error.getPlainDescription(true));
            }
        }

        @Override
        public void onEvent(int eventType, int arg1, int arg2, Bundle obj) {
            // The following code retrieves the session ID of the cloud session.
            // When something goes wrong, give the session ID to support staff so
            // they can query the session log and locate the cause of the error.
            // With local capabilities, the session ID is null.
            // if (SpeechEvent.EVENT_SESSION_ID == eventType) {
            //     String sid = obj.getString(SpeechEvent.KEY_EVENT_SESSION_ID);
            //     Log.d(TAG, "session id = " + sid);
            // }
        }
    };

    /**
     * Dictation listener.
     */
    private RecognizerListener mRecognizerListener = new RecognizerListener() {
        @Override
        public void onBeginOfSpeech() {
            // This callback means the SDK's internal recorder is ready and the user
            // can start speaking
            showTip("Start talking");
        }

        @Override
        public void onError(SpeechError error) {
            // Tips:
            // Error code 10118 (you did not speak) may mean the recording permission
            // is missing; prompt the user to grant the app recording permission.
            showTip(error.getPlainDescription(true));
        }

        @Override
        public void onEndOfSpeech() {
            // This callback means the tail endpoint of the speech was detected;
            // recognition has started and no more voice input is accepted
            showTip("End of speech");
        }

        @Override
        public void onResult(RecognizerResult results, boolean isLast) {
            Log.d(TAG, results.getResultString());
            printResult(results);
            if (isLast) {
                // TODO: last result
            }
        }

        @Override
        public void onVolumeChanged(int volume, byte[] data) {
            showTip("Currently speaking, volume: " + volume);
            Log.d(TAG, "Returned audio data: " + data.length);
        }

        @Override
        public void onEvent(int eventType, int arg1, int arg2, Bundle obj) {
            // Session ID retrieval, as in the synthesis listener above
            // if (SpeechEvent.EVENT_SESSION_ID == eventType) {
            //     String sid = obj.getString(SpeechEvent.KEY_EVENT_SESSION_ID);
            //     Log.d(TAG, "session id = " + sid);
            // }
        }
    };

    private void printResult(RecognizerResult results) {
        String text = JsonParser.parseIatResult(results.getResultString());
        String sn = null;
        // Read the sn field from the JSON result
        try {
            JSONObject resultJson = new JSONObject(results.getResultString());
            sn = resultJson.optString("sn");
        } catch (JSONException e) {
            e.printStackTrace();
        }
        mIatResults.put(sn, text);
        StringBuffer resultBuffer = new StringBuffer();
        for (String key : mIatResults.keySet()) {
            resultBuffer.append(mIatResults.get(key));
        }
        mResultText.setText(resultBuffer.toString());
        mResultText.setSelection(mResultText.length());
    }

    /**
     * Dictation UI listener.
     */
    private RecognizerDialogListener mRecognizerDialogListener = new RecognizerDialogListener() {
        public void onResult(RecognizerResult results, boolean isLast) {
            printResult(results);
        }

        /**
         * Recognition error callback.
         */
        public void onError(SpeechError error) {
            showTip(error.getPlainDescription(true));
        }
    };

    private void showTip(final String str) {
        mToast.setText(str);
        mToast.show();
    }

    /**
     * Parameter settings.
     */
    public void setParam() {
        // Clear parameters
        mIat.setParameter(SpeechConstant.PARAMS, null);
        // Set the dictation engine
        mIat.setParameter(SpeechConstant.ENGINE_TYPE, mEngineType);
        // Set the result format to JSON
        mIat.setParameter(SpeechConstant.RESULT_TYPE, "json");
        String lag = mSharedPreferences.getString("iat_language_preference", "mandarin");
        if (lag.equals("en_us")) {
            // Set the language
            mIat.setParameter(SpeechConstant.LANGUAGE, "en_us");
        } else {
            // Set the language
            mIat.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
            // Set the accent
            mIat.setParameter(SpeechConstant.ACCENT, lag);
        }
        // Voice front endpoint (VAD begin-of-speech): how long the user may stay
        // silent before it counts as a timeout
        mIat.setParameter(SpeechConstant.VAD_BOS,
                mSharedPreferences.getString("iat_vadbos_preference", "4000"));
        // Voice rear endpoint (VAD end-of-speech): how long the user must stop
        // talking before input is considered finished and recording stops
        mIat.setParameter(SpeechConstant.VAD_EOS,
                mSharedPreferences.getString("iat_vadeos_preference", "1000"));
        // Punctuation: "0" returns results without punctuation, "1" with punctuation
        mIat.setParameter(SpeechConstant.ASR_PTT,
                mSharedPreferences.getString("iat_punc_preference", "1"));
        // Audio save path; PCM and WAV formats are supported. Saving to the SD card
        // requires the WRITE_EXTERNAL_STORAGE permission.
        // NOTE: the AUDIO_FORMAT parameter requires an updated SDK to take effect
        mIat.setParameter(SpeechConstant.AUDIO_FORMAT, "wav");
        mIat.setParameter(SpeechConstant.ASR_AUDIO_PATH,
                Environment.getExternalStorageDirectory() + "/msc/iat.wav");
        // Dynamic correction: "1" returns partial results during dictation, "0"
        // returns only the final result after dictation ends
        // NOTE: currently effective for online dictation only
        mIat.setParameter(SpeechConstant.ASR_DWA,
                mSharedPreferences.getString("iat_dwa_preference", "0"));
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Release the connection on exit
        mIat.cancel();
        mIat.destroy();
    }

    @Override
    protected void onResume() {
        // Mobile data statistics and analysis
        FlowerCollector.onResume(IatDemo.this);
        FlowerCollector.onPageStart(TAG);
        super.onResume();
    }

    @Override
    protected void onPause() {
        // Mobile data statistics and analysis
        FlowerCollector.onPageEnd(TAG);
        FlowerCollector.onPause(IatDemo.this);
        super.onPause();
    }
}
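One step the class above leaves implicit: the iFlytek SDK must be initialized with your app ID before any synthesizer or recognizer is created, and the manifest needs the relevant permissions. A sketch, assuming an `Application` subclass; `"your_app_id"` is a placeholder for the app ID issued on the iFlytek open platform.

```java
import android.app.Application;

import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechUtility;

// Sketch: initialize the iFlytek SDK once, before createSynthesizer /
// createRecognizer is called anywhere in the app.
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // "your_app_id" is a placeholder; use the ID from your iFlytek console
        SpeechUtility.createUtility(this, SpeechConstant.APPID + "=your_app_id");
    }
}

// The manifest also needs (at least) these permissions for online dictation
// and for saving audio to external storage:
// <uses-permission android:name="android.permission.INTERNET" />
// <uses-permission android:name="android.permission.RECORD_AUDIO" />
// <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```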
