3D Voice Sky Balloon: using the Android voice service in Unity



When reprinting, please credit this article from the Big Glutinous Rice blog (http://blog.csdn.net/a396901990). Thank you for your support!


A few opening words:


This project is presented in four parts:

1. Create a rotating 3D ball: 3D Voice Sky Balloon (source code sharing) - create a rotating 3D ball

2. Obtain real-time weather information from the network through a weather service and generate the 3D ball dynamically: 3D Voice Sky Balloon (source code sharing) - create the 3D ball dynamically using the weather service

3. Android voice service and Unity message passing

4. Combining Unity3D and Android

The first two articles introduced how to create the 3D ball. This article describes how to use the Android voice service in Unity; the last article will introduce how to control the 3D ball with voice.

On the left is the result of running Unity on the computer (the effect to be achieved in this article).

On the right is Unity running on a mobile phone after combining Android and voice control (the final result, to be covered in the last article):



Voice Service:

The speech service I use is iFlytek (KEDA Xunfei). Their official website is http://open.voicecloud.cn/index.php/default/speechservice

Go to the official website and download the Android voice SDK (you need to register and fill in a bunch of miscellaneous things first, which is a little troublesome).

The download contains the development libraries and a demo. The demo runs as follows:



Introduction:

I only used speech dictation and speech synthesis. The following describes how to use these two functions.


Some initialization work:

Set some permissions in AndroidManifest.xml:
 
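The permission tags themselves did not survive here, so for reference this is the set of permissions the iFlytek demo typically declares; check the manifest in the demo you downloaded, since the exact list may differ by SDK version:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```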

Import the SDK:

the armeabi directory (the .so native libraries)

the Msc.jar package


Initialize the SDK with your app ID in code:

    SpeechUtility.createUtility(this, SpeechConstant.APPID + "=540dcea0");

Speech Dictation:

Speech dictation converts spoken words into text. The recognition rate is very accurate, with basically no errors.

Initialize the recognition object:

    // initialize the dictation object
    SpeechRecognizer mVoice = SpeechRecognizer.createRecognizer(this, mInitListener);

Set parameters:

    // set the language
    mVoice.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
    // set the accent (language region)
    mVoice.setParameter(SpeechConstant.ACCENT, "mandarin");
    // set the leading-silence timeout before speech (ms)
    mVoice.setParameter(SpeechConstant.VAD_BOS, "4000");
    // set the trailing silence that ends the utterance (ms)
    mVoice.setParameter(SpeechConstant.VAD_EOS, "1000");
    // turn punctuation in the result off
    mVoice.setParameter(SpeechConstant.ASR_PTT, "0");
    // set the path where the recorded audio is saved
    mVoice.setParameter(SpeechConstant.ASR_AUDIO_PATH, "/sdcard/iflytek/wavaudio.pcm");
Set the listener:

    private RecognizerListener recognizerListener = new RecognizerListener() {
        @Override
        public void onBeginOfSpeech() {
            showTip("start talking");
        }

        @Override
        public void onError(SpeechError error) {
            showTip(error.getPlainDescription(true));
        }

        @Override
        public void onEndOfSpeech() {
            showTip("end talk");
        }

        @Override
        public void onResult(RecognizerResult results, boolean isLast) {
            Log.d(TAG, results.getResultString());
            String text = JsonParser.parseIatResult(results.getResultString());
            mResultText.append(text);
            mResultText.setSelection(mResultText.length());
            if (isLast) {
                // TODO: final result
            }
        }

        @Override
        public void onVolumeChanged(int volume) {
            showTip("currently speaking, volume: " + volume);
        }

        @Override
        public void onEvent(int eventType, int arg1, int arg2, Bundle obj) {
        }
    };
Call:

    mVoice.startListening(recognizerListener);
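The JsonParser.parseIatResult helper used in onResult comes with the demo; it extracts the recognized words from the JSON result that the recognizer returns. As a rough illustration of what it does, here is a minimal plain-Java sketch that pulls the "w" (word) fields out of an iat-style result. The input string is a made-up example, and the real helper does proper JSON parsing rather than a regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IatResultSketch {
    // Matches the "w":"..." word entries in an iFlytek iat-style result.
    private static final Pattern WORD = Pattern.compile("\"w\"\\s*:\\s*\"([^\"]*)\"");

    // Concatenate every recognized word in order of appearance.
    public static String parseIatResult(String json) {
        StringBuilder sb = new StringBuilder();
        Matcher m = WORD.matcher(json);
        while (m.find()) {
            sb.append(m.group(1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A made-up iat-style fragment: two word slots, "hello" and "world".
        String json = "{\"sn\":1,\"ls\":true,\"ws\":["
                + "{\"cw\":[{\"sc\":0,\"w\":\"hello\"}]},"
                + "{\"cw\":[{\"sc\":0,\"w\":\"world\"}]}]}";
        System.out.println(parseIatResult(json)); // prints "helloworld"
    }
}
```

This also shows why onResult appends each partial result: the recognizer delivers the sentence in pieces, and only the call with isLast == true marks the end.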


Speech Synthesis:

Speech synthesis reads text aloud, converting it into spoken audio.

Its usage is similar to speech dictation, so you can just read the code; I will not waste your time here.

You can select the speaker gender and dialect when setting parameters.

I once used it to synthesize some swearing in a dialect, and it was hilarious to listen to...

PS: This is only a very brief introduction. If you really want to use the service, please study the sample code together with the documentation (both can be found in the downloaded package).



Use Android voice service in Unity:

The preceding section briefly described how to use the voice service. The question now is how to call this service from Unity.

The idea is to package the Android project as a plug-in (a library or service) and put it into the Unity project, so that we can call Android methods from Unity.

To follow this, you need to understand how Unity and Android projects are combined. I covered the relevant content in an earlier article:

Embedded Unity3D view in ANDROID applications (displaying 3D models)



Android code:

What we need to do is make the Android activity inherit from UnityPlayerActivity.

I will post the Android code below; based on the content described above, I believe you will understand it at a glance:

    public class MainActivity extends UnityPlayerActivity {

        // four buttons
        private Button voiceButton;
        private Button detailButton;
        private Button returnButton;
        private Button quitButton;

        private Map mapAllNameID;

        boolean isFaild = false;

        // voice result
        String voiceResult = null;

        // all province names
        private String[] strNamePro;
        // all city names
        private String[][] strNameCity;

        // speech dictation object
        private SpeechRecognizer mVoice;
        // speech synthesis object
        private SpeechSynthesizer mTts;

        // default speaker
        private String voicer = "xiaoyan";
        // engine type
        private String mEngineType = SpeechConstant.TYPE_CLOUD;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.test);

            // embed the Unity view in the Android layout
            View playerView = mUnityPlayer.getView();
            LinearLayout ll = (LinearLayout) findViewById(R.id.unity_layout);
            ll.addView(playerView);

            SpeechUtility.createUtility(this, SpeechConstant.APPID + "=540dcea0");
            // initialize the dictation object
            mVoice = SpeechRecognizer.createRecognizer(this, mInitListener);
            // initialize the synthesis object
            mTts = SpeechSynthesizer.createSynthesizer(this, mTtsInitListener);

            voiceButton = (Button) findViewById(R.id.voice_btn);
            voiceButton.setOnClickListener(new voiceListener());
            returnButton = (Button) findViewById(R.id.return_btn);
            returnButton.setOnClickListener(new returnListener());
            detailButton = (Button) findViewById(R.id.detail_btn);
            detailButton.setOnClickListener(new detailListener());
            quitButton = (Button) findViewById(R.id.quit_btn);
            quitButton.setOnClickListener(new quitListener());

            initVar();
        }

        public class voiceListener implements OnClickListener {
            @Override
            public void onClick(View arg0) {
                voiceResult = "";
                // set the recognition parameters, then start listening
                setParam();
                mVoice.startListening(voiceListener);
            }
        }

        public class returnListener implements OnClickListener {
            @Override
            public void onClick(View arg0) {
                UnityPlayer.UnitySendMessage("Main Camera", "back", "");
            }
        }

        public class detailListener implements OnClickListener {
            @Override
            public void onClick(View arg0) {
                UnityPlayer.UnitySendMessage("Main Camera", "detail", "");
            }
        }

        public class quitListener implements OnClickListener {
            @Override
            public void onClick(View arg0) {
                System.exit(0);
            }
        }

        public void quitApp(String str) {
            Toast.makeText(getApplicationContext(), "quit", Toast.LENGTH_SHORT).show();
            System.exit(0);
        }

        // dictation callback listener
        private RecognizerListener voiceListener = new RecognizerListener() {
            @Override
            public void onBeginOfSpeech() {
                Toast.makeText(getApplicationContext(), "start talking", Toast.LENGTH_SHORT).show();
            }

            @Override
            public void onError(SpeechError error) {
                Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show();
            }

            @Override
            public void onEndOfSpeech() {
                Toast.makeText(getApplicationContext(), "end talk", Toast.LENGTH_SHORT).show();
            }

            @Override
            public void onResult(RecognizerResult results, boolean isLast) {
                voiceResult = voiceResult + JsonParser.parseIatResult(results.getResultString());
                if (isLast) {
                    setSpeakParam();
                    mTts.startSpeaking(checkResult(voiceResult), mTtsListener);
                    // UnityPlayer.UnitySendMessage("Main Camera", "voice", getResults(voiceResult));
                }
            }

            @Override
            public void onVolumeChanged(int volume) {
                // Toast.makeText(getApplicationContext(), "currently speaking, volume: " + volume, Toast.LENGTH_SHORT).show();
            }

            @Override
            public void onEvent(int eventType, int arg1, int arg2, Bundle obj) {}
        };

        // synthesis callback listener
        private SynthesizerListener mTtsListener = new SynthesizerListener() {
            @Override
            public void onSpeakBegin() {}

            @Override
            public void onSpeakPaused() {}

            @Override
            public void onSpeakResumed() {}

            @Override
            public void onBufferProgress(int percent, int beginPos, int endPos, String info) {}

            @Override
            public void onSpeakProgress(int percent, int beginPos, int endPos) {}

            @Override
            public void onCompleted(SpeechError error) {
                if (error == null) {
                    if (!isFaild) {
                        // send the recognized voice result to Unity
                        UnityPlayer.UnitySendMessage("Main Camera", "voice", voiceResult);
                    }
                } else {
                    Toast.makeText(getApplicationContext(), "error", Toast.LENGTH_SHORT).show();
                }
            }

            @Override
            public void onEvent(int eventType, int arg1, int arg2, Bundle obj) {}
        };

        // set the speech recognition parameters
        public void setParam() {
            // set the language
            mVoice.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
            // set the accent (language region)
            mVoice.setParameter(SpeechConstant.ACCENT, "mandarin");
            // leading-silence timeout before speech
            mVoice.setParameter(SpeechConstant.VAD_BOS, "4000");
            // trailing silence that ends the utterance
            mVoice.setParameter(SpeechConstant.VAD_EOS, "1000");
            // no punctuation in the result
            mVoice.setParameter(SpeechConstant.ASR_PTT, "0");
            // audio save path
            mVoice.setParameter(SpeechConstant.ASR_AUDIO_PATH, "/sdcard/iflytek/wavaudio.pcm");
        }

        // set the speech synthesis parameters
        private void setSpeakParam() {
            if (mEngineType.equals(SpeechConstant.TYPE_CLOUD)) {
                mTts.setParameter(SpeechConstant.ENGINE_TYPE, SpeechConstant.TYPE_CLOUD);
                // set the speaker
                mTts.setParameter(SpeechConstant.VOICE_NAME, voicer);
            } else {
                mTts.setParameter(SpeechConstant.ENGINE_TYPE, SpeechConstant.TYPE_LOCAL);
                // leave the speaker empty; by default the local engine
                // selects it through the voice+ interface
                mTts.setParameter(SpeechConstant.VOICE_NAME, "");
            }
            // set the speed
            mTts.setParameter(SpeechConstant.SPEED, "50");
            // set the pitch
            mTts.setParameter(SpeechConstant.PITCH, "50");
            // set the volume
            mTts.setParameter(SpeechConstant.VOLUME, "50");
            // set the audio stream type of the player
            mTts.setParameter(SpeechConstant.STREAM_TYPE, "3");
        }

        // recognizer initialization listener
        private InitListener mInitListener = new InitListener() {
            @Override
            public void onInit(int code) {
                if (code != ErrorCode.SUCCESS) {
                    Toast.makeText(getApplicationContext(), "initialization failed, error code: " + code, Toast.LENGTH_SHORT).show();
                }
            }
        };

        // synthesizer initialization listener
        private InitListener mTtsInitListener = new InitListener() {
            @Override
            public void onInit(int code) {
                if (code != ErrorCode.SUCCESS) {
                    Toast.makeText(getApplicationContext(), "initialization failed, error code: " + code, Toast.LENGTH_SHORT).show();
                }
            }
        };
    }


The above is not all of the code. I have uploaded the complete Android code to GitHub:

https://github.com/a396901990/3D_Sphere/tree/feature/Voice_Weather_3D_Sphere

In the repository, 3DVoiceWeather is the Android project. You can import it into Eclipse to view it.
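The UnityPlayer.UnitySendMessage("Main Camera", "voice", result) calls in the activity follow a simple contract: a GameObject name, the name of a method on a script attached to that object, and a single string argument; Unity looks the method up and invokes it on the scene object. As a way to picture that routing, here is a plain-Java model using reflection. It is only an illustration of the contract, not Unity's actual implementation, and MainCameraScript is a hypothetical stand-in for the C# script on the camera:

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for the C# script attached to the "Main Camera"
// GameObject; in the real project this would be a MonoBehaviour exposing
// a voice(string) method.
class MainCameraScript {
    String lastVoiceResult;

    public void voice(String result) {
        lastVoiceResult = result;
    }
}

public class UnitySendMessageModel {
    // Models UnityPlayer.UnitySendMessage: look up a method by name on the
    // target script and invoke it with one string argument.
    public static void send(Object script, String methodName, String arg) throws Exception {
        Method m = script.getClass().getMethod(methodName, String.class);
        m.invoke(script, arg);
    }

    public static void main(String[] args) throws Exception {
        MainCameraScript script = new MainCameraScript();
        send(script, "voice", "beijing");
        System.out.println(script.lastVoiceResult); // prints "beijing"
    }
}
```

Because the lookup is by name, the GameObject and method names on the Android side must match the Unity scene exactly, which is why the code above always targets "Main Camera".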


Once the code above is in place, follow the tutorial linked earlier to add the Android project to Unity as a plug-in, then build the project into an APK in Unity to use it on your phone.

How to use voice to control 3D ball rotation will be introduced in the last article.
