HoloLens Input -- Voice Input in Unity

Source: Internet
Author: User
Description: This article is a translation of the official HoloLens document "Voice input in Unity" (https://developer.microsoft.com/en-us/windows/mixed-reality/voice_input_in_unity). The translation follows. Foreword:

Unity exposes three ways to add voice input to Unity applications.

Using KeywordRecognizer (one of the two types of PhraseRecognizer), your application can be given an array of string commands to listen for. Using GrammarRecognizer (the other type of PhraseRecognizer), your application can be given an SRGS file that defines a specific grammar to listen for. With DictationRecognizer, your application can listen for any word and provide the user with a note or other display of their speech.

Note: Only one of dictation or phrase recognition can be active at a time. This means that if a GrammarRecognizer or KeywordRecognizer is active, a DictationRecognizer cannot be active, and vice versa.


The steps are as follows:

Enable the speech capability

Unity must enable the Microphone capability to use speech features. Enable it as follows:

Unity -- Edit -- Project Settings -- Capabilities -- tick the Microphone option
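The same capability can also be toggled from an editor script; this is a minimal sketch, an addition of mine rather than part of the original walkthrough, using Unity's PlayerSettings.WSA.SetCapability editor API:

```csharp
#if UNITY_EDITOR
using UnityEditor;

public static class SpeechCapabilitySetup
{
    // Equivalent to ticking Microphone under Project Settings -- Capabilities
    [MenuItem("Tools/Enable Microphone Capability")]
    static void EnableMicrophone()
    {
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.Microphone, true);
    }
}
#endif
```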


Phrase recognition -- KeywordRecognizer

To make your application listen for specific phrases spoken by the user and then take some action, you need to: specify which phrases to listen for; use a KeywordRecognizer or GrammarRecognizer to handle the OnPhraseRecognized event; and take the action corresponding to the recognized phrase. The required namespace is UnityEngine.Windows.Speech:

using UnityEngine.Windows.Speech;
using System.Collections.Generic;
using System.Linq;


Then use a dictionary to store the keywords to be recognized:

KeywordRecognizer keywordRecognizer;
Dictionary<string, System.Action> keywords = new Dictionary<string, System.Action>();

Add any string as a keyword and add it to the dictionary (for example, "activate" below):

// Create keywords for keyword recognizer
keywords.Add("activate", () =>
{
    // action to be performed when this keyword is spoken
});

Then create a KeywordRecognizer to recognize them:

keywordRecognizer = new KeywordRecognizer(keywords.Keys.ToArray());

Finally, register the recognition event:

keywordRecognizer.OnPhraseRecognized += KeywordRecognizer_OnPhraseRecognized;

For example:

private void KeywordRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
{
    System.Action keywordAction;
    // If the keyword recognized is in the dictionary, call that Action.
    if (keywords.TryGetValue(args.text, out keywordAction))
    {
        keywordAction.Invoke();
    }
}

To start speech recognition:

keywordRecognizer.Start();

The overall simple case looks like this:

using UnityEngine;
using UnityEngine.Windows.Speech;
using System.Collections.Generic;
using System.Linq;

/// <summary>
/// Made by Bruce
/// </summary>
public class AudioCommand : MonoBehaviour
{
    KeywordRecognizer keywordRecognizer;
    Dictionary<string, System.Action> keywords = new Dictionary<string, System.Action>();

    public GameObject cube;

    void Start()
    {
        keywords.Add("MoveUp", () =>
        {
            cube.transform.localPosition += new Vector3(0, 1, 0);
        });
        keywords.Add("MoveDown", () =>
        {
            cube.transform.localPosition += new Vector3(0, -1, 0);
        });
        keywordRecognizer = new KeywordRecognizer(keywords.Keys.ToArray());
        keywordRecognizer.OnPhraseRecognized += KeywordRecognizer_OnPhraseRecognized;
        // Begin recognition
        keywordRecognizer.Start();
    }

    private void KeywordRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        System.Action keywordAction;
        if (keywords.TryGetValue(args.text, out keywordAction))
        {
            keywordAction.Invoke();
        }
    }
}

Speech input -- dictation (required namespace: UnityEngine.Windows.Speech)

Use DictationRecognizer to convert the user's speech to text. DictationRecognizer exposes dictation functionality and supports registering and listening for hypothesis and phrase-completed events, so you can give feedback to your users both while they speak and afterwards. The Start() and Stop() methods enable and disable dictation recognition, respectively. When you are done with the recognizer, call Dispose() to release the resources it uses; if they are not released before garbage collection, they are released automatically, at additional performance cost. It takes only a few steps to start dictation: create a new DictationRecognizer, handle the dictation events, and start the DictationRecognizer. Dictation also requires the InternetClient capability:

Unity -- Edit -- Project Settings -- Capabilities -- tick the InternetClient option


Voice input -- DictationRecognizer

Create a DictationRecognizer as follows:

dictationRecognizer = new DictationRecognizer();

There are four dictation events that can be subscribed to and handled to implement dictation behavior: DictationResult, DictationComplete, DictationHypothesis, and DictationError.

DictationResult

This event is typically fired at the end of a sentence, after the user pauses. The fully recognized string is returned here.

Register the DictationResult event:

dictationRecognizer.DictationResult += DictationRecognizer_DictationResult;

Then handle its callback:

private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
{
    // do something
}
DictationHypothesis

This event is fired continuously while the user is talking. As the recognizer listens, it provides text of what it has heard so far.

DictationComplete

This event is fired when the recognizer stops, whether from Stop() being called, a timeout occurring, or some other error.

DictationError

This event is fired when an error occurs.
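These three events are registered the same way as DictationResult. A minimal sketch of the handler signatures (assuming a dictationRecognizer field as in the examples in this article; DictationCompletionCause.TimeoutExceeded is the cause reported on timeout):

```csharp
dictationRecognizer.DictationHypothesis += (string text) =>
{
    // partial text, updated continuously while the user is speaking
};

dictationRecognizer.DictationComplete += (DictationCompletionCause cause) =>
{
    // fired when the recognizer stops: Stop(), a timeout, or another error
    if (cause == DictationCompletionCause.TimeoutExceeded)
    {
        // restart here if dictation should keep running
    }
};

dictationRecognizer.DictationError += (string error, int hresult) =>
{
    // fired when an error occurs; hresult carries the error code
};
```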


Tips

The Start() and Stop() methods enable and disable dictation recognition, respectively. When you are done with the recognizer, you must call Dispose() to release the resources it uses; if they are not released before garbage collection, they are released automatically, at additional performance cost. Timeouts occur after set periods of time, and you can check for them in the DictationComplete event. There are two timeouts to be aware of: if the recognizer starts and hears no audio for the first five seconds, it times out; if the recognizer has produced a result and then hears silence for 20 seconds, it times out.

Using phrase recognition and dictation together

If you want to use both phrase recognition and dictation in your application, you need to fully shut down one before you can start the other. If you have multiple KeywordRecognizers running, you can shut them all down at once with:

PhraseRecognitionSystem.Shutdown();

To restore all recognizers to their previous state after the DictationRecognizer stops, you can call:

PhraseRecognitionSystem.Restart();

You can also simply start a KeywordRecognizer, which will restart the PhraseRecognitionSystem.
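Putting the two calls together, a sketch of switching between the systems might look like this (assuming keywordRecognizer and dictationRecognizer fields as in the examples in this article):

```csharp
void SwitchToDictation()
{
    // Phrase recognition must be fully shut down before dictation starts
    PhraseRecognitionSystem.Shutdown();
    dictationRecognizer.Start();
}

void SwitchToKeywords()
{
    dictationRecognizer.Stop();
    // Restores all previously registered recognizers
    PhraseRecognitionSystem.Restart();
}
```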


The simple case looks like this:

using UnityEngine;
using UnityEngine.Windows.Speech;
using UnityEngine.UI;

public class AudioDiction : MonoBehaviour
{
    private DictationRecognizer dictationRecognizer;

    public Text showResultText;

    void Start()
    {
        dictationRecognizer = new DictationRecognizer();

        // This event is fired after the user pauses, typically at the end of a sentence.
        // The full recognized string is returned here.
        dictationRecognizer.DictationResult += DictationRecognizer_DictationResult;

        // This event is fired while the user is talking.
        // As the recognizer listens, it provides text of what it's heard so far.
        dictationRecognizer.DictationHypothesis += DictationRecognizer_DictationHypothesis;

        // This event is fired when the recognizer stops, whether from Stop() being called,
        // a timeout occurring, or some other error.
        dictationRecognizer.DictationComplete += DictationRecognizer_DictationComplete;

        // This event is fired when an error occurs.
        dictationRecognizer.DictationError += DictationRecognizer_DictationError;

        // Start dictation recognition
        dictationRecognizer.Start();
    }

    private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
    {
        // Custom behavior
        showResultText.text = text;
    }

    private void DictationRecognizer_DictationHypothesis(string text)
    {
        // Custom behavior
        showResultText.text = text;
    }

    private void DictationRecognizer_DictationComplete(DictationCompletionCause cause)
    {
        // Custom behavior
        showResultText.text = "Complete!";
    }

    private void DictationRecognizer_DictationError(string error, int hresult)
    {
        // Custom behavior
    }

    void OnDestroy()
    {
        // Release resources
        dictationRecognizer.DictationResult -= DictationRecognizer_DictationResult;
        dictationRecognizer.DictationComplete -= DictationRecognizer_DictationComplete;
        dictationRecognizer.DictationHypothesis -= DictationRecognizer_DictationHypothesis;
        dictationRecognizer.DictationError -= DictationRecognizer_DictationError;
        dictationRecognizer.Dispose();
    }
}







