Automatic microphone recording in Java
I have recently been studying speech recognition with Baidu's SDK. The SDK only covers the recognition part, so I needed to handle the audio myself: start recording automatically when sound comes in and save it to an audio file.

First, the code:
```java
import javax.sound.sampled.*;
import java.io.*;

public class EngineeCore {

    String filePath = "E:\\voice\\voice_cache.wav";
    AudioFormat audioFormat;
    TargetDataLine targetDataLine;
    boolean flag = true;

    private void stopRecognize() {
        flag = false;
        targetDataLine.stop();
        targetDataLine.close();
    }

    private AudioFormat getAudioFormat() {
        float sampleRate = 16000;   // or e.g. 44100
        int sampleSizeInBits = 16;
        int channels = 1;           // 1 = mono, 2 = stereo
        boolean signed = true;
        boolean bigEndian = false;
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    } // end getAudioFormat

    private void startRecognize() {
        try {
            // Obtain the specified audio format and a matching microphone line.
            audioFormat = getAudioFormat();
            DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
            targetDataLine = (TargetDataLine) AudioSystem.getLine(dataLineInfo);
            // Create a thread to capture the microphone data into an audio
            // file and start it running. It runs until silence is detected.
            // This method returns after starting the thread.
            flag = true;
            new CaptureThread().start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    } // end startRecognize method

    class CaptureThread extends Thread {
        public void run() {
            File audioFile = new File(filePath);
            AudioFileFormat.Type fileType = AudioFileFormat.Type.WAVE;
            // Amplitude threshold for "sound present".
            int weight = 2;
            // Counts consecutive quiet fragments, to decide when to stop.
            int downSum = 0;
            ByteArrayInputStream bais = null;
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            AudioInputStream ais = null;
            try {
                targetDataLine.open(audioFormat);
                targetDataLine.start();
                byte[] fragment = new byte[1024];
                ais = new AudioInputStream(targetDataLine);
                while (flag) {
                    targetDataLine.read(fragment, 0, fragment.length);
                    // When the last byte of the fragment exceeds the threshold,
                    // sound is coming in: start (or keep) buffering the bytes.
                    if (Math.abs(fragment[fragment.length - 1]) > weight || baos.size() > 0) {
                        baos.write(fragment);
                        System.out.println("first: " + fragment[0]
                                + ", last: " + fragment[fragment.length - 1]
                                + ", length: " + fragment.length);
                        // Check whether the voice has stopped.
                        if (Math.abs(fragment[fragment.length - 1]) <= weight) {
                            downSum++;
                        } else {
                            System.out.println("reset count");
                            downSum = 0;
                        }
                        // More than 20 quiet fragments in a row means no sound
                        // has come in for a while (the value is tunable).
                        if (downSum > 20) {
                            System.out.println("stop input");
                            break;
                        }
                    }
                }
                // Build an audio input stream from the buffered bytes.
                audioFormat = getAudioFormat();
                byte[] audioData = baos.toByteArray();
                bais = new ByteArrayInputStream(audioData);
                ais = new AudioInputStream(bais, audioFormat,
                        audioData.length / audioFormat.getFrameSize());
                // Write out the final file.
                System.out.println("start generating voice file");
                AudioSystem.write(ais, fileType, audioFile);
                downSum = 0;
                stopRecognize();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Close the streams.
                try {
                    ais.close();
                    bais.close();
                    baos.reset();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        } // end run
    } // end inner class CaptureThread
}
```
Next, a quick test:

```java
public static void main(String[] args) {
    EngineeCore engineeCore = new EngineeCore();
    engineeCore.startRecognize();
}
```
When sound enters the microphone, the absolute value of the first or last byte of the array read from targetDataLine increases (which end depends on the audio format parameters, such as bigEndian). The quieter the input, the smaller the absolute value.
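To see why the last byte tracks volume, note that the capture format is 16-bit little-endian PCM, so the second byte of each sample pair is the high-order byte. A minimal sketch (the class and method names here are illustrative, not from the article):

```java
public class PcmSampleDemo {
    // Decode one 16-bit little-endian PCM sample from two bytes.
    // In little-endian order the second byte is the high-order byte,
    // which is why its absolute value grows with the volume.
    static short toSample(byte lo, byte hi) {
        return (short) ((hi << 8) | (lo & 0xFF));
    }

    public static void main(String[] args) {
        // A quiet sample: the high byte is near zero.
        System.out.println(toSample((byte) 0x10, (byte) 0x00)); // 16
        // A loud sample: the high byte is large.
        System.out.println(toSample((byte) 0x10, (byte) 0x20)); // 8208
    }
}
```

With bigEndian set to true the byte order would be reversed, and the first byte of each pair would carry the magnitude instead.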
Recording then starts: audio data read from targetDataLine is accumulated in a ByteArrayOutputStream. When the volume stays below the threshold (weight) for a while, meaning no sound is coming in, recording ends. The byte array is then retrieved from the ByteArrayOutputStream, converted to audio, and saved to a local file.
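The final conversion step can be isolated into a small sketch: wrap the raw PCM bytes in an AudioInputStream and let AudioSystem encode them as WAV. The helper name pcmToWav is my own; the format matches the article's getAudioFormat() (16 kHz, 16-bit, mono, little-endian):

```java
import javax.sound.sampled.*;
import java.io.*;

public class WavWriteDemo {
    // Wrap raw PCM bytes in an AudioInputStream and encode them as WAV.
    // The format must match the one used during capture.
    static byte[] pcmToWav(byte[] pcm) throws IOException {
        AudioFormat format = new AudioFormat(16000, 16, 1, true, false);
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(pcm), format,
                pcm.length / format.getFrameSize());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] wav = pcmToWav(new byte[3200]); // 0.1 s of silence
        // A WAV file begins with the ASCII tag "RIFF".
        System.out.println("" + (char) wav[0] + (char) wav[1]
                + (char) wav[2] + (char) wav[3]); // RIFF
    }
}
```

Writing to a ByteArrayOutputStream here instead of a File makes the sketch runnable without a microphone or a writable disk path; the article's code writes straight to the File instead.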
Note: the byte array read from targetDataLine cannot be fed directly to a speech recognition service such as Baidu's. It must first be converted into an audio file; the bytes of that file are then read back and sent for recognition.
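Reading the finished file back is a one-liner with java.nio. The sketch below uses a temporary stand-in file so it runs anywhere; in practice you would read the article's E:\voice\voice_cache.wav, and the resulting byte array (a complete file, headers included) is what would be handed to the recognition SDK:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadWavBytes {
    // Hypothetical helper: read a finished audio file into a byte array.
    static byte[] readAudio(Path file) throws IOException {
        return Files.readAllBytes(file);
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo with a temporary stand-in file.
        Path wav = Files.createTempFile("voice_cache", ".wav");
        Files.write(wav, new byte[]{'R', 'I', 'F', 'F'});
        byte[] audio = readAudio(wav);
        System.out.println("bytes read: " + audio.length); // bytes read: 4
        Files.deleteIfExists(wav);
    }
}
```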