Implementing a small Siri in less than one hundred lines of Python

Source: Internet
Author: User
Tags: modulus


Preface

If you want an easy way to understand the core of the feature computation, I recommend first reading my earlier article on song recognition: http://www.bkjia.com/article/97305.htm

This article implements a simple spoken-command recognition program. The core of the algorithm is audio feature extraction plus the DTW algorithm for matching. Of course, code like this is nowhere near commercial quality; it is just for fun.

Design Concept

Even for a small project, we should clarify the idea before starting. Song recognition is not that hard once feature extraction is solved (see my song-recognition article), but speech recognition is harder: a recorded song is always the same, while human speech constantly varies. Two people will pronounce the same phrase, say "open sesame", in quite different ways. The timing within a recording also differs: one person may blurt the words out right at the start, another may only finish them at the very end of the recording. That is the hard part.

Algorithm flow: record the audio, extract FFT-modulus features from it, compute the DTW distance to each pre-recorded command template, and execute the task bound to the closest template. (The original flowchart image is not reproduced here.)
Feature Extraction

Just like in the song-recognition article, we split each second of audio into 40 blocks, perform a Fourier transform on each block, and take the modulus. Unlike the song article, though, we stop there: the FFT modulus itself is used directly as the feature, with no further extraction.

If you don't follow what I mean, read the source code below or the song-recognition article.
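The block-splitting and FFT step can be sketched as a small standalone function. This is only a sketch: `extract_features` is a hypothetical helper name, and the full program below does the same work inside its `voice` class.

```python
import numpy as np

def extract_features(samples, framerate, frames_per_second=40):
    # Split the audio into fixed-size blocks (40 per second of audio)
    # and use the modulus of each block's FFT as its feature vector.
    block_size = framerate // frames_per_second
    features = []
    for i in range(0, len(samples) - block_size, block_size):
        block = samples[i:i + block_size]
        features.append(np.abs(np.fft.fft(block)))  # FFT modulus
    return features
```

At a sample rate of 8000 Hz, for example, each block holds 200 samples, so every feature vector has 200 components.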

DTW algorithm

DTW stands for Dynamic Time Warping. The algorithm solves the problem of matching recordings whose pronunciations differ in length and position.

The algorithm's input is the feature vectors of two audio clips: A: [fp1, fp2, fp3, ..., fpM1] and B: [fp1, fp2, fp3, ..., fpM2].
Group A has M1 feature vectors and group B has M2. Each element of a group is one of the FFT-modulus vectors computed from the 40-blocks-per-second split described above. The cost of matching any pair of fp's is their Euclidean distance.

Let D(fpa, fpb) denote the distance between two features.
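In code, this pairwise cost is a one-liner with numpy; the full program below defines the same helper.

```python
import numpy as np

def compute_distance_vec(vec1, vec2):
    # Euclidean distance between two FFT-modulus feature vectors
    return np.linalg.norm(vec1 - vec2)
```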

Then we can draw a grid in which point (i, j) pairs the i-th feature of A with the j-th feature of B. (The original figure is not reproduced here.)

We need to walk from (1, 1) to (M1, M2). There are many possible paths, and each path is one way of aligning positions in the two clips. Our goal is to minimize the total cost of the walk, which guarantees that the alignment we find is the closest one possible.

We compute it like this: first, the accumulated cost for the points on the two axes (the first row and the first column) can be computed directly by summing along the axis. Then, for an interior point:

D(i, j) = min{ D(i-1, j) + d(fpi, fpj), D(i, j-1) + d(fpi, fpj), D(i-1, j-1) + 2 * d(fpi, fpj) }

where d(fpi, fpj) is the Euclidean distance between the i-th feature of A and the j-th feature of B.

Why does the diagonal step from (i-1, j-1) to (i, j) cost twice the pairwise distance? Because the alternative routes each traverse two sides of a unit square, while the diagonal cuts straight across it in a single move; doubling the diagonal's cost keeps the two kinds of path comparable.

Filling the table by this rule, the final value D(M1, M2) is the distance between the two audio clips.
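The whole recurrence fits in a few lines. Here is a minimal sketch under the description above; `dtw_distance` is a hypothetical name, and the real program below delegates to a third-party dtw module instead.

```python
import numpy as np

def dtw_distance(A, B, dist):
    # D[i][j] = minimal accumulated cost of aligning A[:i+1] with B[:j+1]
    M1, M2 = len(A), len(B)
    D = np.zeros((M1, M2))
    D[0][0] = dist(A[0], B[0])
    for i in range(1, M1):  # first column: accumulate straight along one axis
        D[i][0] = D[i - 1][0] + dist(A[i], B[0])
    for j in range(1, M2):  # first row: accumulate along the other axis
        D[0][j] = D[0][j - 1] + dist(A[0], B[j])
    for i in range(1, M1):
        for j in range(1, M2):
            c = dist(A[i], B[j])
            D[i][j] = min(D[i - 1][j] + c,
                          D[i][j - 1] + c,
                          D[i - 1][j - 1] + 2 * c)  # diagonal advances both, so 2*c
    return D[M1 - 1][M2 - 1]
```

For identical inputs the accumulated cost is 0; the smaller the returned value, the closer the two recordings are.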


Source code and comments

The code is Python 2 and depends on pyaudio, numpy, and a third-party dtw module.

# coding=utf8
import os
import wave

import dtw
import numpy as np
import pyaudio


def compute_distance_vec(vec1, vec2):
    # Euclidean distance between two feature vectors
    return np.linalg.norm(vec1 - vec2)


class record():
    def record(self, CHUNK=44100, FORMAT=pyaudio.paInt16, CHANNELS=2,
               RATE=44100, RECORD_SECONDS=200,
               WAVE_OUTPUT_FILENAME="record.wav"):
        # record audio from the microphone and save it to a wav file
        p = pyaudio.PyAudio()
        stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                        input=True, frames_per_buffer=CHUNK)
        frames = []
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        stream.stop_stream()
        stream.close()
        p.terminate()
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(''.join(frames))
        wf.close()


class voice():
    def loaddata(self, filepath):
        try:
            f = wave.open(filepath, 'rb')
            params = f.getparams()
            self.nchannels, self.sampwidth, self.framerate, self.nframes = params[:4]
            str_data = f.readframes(self.nframes)
            self.wave_data = np.fromstring(str_data, dtype=np.short)
            self.wave_data.shape = -1, self.sampwidth
            self.wave_data = self.wave_data.T  # keep the raw sample array
            f.close()
            self.name = os.path.basename(filepath)  # remember the file name
            return True
        except:
            raise IOError, 'File Error'

    def fft(self, frames=40):
        # split the audio into 40 blocks per second and run an FFT on each block
        self.fft_blocks = []
        blocks_size = self.framerate / frames
        for i in xrange(0, len(self.wave_data[0]) - blocks_size, blocks_size):
            self.fft_blocks.append(np.abs(np.fft.fft(self.wave_data[0][i:i + blocks_size])))

    @staticmethod
    def play(filepath):
        # play back a wav file
        chunk = 1024
        wf = wave.open(filepath, 'rb')
        p = pyaudio.PyAudio()
        stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                        channels=wf.getnchannels(),
                        rate=wf.getframerate(),
                        output=True)
        while True:
            data = wf.readframes(chunk)
            if data == "":
                break
            stream.write(data)
        stream.close()
        p.terminate()


if __name__ == '__main__':
    r = record()
    r.record(RECORD_SECONDS=3, WAVE_OUTPUT_FILENAME='record.wav')
    v = voice()
    v.loaddata('record.wav')
    v.fft()
    # compare the new recording against every template wav in the directory
    file_list = os.listdir(os.getcwd())
    res = []
    for i in file_list:
        if i.split('.')[1] == 'wav' and i.split('.')[0] != 'record':
            temp = voice()
            temp.loaddata(i)
            temp.fft()
            res.append((dtw.dtw(v.fft_blocks, temp.fft_blocks, compute_distance_vec)[0], i))
    res.sort()
    print res
    if res[0][1].find('open_qq') != -1:
        os.system('C:\program\Tencent\QQ\Bin\QQScLauncher.exe')  # my QQ path
    elif res[0][1].find('zhimakaimen') != -1:
        os.system('chrome.exe')  # the browser's directory is already in PATH
    elif res[0][1].find('play_music') != -1:
        voice.play('C:\data\music\audio\92.16.wav')  # play a piece of music
    # r = record()  # uncomment to record new command templates

You can use the record method to record several command words in advance; try saying each one in different tones and rhythms to improve accuracy. Then name the files accordingly: from whichever template file matches the new recording most closely, you know which command was spoken, and you can trigger the corresponding task.

This is a Demo Video: http://www.iqiyi.com/w_19ruisynsd.html

Summary

That is all for this article. I hope it helps you learn or use Python. If you have any questions, please leave a message.
