This article shows how Python can implement a little Siri in under one hundred lines of code. The walkthrough is fairly detailed and may serve as a useful reference; let's take a look.
Preface
If you want to understand the core feature computation more easily, I recommend first reading my previous article on song recognition. Portal: www.jb51.net/article/97305.htm
This article implements a simple command-word recognition program. The core of the algorithm is audio feature extraction plus DTW (dynamic time warping) for matching. Of course, code like this is nowhere near commercial quality; it is just for fun.
Design Concept
Even for a small project, we need to clarify our ideas before starting. Recognizing audio is not the hard part; the difficulty lies in feature extraction, as discussed in my song-recognition article. Speech recognition is harder still, because a piece of music is always fixed while human speech varies constantly. For example, different people pronounce the same command, say "open sesame", with different accents, speeds, and stress. In addition, the timing within the recording differs: the speaker may blurt the words out right at the start, or not say them until the very end of the recording. This is what makes it difficult.
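DTW is exactly what handles this timing problem: it aligns two feature sequences that may be stretched or compressed in time and returns the minimal cumulative alignment cost. Below is a minimal illustrative sketch of DTW, not the `dtw` library the source code imports; the function and variable names are my own.

```python
import numpy as np

def dtw_distance(seq_a, seq_b, dist):
    """Return the minimal cumulative alignment cost between two
    feature sequences that may differ in length and local tempo."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            # each cell extends the cheapest of: match, insertion, deletion
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]

# Two versions of the "same" signal, one stretched in time:
a = [np.array([x]) for x in [0, 1, 2, 1, 0]]
b = [np.array([x]) for x in [0, 0, 1, 1, 2, 2, 1, 0]]
print(dtw_distance(a, b, lambda u, v: np.linalg.norm(u - v)))  # prints 0.0
```

Even though the two sequences have different lengths, the warping path absorbs the stretching, so the "same command spoken slower" still matches with a low cost.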
Algorithm flow: record the command → split the audio into blocks and extract FFT features → match the recording against pre-recorded command templates with DTW → execute the action mapped to the closest template.
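The feature-extraction step in the listing below cuts the signal into fixed-size blocks (40 blocks per second) and keeps the FFT magnitude of each block as its feature vector. Here is a self-contained sketch of that idea; the function name and the test tone are illustrative.

```python
import numpy as np

def fft_features(samples, framerate, frames_per_second=40):
    """Split `samples` into fixed-size blocks and return the FFT
    magnitude spectrum of each block as its feature vector."""
    block_size = framerate // frames_per_second
    blocks = []
    for start in range(0, len(samples) - block_size, block_size):
        block = samples[start:start + block_size]
        blocks.append(np.abs(np.fft.fft(block)))  # magnitude spectrum
    return blocks

# 1 second of a 440 Hz tone sampled at 8000 Hz:
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
feats = fft_features(tone, rate)
print(len(feats), len(feats[0]))  # prints 39 200
```

Each recording thus becomes a sequence of spectral vectors, which is exactly the input shape DTW needs for matching.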
Source code and comments
```python
# coding=utf8
# Ported to Python 3; requires pyaudio and a dtw package exposing dtw.dtw(x, y, dist)
import os
import wave

import numpy as np
import pyaudio
import dtw


def compute_distance_vec(vec1, vec2):
    # Euclidean distance between two feature vectors
    return np.linalg.norm(vec1 - vec2)


class record():
    def record(self, CHUNK=44100, FORMAT=pyaudio.paInt16, CHANNELS=2,
               RATE=44100, RECORD_SECONDS=200,
               WAVE_OUTPUT_FILENAME="record.wav"):
        # record audio from the microphone and save it as a wav file
        p = pyaudio.PyAudio()
        stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                        input=True, frames_per_buffer=CHUNK)
        frames = []
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            data = stream.read(CHUNK)
            frames.append(data)
        stream.stop_stream()
        stream.close()
        p.terminate()
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(b''.join(frames))
        wf.close()


class voice():
    def loaddata(self, filepath):
        try:
            f = wave.open(filepath, 'rb')
            params = f.getparams()
            self.nchannels, self.sampwidth, self.framerate, self.nframes = params[:4]
            str_data = f.readframes(self.nframes)
            self.wave_data = np.frombuffer(str_data, dtype=np.short)
            self.wave_data = self.wave_data.reshape(-1, self.sampwidth).T  # store the raw sample array
            f.close()
            self.name = os.path.basename(filepath)  # remember the file name
            return True
        except Exception:
            raise IOError('File error')

    def fft(self, frames=40):
        # split the audio into `frames` blocks per second, then Fourier-transform each block
        self.fft_blocks = []
        blocks_size = self.framerate // frames
        for i in range(0, len(self.wave_data[0]) - blocks_size, blocks_size):
            self.fft_blocks.append(
                np.abs(np.fft.fft(self.wave_data[0][i:i + blocks_size])))

    @staticmethod
    def play(filepath):
        # play back a wav file
        chunk = 1024
        wf = wave.open(filepath, 'rb')
        p = pyaudio.PyAudio()
        stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                        channels=wf.getnchannels(),
                        rate=wf.getframerate(),
                        output=True)
        while True:
            data = wf.readframes(chunk)
            if data == b'':
                break
            stream.write(data)
        stream.close()
        p.terminate()


if __name__ == '__main__':
    r = record()
    r.record(RECORD_SECONDS=3, WAVE_OUTPUT_FILENAME='record.wav')
    v = voice()
    v.loaddata('record.wav')
    v.fft()
    file_list = os.listdir(os.getcwd())
    res = []
    for i in file_list:
        if i.split('.')[1] == 'wav' and i.split('.')[0] != 'record':
            temp = voice()
            temp.loaddata(i)
            temp.fft()
            res.append((dtw.dtw(v.fft_blocks, temp.fft_blocks,
                                compute_distance_vec)[0], i))
    res.sort()  # the template with the smallest DTW distance comes first
    print(res)
    if res[0][1].find('open_QQ') != -1:
        os.system(r'C:\program\Tencent\QQ\Bin\QQScLauncher.exe')  # my QQ path
    elif res[0][1].find('zhimakaimen') != -1:
        os.system('chrome.exe')  # path of the browser
    elif res[0][1].find('Play_music') != -1:
        voice.play(r'C:\data\music\audio\(92.16.wav')  # play a piece of music (path as in the original)
    # r = record()  # uncomment to record command templates
```
You can use the record method to record several samples of each command word in advance, saying them in different tones and rhythms to improve accuracy. Then encode the command in each template's file name: the template closest to the new recording tells you which command was spoken, and you can dispatch a different task for each one.
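The dispatch idea above can be sketched as follows. The (distance, filename) pairs mirror what the main block builds with `dtw.dtw`; the sample scores and the helper name `pick_command` are made up for illustration.

```python
def pick_command(matches):
    """matches: list of (dtw_distance, template_filename) pairs.
    Return the command keyword embedded in the best-matching file name."""
    matches = sorted(matches)  # smallest DTW distance first
    best_name = matches[0][1]
    for keyword in ('open_QQ', 'zhimakaimen', 'Play_music'):
        if keyword in best_name:
            return keyword
    return None  # no known command in the closest template's name

# e.g. three templates compared against one fresh recording:
scores = [(812.5, 'Play_music_1.wav'),
          (301.2, 'open_QQ_2.wav'),
          (655.0, 'zhimakaimen_1.wav')]
print(pick_command(scores))  # prints open_QQ
```

Because the decision is just "closest template wins", adding a new command only means recording a few more templates with the new keyword in their file names.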
Here is a demo video: www.iqiyi.com/w_19ruisynsd.html