Conceptual understanding of the PLDA algorithm
In voiceprint recognition, we assume the training data consists of the speech of I speakers, each of whom contributes J different utterances. We denote the j-th utterance of the i-th speaker by x_ij. Then, following factor analysis, we define the generative model of x_ij as:

x_ij = μ + F h_i + G w_ij + ε_ij
This model can be viewed as two parts. The first two terms on the right of the equals sign depend only on the speaker, not on the particular utterance; they are called the signal part, and they describe the variability between speakers. The remaining terms on the right describe the differences between different utterances of the same speaker, and are called the noise part. In this way, we use two latent variables (h_i and w_ij) to describe the structure of an utterance.
Notice that each of the two middle terms on the right is the product of a matrix and a vector, which is the other core idea of factor analysis. The matrices F and G contain the basis factors of their respective latent spaces and can be thought of as eigenvector-like bases: each column of F is a basis direction of the between-class (speaker) space, and each column of G is a basis direction of the within-class space. The corresponding vectors are the coordinates of an utterance in those spaces; for example, h_i can be regarded as the representation of x_ij in the speaker space. In the scoring phase, the more likely it is that two utterances share the same feature h_i, the more likely they are to come from the same speaker.
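The generative model above can be sketched by sampling from it directly. This is a minimal illustration: all the dimensions and parameter values below are assumptions for the demo, not from any trained system.

```python
import numpy as np

# Sample from the PLDA generative model x_ij = mu + F h_i + G w_ij + eps_ij.
# Sizes and values here are illustrative assumptions only.
rng = np.random.default_rng(0)
D, Qb, Qw = 6, 2, 3      # feature dim, between-class dim, within-class dim
I, J = 5, 4              # speakers, utterances per speaker

mu = rng.normal(size=D)
F = rng.normal(size=(D, Qb))   # columns: basis of the speaker (between-class) space
G = rng.normal(size=(D, Qw))   # columns: basis of the within-class space
sigma = 0.1                    # residual noise scale

X = np.empty((I, J, D))
for i in range(I):
    h_i = rng.normal(size=Qb)            # one latent identity per speaker
    signal = mu + F @ h_i                # the same for all utterances of speaker i
    for j in range(J):
        w_ij = rng.normal(size=Qw)       # varies from utterance to utterance
        X[i, j] = signal + G @ w_ij + sigma * rng.normal(size=D)
```

Every utterance of speaker i shares the signal part mu + F h_i, while G w_ij and the residual noise vary per utterance — exactly the signal/noise split described above.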
Model Training
The PLDA model has four parameters: the data mean μ, the space-defining matrices F and G, and the noise covariance Σ. The model is trained with the classical EM algorithm. Why EM? Because the model contains hidden (latent) variables — h_i and w_ij are never observed directly.
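To make the EM idea concrete, here is a minimal training sketch. To keep it short it uses a reduced model that drops the within-class term G (so x_ij = μ + F h_i + ε_ij); the data, dimensions, and iteration count are all illustrative assumptions, not a production recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data from the reduced model x_ij = mu + F h_i + eps_ij.
I, J, D, Q = 100, 10, 5, 2       # speakers, utterances each, feature dim, latent dim
F_true = rng.normal(size=(D, Q))
mu_true = rng.normal(size=D)
X = np.stack([mu_true + F_true @ rng.normal(size=Q)
              + 0.1 * rng.normal(size=(J, D))
              for _ in range(I)])             # shape (I, J, D)

# EM: h_i is hidden, so we alternate an E-step (posterior over each h_i)
# with an M-step (closed-form parameter updates).
mu = X.reshape(-1, D).mean(axis=0)            # the mean is just the data mean
Xc = X - mu
F = rng.normal(size=(D, Q))                   # random initialisation
Sigma = np.eye(D)

for _ in range(50):
    Si = np.linalg.inv(Sigma)
    # E-step: posterior of h_i given speaker i's J centered utterances.
    P = np.eye(Q) + J * F.T @ Si @ F          # posterior precision (same for every i)
    Pinv = np.linalg.inv(P)
    S = Xc.sum(axis=1)                        # (I, D) per-speaker sums
    Eh = S @ Si @ F @ Pinv                    # (I, Q) posterior means E[h_i]
    # M-step: re-estimate F and Sigma from expected sufficient statistics.
    Ehh = I * Pinv + Eh.T @ Eh                # sum_i E[h_i h_i^T]
    F = (S.T @ Eh) @ np.linalg.inv(J * Ehh)
    C = np.einsum('ijd,ije->de', Xc, Xc)      # sum_ij xc xc^T
    Sigma = (C - F @ (Eh.T @ S)) / (I * J)
    Sigma = 0.5 * (Sigma + Sigma.T)           # keep numerically symmetric

# Check: the learned columns of F should span the true speaker subspace.
Qf, _ = np.linalg.qr(F)
Qt, _ = np.linalg.qr(F_true)
overlap = np.linalg.svd(Qf.T @ Qt, compute_uv=False)  # all near 1 if subspaces align
```

The E-step has a closed form because, conditioned on h_i, everything is Gaussian; this is exactly why EM fits a latent-variable model like PLDA so naturally.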
Model Testing
In the test phase, we no longer score with a cosine distance as in LDA. Instead, we ask how likely it is that the two utterances were generated by the same speaker-space feature h_i; the within-class differences no longer matter. Concretely, we use a log-likelihood ratio as the score:

score = log [ p(x1, x2 | H_s) / ( p(x1 | H_d) · p(x2 | H_d) ) ]
In this formula, given two test utterances, H_s denotes the hypothesis that they come from the same speaker (the same point in speaker space) and H_d the hypothesis that they come from different speakers. Computing the log-likelihood ratio then measures the similarity of the two utterances: the higher the score, the greater the likelihood that they belong to the same speaker.
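This ratio has a closed form because both hypotheses make the pair jointly Gaussian. A hedged sketch, using the simplified model x = μ + F h + ε with ε ~ N(0, Σ) (under H_s the two utterances share one h; under H_d they have independent h's); F, Σ, and the vectors below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, Q = 4, 2
F = rng.normal(size=(D, Q))      # assumed trained between-class matrix
Sigma = 0.2 * np.eye(D)          # assumed trained residual covariance
mu = np.zeros(D)

Sb = F @ F.T                     # covariance shared by same-speaker utterances
St = Sb + Sigma                  # total covariance of a single utterance

def gauss_logpdf(x, cov):
    """Log density of N(0, cov) at x."""
    d = x.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

def plda_score(x1, x2):
    """log p(x1, x2 | H_s) - log [ p(x1 | H_d) p(x2 | H_d) ]."""
    joint_same = np.block([[St, Sb], [Sb, St]])   # joint covariance when h is shared
    xx = np.concatenate([x1 - mu, x2 - mu])
    return (gauss_logpdf(xx, joint_same)
            - gauss_logpdf(x1 - mu, St) - gauss_logpdf(x2 - mu, St))
```

For example, a pair of utterances pointing the same way along a column of F scores higher than the same pair with one sign flipped, matching the intuition that a higher score means "more likely the same speaker".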
A simplified version of PLDA
Since we only care about the between-class features that distinguish different speakers, and not the within-class features of the same speaker, it is not strictly necessary to solve for the within-class matrix G as above. This yields a simplified version of PLDA:

x_ij = μ + F h_i + ε_ij

where the residual ε_ij now absorbs all within-speaker variability into its covariance Σ.
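A quick simulation makes the interpretation of the simplified model concrete: the between-speaker covariance of the data is F F^T, and the within-speaker covariance is the residual covariance Σ. All values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, Q, I, J = 4, 2, 2000, 50
F = rng.normal(size=(D, Q))
noise_var = 0.3                                  # here Sigma = noise_var * I

# Draw data from the simplified model x_ij = mu + F h_i + eps_ij (mu = 0).
H = rng.normal(size=(I, Q))                      # one latent h_i per speaker
X = (H @ F.T)[:, None, :] + np.sqrt(noise_var) * rng.normal(size=(I, J, D))

means = X.mean(axis=1)                           # per-speaker means
within = np.mean([np.cov(X[i].T) for i in range(I)], axis=0)
between = np.cov(means.T) - within / J           # remove the residual-noise bias

# Empirical between-class covariance should approach F F^T.
rel_err = np.linalg.norm(between - F @ F.T) / np.linalg.norm(F @ F.T)
```

With enough speakers, `between` converges to F F^T and `within` to Σ, which is why dropping G loses nothing for telling speakers apart: all between-speaker structure lives in F.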