Multitask and Transfer Learning
Multitask Learning: networks for different tasks can share a subset of their structure (for example, one or more hidden layers).
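As a rough illustration (not taken from the text), a network with one shared hidden layer and two task-specific output layers might look like the following PyTorch sketch; the layer sizes and the two tasks are placeholder assumptions:

```python
import torch.nn as nn

class MultitaskNet(nn.Module):
    """Two tasks sharing one hidden layer; each task keeps its own output layer."""
    def __init__(self, in_dim=40, hidden_dim=512, n_classes_a=100, n_classes_b=50):
        super().__init__()
        # hidden layer shared by both tasks
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.head_a = nn.Linear(hidden_dim, n_classes_a)  # task-A output layer
        self.head_b = nn.Linear(hidden_dim, n_classes_b)  # task-B output layer

    def forward(self, x, task):
        h = self.shared(x)  # shared representation
        return self.head_a(h) if task == "a" else self.head_b(h)
```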
Transfer Learning: SHL-MDNN
The shared-hidden-layer multilingual DNN (SHL-MDNN) is a single model trained on several languages: all languages share the same hidden layers, while each language has its own output layer.
The shared hidden layers can be viewed as a feature extractor, and each output layer as a language-specific classifier.
The SHL-MDNN is trained on all languages simultaneously, so each mini-batch contains training data from multiple languages.
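The architecture and the mixed-language mini-batch described above could be sketched as follows; this is an illustrative PyTorch approximation, and the layer sizes, language codes ("en", "zh"), and senone counts are assumptions rather than values from the text:

```python
import torch
import torch.nn as nn

class SHLMDNN(nn.Module):
    """Shared-hidden-layer multilingual DNN: one shared hidden stack (feature
    extractor) plus one output layer (classifier) per language."""
    def __init__(self, in_dim, hidden_dim, n_hidden, senones_per_lang):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(dim, hidden_dim), nn.Sigmoid()]
            dim = hidden_dim
        self.shared = nn.Sequential(*layers)  # hidden layers shared by all languages
        self.heads = nn.ModuleDict(           # language-specific output layers
            {lang: nn.Linear(hidden_dim, n) for lang, n in senones_per_lang.items()})

    def forward(self, x, lang):
        return self.heads[lang](self.shared(x))

# One SGD step on a mini-batch that mixes two languages: each sub-batch goes
# through its own output layer, and the summed loss updates the shared layers jointly.
model = SHLMDNN(in_dim=440, hidden_dim=2048, n_hidden=5,
                senones_per_lang={"en": 3000, "zh": 3000})
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = {"en": (torch.randn(8, 440), torch.randint(0, 3000, (8,))),
         "zh": (torch.randn(8, 440), torch.randint(0, 3000, (8,)))}
loss = sum(nn.functional.cross_entropy(model(x, lang), y) for lang, (x, y) in batch.items())
opt.zero_grad(); loss.backward(); opt.step()
```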
Experiments show that the multilingual SHL-MDNN outperforms DNNs trained on a single language, and that training with shared hidden layers reduces overfitting to some degree.
Because the shared hidden layers act as a feature extractor, the phonetic knowledge they capture can be transferred to other languages.
- To add a new language, only a new output layer is needed: reuse the previously trained hidden layers, keep them fixed, and train only the parameters of the new output layer (see the sketch after this list).
- If the new language has sufficient training data, retraining the whole network gives better results.
- Transfer from English to Chinese is still effective.
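Continuing the SHL-MDNN sketch above (same caveats: the `model` instance, the new language code "de", and its senone count are hypothetical), adding a low-resource language reduces to freezing the shared layers and training only a new output layer:

```python
import torch
import torch.nn as nn

# Assumes `model` is the trained SHLMDNN instance from the sketch above.
model.heads["de"] = nn.Linear(2048, 2500)   # new language-specific output layer
for p in model.shared.parameters():
    p.requires_grad = False                  # keep the shared feature extractor fixed

opt_new = torch.optim.SGD(model.heads["de"].parameters(), lr=0.1)
x_new, y_new = torch.randn(8, 440), torch.randint(0, 2500, (8,))
loss = nn.functional.cross_entropy(model(x_new, "de"), y_new)
opt_new.zero_grad(); loss.backward(); opt_new.step()
```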
RNN
LSTM
Reference: "Automatic Speech Recognition: A Deep Learning Approach", Chapters 12-15