Today I read a paper from Zhiyuan Liu's team published at IJCAI 2015, "Joint Learning of Character and Word Embeddings", which also improves the word-vector generation step. The paper introduces information from the individual Chinese characters that compose a word (it mainly targets Chinese) and improves the quality of the resulting word vectors. Because the model is named the "character-enhanced word embedding model", it is abbreviated CWE.
As the title suggests, during word-vector training this paper extracts the Chinese characters inside each word and trains them jointly with the words themselves. This creates a connection between words that share characters, because the paper's hypothesis is that for "semantically compositional" words, the characters carry part of the word's meaning, for example the word "intelligence". But not all Chinese words are semantically compositional: some are transliterated loanwords such as "chocolate" and "couch", and some are entity names, such as person names, place names, and country names. In these words, the meaning of an individual character may be completely unrelated to the meaning of the word. The authors therefore did a great deal of manual work to select these non-compositional words, and such words are not split into individual characters during training.
The model presented in this paper is an improvement on word2vec's CBOW model, and its overall objective function is as follows:
$$\sum_{i=k}^{N-k}\log\Pr(x_i \mid x_{i-k},\ldots,x_{i+k})$$

$$\Pr(x_i \mid x_{i-k},\ldots,x_{i+k}) = \frac{\exp(x_o \cdot x_i)}{\sum_{x_j \in \text{dictionary}}\exp(x_o \cdot x_j)}$$

$$x_o = \frac{1}{2k}\sum_{j=i-k,\ldots,i+k,\; j \neq i} x_j$$
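The CBOW computation above can be sketched in a few lines of NumPy; the embedding matrix, window size, and vocabulary here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of the CBOW context/prediction step described above.
# All sizes and the embedding table are made-up illustrations.
rng = np.random.default_rng(0)
V, d, k = 10, 4, 2            # vocabulary size, embedding dim, half-window k
E = rng.normal(size=(V, d))   # one word vector x_j per vocabulary entry
sentence = [3, 1, 7, 2, 5]    # a sentence as word indices
i = 2                         # position of the target word x_i

# x_o: average of the 2k word vectors in the window around i (excluding x_i)
context = [sentence[j] for j in range(i - k, i + k + 1) if j != i]
x_o = E[context].mean(axis=0)

# Pr(x_i | context): softmax of x_o . x_j over the whole dictionary
scores = E @ x_o
probs = np.exp(scores) / np.exp(scores).sum()
print(probs[sentence[i]])     # probability assigned to the target word
```

A real implementation would use negative sampling or hierarchical softmax instead of the full softmax shown here, but the full sum matches the formula above most directly.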
As can be seen, in the CBOW model the context representation is obtained by summing and averaging the word vectors within the window before and after $w_i$.
Having introduced the traditional CBOW model, we now turn to the model proposed in this paper; its architecture diagram is as follows:
As the figure clearly shows, in the traditional CBOW model the context information for the target word "era" is formed by directly adding the word vectors of "intelligence" and "arrival", whereas in the CWE model each context word's representation comes partly from its word vector and partly from the vectors of the characters it contains, computed as follows:
$$x_j = \frac{1}{2}\left(w_j + \frac{1}{N_j}\sum_{k=1}^{N_j} c_k\right)$$

where $w_j$ is the word vector of word $j$, $N_j$ is the number of characters in the word, and $c_k$ is the vector of its $k$-th character.
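This composition of word and character vectors can be sketched as follows, using the paper's example word "intelligence" (智能); the embedding tables and concrete vector values are made-up illustrations, and the ½ scaling follows the CWE formulation of combining a word vector with the average of its character vectors:

```python
import numpy as np

# Sketch of the CWE composition: a context word's representation combines
# its own word vector with the average of its characters' vectors.
# All vectors below are illustrative, not trained values.
d = 4
word_emb = {"智能": np.ones(d)}                              # w_j
char_emb = {"智": np.full(d, 0.5), "能": np.full(d, 1.5)}    # c_k per character

def compose(word):
    """x_j = 1/2 * (w_j + (1/N_j) * sum of character vectors)."""
    w = word_emb[word]
    chars = [char_emb[c] for c in word]   # iterate over the word's characters
    return 0.5 * (w + np.mean(chars, axis=0))

x_j = compose("智能")
print(x_j)  # each component is 0.5 * (1.0 + 1.0) = 1.0
```

Words marked as non-compositional (loanwords, entity names) would skip `compose` and use `w_j` alone, matching the manual filtering described earlier.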