Basic ideas
As we all know, Chinese characters are made up of several components, and the radical component often carries rich semantic information. For example, characters with the person radical (亻) usually refer to people, such as 他 ("he") and 你 ("you"), while characters with the three-dots-water radical (氵) are usually related to water, such as 海 ("sea") and 江 ("river"). A very natural idea, therefore, is to integrate radical information into the generation of character vectors. The paper "Component-Enhanced Chinese Character Embeddings" by Yanran Li et al. is an attempt in this direction.
Introduction
Chinese is written in both simplified and traditional characters, and the stroke forms of the two differ, so the same character and the same radical can look different. For example, the "food" radical is 飠 in traditional characters but 饣 in simplified characters. To remove this inconsistency, the authors converted all characters to traditional form. Secondly, the authors argue that the radical of a character provides richer semantic information than its other components, so only the radical is added to the generation of the character vectors as extra semantic information.
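The paper does not say which tool performs this normalisation; as an illustration only, a minimal sketch of simplified-to-traditional conversion using the OpenCC library (an assumption, not the authors' pipeline) could look like this:

```python
# Minimal sketch: normalise a corpus to traditional characters before training.
# Assumes the third-party OpenCC package (pip install opencc-python-reimplemented).
from opencc import OpenCC

s2t = OpenCC('s2t')  # simplified -> traditional conversion table

simplified = "他喜欢在江边看海"
traditional = s2t.convert(simplified)
print(traditional)   # 他喜歡在江邊看海
```

After this step every character, and therefore every radical, has a single consistent written form.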
Specific methods
First, some notation is introduced. Let D = {z1, z2, ..., zN} denote a corpus of N characters drawn from a dictionary of size V. Let z denote a Chinese character, c its context, e its list of components, K the dimension of the vectors, T the size of the context window, and M the number of components taken into account for each character, the first of which is the radical.
The authors propose two models, charCBOW and charSkipGram, based on the CBOW and Skip-gram models respectively. The charCBOW model is described here; it makes two changes to CBOW: first, it adds the component (radical) information, and second, it replaces the vector summation at the projection layer with an end-to-end concatenation of the vectors.
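To make these two changes concrete, here is a minimal sketch of a charCBOW-style network in PyTorch. The class and parameter names (CharCBOW, char_emb, comp_emb, etc.) and the plain softmax output layer are assumptions for illustration; the paper's actual training details (e.g. hierarchical softmax or negative sampling) are not reproduced here.

```python
import torch
import torch.nn as nn

class CharCBOW(nn.Module):
    """Sketch of the charCBOW idea: context character embeddings and their
    component (radical) embeddings are concatenated, not summed, before
    predicting the centre character."""
    def __init__(self, vocab_size, comp_size, dim, window):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, dim)   # character vectors
        self.comp_emb = nn.Embedding(comp_size, dim)    # component (radical) vectors
        # 2*window context positions, each contributing a character vector and a component vector
        hidden = 2 * window * 2 * dim
        self.out = nn.Linear(hidden, vocab_size)        # output layer over the dictionary

    def forward(self, ctx_chars, ctx_comps):
        # ctx_chars, ctx_comps: (batch, 2*window) indices of context characters / their radicals
        h = torch.cat([self.char_emb(ctx_chars), self.comp_emb(ctx_comps)], dim=-1)
        h = h.flatten(start_dim=1)                      # end-to-end concatenation of all context vectors
        return self.out(h)                              # logits for the centre character
```

Training would then minimise the cross-entropy between these logits and the index of the centre character, exactly as in ordinary CBOW.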
Its training objective is to maximise the following likelihood function:
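The formula itself is not reproduced in this text; as a sketch, assuming the standard CBOW-style average log-likelihood with charCBOW's concatenated context in the notation above, the objective would be

$$
\mathcal{L}(D) = \frac{1}{N}\sum_{i=1}^{N}\log p\left(z_i \,\middle|\, h_i\right),
\qquad
h_i = \left[\, \mathbf{z}_{i-T};\, \mathbf{e}_{i-T};\, \dots;\, \mathbf{z}_{i+T};\, \mathbf{e}_{i+T} \,\right],
$$

where h_i is the concatenation of the context characters' vectors and their component vectors (the centre position i itself is excluded) and p is computed with a softmax over the dictionary.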
The model is illustrated as follows