A word vector model for joint training of words and characters

Today I read a 2015 IJCAI paper from Liu Zhiyuan's team, "Joint Learning of Character and Word Embeddings", which also improves the word-vector generation step: it introduces information from the individual Chinese characters that compose a word (this applies mainly to Chinese) and thereby improves the quality of the resulting word vectors. Because the model is named the "character-enhanced word embedding model", it is abbreviated CWE.

As the title suggests, during word vector training the paper extracts the Chinese characters inside each word and trains them jointly with the words themselves. This creates a connection between words that share characters, because the paper's hypothesis is that for "semantically compositional" words, the characters carry part of the word's meaning, as in the word "intelligence". But not all Chinese words are semantically compositional: transliterated words such as "chocolate" and "couch" are not, and neither are many entity names, such as person names, place names, and country names. In these words, the meaning of an individual character may be completely unrelated to the meaning of the word. The authors did a good deal of manual work to select all of these non-compositional words, and such words are not split into characters during training.

The model presented in the paper improves on word2vec's CBOW model, and its overall optimization objective is as follows:
$$\sum_{i=k}^{N-k} \log \Pr(x_i \mid x_{i-k}, \ldots, x_{i+k})$$

$$\Pr(x_i \mid x_{i-k}, \ldots, x_{i+k}) = \frac{\exp(x_o \cdot x_i)}{\sum_{x_j \in \text{dictionary}} \exp(x_o \cdot x_j)}$$

$$x_o = \frac{1}{2k} \sum_{j=i-k,\ldots,i+k,\; j \neq i} x_j$$
As the formulas show, in the CBOW model the context representation $x_o$ is simply the sum and average of the word vectors in the window before and after the target word $x_i$.
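To make this concrete, here is a minimal NumPy sketch of the CBOW context vector and prediction probability. The toy dimensions, token ids, and function names are illustrative assumptions of mine, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, k = 10, 4, 2           # vocabulary size, vector dimension, window size
W = rng.normal(size=(vocab_size, dim))  # one vector per word in the dictionary

def context_vector(word_ids, target_pos, k):
    """x_o: average of the 2k word vectors around the target position."""
    window = word_ids[target_pos - k:target_pos] + word_ids[target_pos + 1:target_pos + k + 1]
    return W[window].mean(axis=0)

def cbow_prob(x_o, target_id):
    """Pr(x_i | context): softmax of x_o's dot product with every dictionary vector."""
    scores = W @ x_o
    scores -= scores.max()              # subtract the max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[target_id]

sentence = [3, 7, 1, 5, 2]              # token ids; predict the middle word from its window
x_o = context_vector(sentence, target_pos=2, k=k)
print(cbow_prob(x_o, target_id=sentence[2]))
```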

Having introduced the traditional CBOW model, let's turn to the model presented in the paper. The model diagram is as follows:

As the figure makes clear, in the traditional CBOW model the context of the target word "era" is represented by directly adding the word vectors of "intelligence" and "arrival", whereas in the CWE model a context word's representation comes partly from its word vector and partly from the vectors of the characters inside it, computed as follows:
$$x_j = \frac{1}{2}\left(w_j + \frac{1}{N_j}\sum_{k=1}^{N_j} c_k\right)$$

where $w_j$ is the word vector of the word $x_j$, $N_j$ is the number of characters it contains, and $c_k$ is the vector of its $k$-th character.
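The following NumPy sketch shows this composition under the same illustrative assumptions as the CBOW snippet above (random toy vectors, my own function name):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
word_vec = rng.normal(size=dim)        # w_j: the word's own vector
char_vecs = rng.normal(size=(2, dim))  # c_1 .. c_Nj: vectors of its Nj characters

def cwe_word_repr(w_j, chars):
    """x_j = (w_j + mean of character vectors) / 2.

    Non-compositional words (transliterations, entity names) skip the
    character term and are represented by w_j alone.
    """
    return 0.5 * (w_j + chars.mean(axis=0))

print(cwe_word_repr(word_vec, char_vecs))
```

Halving the sum keeps $x_j$ on the same scale as a plain word vector, so compositional and non-compositional words can be mixed freely in one context average.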
