Gensim's Word2Vec model exposes many configuration parameters for training. The parameter descriptions from the Gensim documentation for the Word2Vec class are translated here.
class gensim.models.word2vec.Word2Vec(sentences=None, size=100, alpha=0.025, window=5, min_count=5, max_vocab_size=None, sample=0.001, seed=1, workers=3, min_alpha=0.0001, sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=<built-in function hash>, iter=5, null_word=0, trim_rule=None, sorted_vocab=1, batch_words=10000)
Parameters:
· sentences: Can be a list of tokenized sentences; for a large corpus, it is recommended to use BrownCorpus, Text8Corpus or LineSentence to build it.
· sg: Sets the training algorithm. The default is 0, which corresponds to the CBOW algorithm; sg=1 uses the skip-gram algorithm.
· size: The dimensionality of the feature vectors; defaults to 100. Larger sizes require more training data but can produce better models. Recommended values are in the tens to hundreds.
· window: The maximum distance between the current word and the predicted word within a sentence.
· alpha: The initial learning rate; it decays linearly to min_alpha as training progresses.
· seed: Seed for the random number generator, which affects the initialization of the word vectors.
· min_count: Truncates the dictionary; words with a frequency lower than min_count are discarded. The default is 5.
· max_vocab_size: Limits RAM usage during vocabulary building; if there are more unique words than this, the least frequent ones are pruned. Approximately 1 GB of RAM is required for every 10 million word types. Set to None for no limit.
· sample: Threshold for randomly downsampling high-frequency words; the default is 1e-3, and the useful range is (0, 1e-5).
· workers: Controls the number of parallel worker threads used for training.
· hs: If 1, hierarchical softmax is used; if 0 (default) and negative is nonzero, negative sampling is used.
· negative: If > 0, negative sampling is used, and the value sets the number of noise words to draw.
· cbow_mean: If 0, the sum of the context word vectors is used; if 1 (default), their mean is used. Only applies when CBOW is used.
· hashfxn: Hash function used to randomly initialize the weights; Python's built-in hash function is used by default.
· iter: Number of iterations (epochs) over the corpus; default is 5.
· trim_rule: Sets the vocabulary trimming rule, specifying which words to keep and which to discard. Can be None (min_count will be used), or a callable that accepts (word, count, min_count) and returns utils.RULE_DISCARD, utils.RULE_KEEP or utils.RULE_DEFAULT.
· sorted_vocab: If 1 (default), the vocabulary is sorted by descending frequency before word indexes are assigned.
· batch_words: The number of words per batch passed to worker threads; defaults to 10000.