Word embeddings: encoding lexical semantics

  • Getting dense word embeddings
  • Word embeddings in PyTorch
  • An example: n-gram language modeling
  • Exercise: computing word embeddings: continuous bag-of-words
Word embeddings in PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)

Out:

tensor([[ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519]],
       grad_fn=<EmbeddingBackward>)
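The lookup generalizes to whole sequences: passing a tensor of several indices returns one embedding row per index, and the full table is available as the layer's weight parameter. A small illustrative sketch (the variable name all_ix is just for this example):

# look up both vocabulary entries at once; the result has one row per index
all_ix = torch.tensor([word_to_ix["hello"], word_to_ix["world"]], dtype=torch.long)
print(embeds(all_ix))        # shape (2, 5)
print(embeds.weight.shape)   # the embedding table itself is a (2, 5) parameter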
An example: n-gram language modeling
CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples.  Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
vocab = set(test_sentence)  # the elements of a set are distinct
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:
        # turn the context words into integer indices wrapped in a tensor
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        # torch accumulates gradients, so zero them out before each new instance
        model.zero_grad()
        # forward pass: log probabilities over the next word
        log_probs = model(context_idxs)
        # compute the loss against the true target word
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
        # backward pass and parameter update
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    losses.append(total_loss)
print(losses)  # the loss should decrease with every epoch over the training data
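Once training finishes, the learned vectors live in the embedding layer's weight matrix, one row per vocabulary index. A quick way to inspect the vector learned for a single word (here "beauty", picked arbitrarily) is:

# the row of embeddings.weight indexed by a word's id is that word's learned vector
print(model.embeddings.weight[word_to_ix["beauty"]])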
Exercise: computing word embeddings: continuous bag-of-words
CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()

# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}

data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1],
               raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))
print(data[:5])


class CBOW(nn.Module):

    def __init__(self):
        pass

    def forward(self, inputs):
        pass


# helper to turn a list of context words into a tensor of indices
def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)


make_context_vector(data[0][0], word_to_ix)  # example
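The class body above is deliberately left empty; filling it in is the exercise. One possible completion, sketched under the usual CBOW assumptions (sum the context embeddings, then a single linear layer with log-softmax over the vocabulary; the name CBOWSketch, the embedding dimension, and the training hyperparameters are choices made here, not prescribed by the exercise):

EMBEDDING_DIM = 10  # assumed dimensionality; the exercise does not fix one

class CBOWSketch(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(CBOWSketch, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        # inputs: LongTensor of context indices, shape (2 * CONTEXT_SIZE,)
        embeds = self.embeddings(inputs).sum(dim=0, keepdim=True)  # (1, embedding_dim)
        return F.log_softmax(self.linear(embeds), dim=1)           # (1, vocab_size)

model = CBOWSketch(vocab_size, EMBEDDING_DIM)
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in data:
        context_vector = make_context_vector(context, word_to_ix)
        model.zero_grad()
        log_probs = model(context_vector)
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(total_loss)  # should trend downward if the sketch is wired correctly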

 
