UIUC Coursera course "Text Retrieval and Search Engines": Week 3 Practice Quiz

Source: Internet
Author: User

Week 3 Practice Quiz

Warning: The hard deadline has passed. You can attempt it, but you will not get credit for it. You are welcome to try it as a learning exercise.

In accordance with the Coursera Honor Code, I certify that the answers here are my own work.

Question 1

You are given a vocabulary composed of only three words: "text", "mining", and "the". Below are the probabilities of two of these three words, as given by a unigram language model:

Word      Probability
text      0.4
mining    0.2
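Under a unigram language model, each word of a phrase is generated independently, so the probability of a phrase is the product of the single-word probabilities. (Note that the third word, "the", must carry the remaining probability mass: 1 − 0.4 − 0.2 = 0.4.) A minimal Python sketch of this computation; the helper name is illustrative, not from the course:

```python
import math

# Unigram probabilities from the table; "the" gets the leftover mass.
unigram = {"text": 0.4, "mining": 0.2, "the": 0.4}

def phrase_probability(phrase, model):
    """P(w1 ... wn) = P(w1) * ... * P(wn) under a unigram model."""
    return math.prod(model[w] for w in phrase.split())

print(phrase_probability("text mining", unigram))  # 0.4 * 0.2 = 0.08
```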
What is the probability of generating the phrase "text mining" using this unigram language model?

0
0.4
0.032
0.08

Question 2

You are given the query q = "Food Safety" and two documents:

D1 = "Food Quality regulations"
D2 = "Food Safety measures"

Assume you are using the maximum likelihood estimator without smoothing to calculate the probabilities of words in documents (i.e., the estimated P(w|D) is the relative frequency of word w in document D). Based on the unigram query likelihood model, which of the following choices is correct?

P(q|D1) = 0 and P(q|D2) = 1/9
P(q|D1) = 1/3 and P(q|D2) = 1/9
P(q|D1) = 1/3 and P(q|D2) = 0
P(q|D1) = 1/2 and P(q|D2) = 1/2

Question 3

Probability smoothing avoids assigning zero probabilities to unseen words in documents.

True
False

Question 4

Assume you are given two scoring functions:

S1(q,d) = P(q|D)

S2(q,d) = log P(q|D)

For the same query and corpus, S1 and S2 will give the same ranked list of documents.

True
False

Question 5

Assume you are using linear interpolation (Jelinek-Mercer) smoothing to estimate the probabilities of words in a certain document. What happens to the smoothed probability of a word when the parameter λ is decreased?

It does not change
It becomes closer to the maximum likelihood estimate of the probability of the word in the document
It becomes closer to the probability of the word in the collection language model

Question 6

Refer to the Rocchio formula in the slides. If you want to reduce the effect of the relevant documents in the updated query, which of the following should be done?

Increase γ
Reduce β
Increase β
Reduce γ

Question 7

Assume that β = 1 is a good choice when performing relevance feedback using Rocchio's method. What is a reasonable value of β when relying on pseudo feedback?

More than 1
Less than 1
1

Question 8

Let q be the original query vector, DR = {P1, ..., Pn} be the set of positive document vectors, and DN = {N1, ..., Nm} be the set of negative document vectors. Let q1 be the expanded query vector after applying Rocchio on DR and DN with parameter values α, β, and γ. Let q2 be the expanded query vector after applying Rocchio on DR and DN with the same values for α and β, but with γ set to zero.

In which updated query do you expect stopwords to have higher weights?

q1
q2
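Since this page records worked answers, the reasoning behind Questions 4 through 8 can be checked with a short, self-contained Python sketch. This is a toy illustration: the λ convention shown (one common form of Jelinek-Mercer), the document vectors, and the collection probability 0.01 are assumptions for demonstration, not values from the course:

```python
import math
from collections import Counter

# --- Question 4: log preserves ranking, because log is strictly increasing ---
scores = {"d1": 0.08, "d2": 0.2, "d3": 0.001}          # illustrative P(q|D) values
rank_s1 = sorted(scores, key=lambda d: scores[d], reverse=True)
rank_s2 = sorted(scores, key=lambda d: math.log(scores[d]), reverse=True)
assert rank_s1 == rank_s2                               # identical ranked lists

# --- Question 5: Jelinek-Mercer (linear interpolation) smoothing ---
# One common convention: P(w|d) = (1 - lam) * P_ml(w|d) + lam * P(w|C),
# so decreasing lam moves the estimate toward the ML estimate P_ml(w|d).
def jm_smoothed(word, doc_tokens, coll_prob, lam):
    counts = Counter(doc_tokens)
    p_ml = counts[word] / len(doc_tokens)
    return (1 - lam) * p_ml + lam * coll_prob

doc = "food safety measures".split()                    # P_ml("safety"|d) = 1/3
for lam in (0.9, 0.5, 0.1):
    print(lam, jm_smoothed("safety", doc, 0.01, lam))   # approaches 1/3 as lam shrinks

# --- Questions 6-8: Rocchio, q' = a*q + b*centroid(DR) - g*centroid(DN) ---
def rocchio(q, positives, negatives, a, b, g):
    terms = set(q).union(*positives, *negatives)
    pos_c = {t: sum(d.get(t, 0.0) for d in positives) / len(positives) for t in terms}
    neg_c = {t: sum(d.get(t, 0.0) for d in negatives) / len(negatives) for t in terms}
    return {t: a * q.get(t, 0.0) + b * pos_c[t] - g * neg_c[t] for t in terms}

q = {"food": 1.0, "safety": 1.0}
pos = [{"food": 1.0, "safety": 1.0, "the": 2.0}]        # relevant doc with stopword "the"
neg = [{"quality": 1.0, "the": 2.0}]                    # non-relevant doc, also has "the"
q1 = rocchio(q, pos, neg, a=1.0, b=0.75, g=0.25)        # gamma > 0
q2 = rocchio(q, pos, neg, a=1.0, b=0.75, g=0.0)         # gamma = 0, as in Question 8
# With gamma = 0 the negative centroid is never subtracted, so the stopword
# "the" keeps a higher weight in q2 than in q1.
print(q1["the"], q2["the"])
```

Running it shows the smoothed probability of "safety" climbing toward its maximum likelihood estimate of 1/3 as λ shrinks, and the stopword "the" retaining a larger weight when γ = 0, since stopwords appear in both relevant and non-relevant documents and only the γ term subtracts them out.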
