Vector space model: unique words selected as dimensions

Vector Space Model

The basic idea is to represent each document as a vector of certain weighted word frequencies. In order to do so, the following parsing and extraction steps are needed.

    1. Ignoring case, extract all unique words from the entire set of documents.
    2. Eliminate non-content-bearing "stopwords" such as "a", "and", "the", etc. For sample lists of stopwords, see [Frakes & Baeza-Yates, Chapter 7].
    3. For each document, count the number of occurrences of each word.
    4. Using heuristic or information-theoretic criteria, eliminate non-content-bearing "high-frequency" and "low-frequency" words [Salton].
    5. After the above elimination, suppose $w$ unique words remain. Assign a unique identifier between $1$ and $w$ to each remaining word, and a unique identifier between $1$ and $d$ to each document. (A small sketch of these steps follows this list.)
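
As a concrete illustration, the following Python sketch walks through steps 1-5 on a toy corpus. The documents, stopword list, frequency cutoffs, and variable names are illustrative assumptions, not taken from the text.

```python
from collections import Counter

# Toy corpus and sample stopword list (illustrative only).
documents = [
    "The cat sat on the mat.",
    "A dog sat on the log.",
    "Cats and dogs make good pets.",
]
stopwords = {"a", "and", "the", "on", "make", "good"}

# Steps 1-3: lowercase, tokenize, drop stopwords, count occurrences.
doc_counts = []
for doc in documents:
    words = (w.strip(".,").lower() for w in doc.split())
    doc_counts.append(Counter(w for w in words if w not in stopwords))

# Step 4: eliminate words whose corpus-wide frequency is too high or
# too low (the cutoffs 1 and 10 are arbitrary placeholders for the
# heuristic criteria mentioned above).
totals = Counter()
for counts in doc_counts:
    totals.update(counts)
vocab = sorted(w for w, c in totals.items() if 1 <= c <= 10)

# Step 5: identifiers 1..w for words and 1..d for documents.
word_id = {w: j for j, w in enumerate(vocab, start=1)}
print(word_id)  # e.g. {'cat': 1, 'cats': 2, 'dog': 3, ...}
```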

The above steps outline a simple preprocessing scheme. In addition, one may extract word phrases such as "New York", and one may reduce each word to its "root" or "stem", thus eliminating plurals, tenses, prefixes, and suffixes [Frakes & Baeza-Yates, Chapter 8].
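
As a rough sketch of what stemming does, the toy function below strips a few common English suffixes; the suffix list and minimum stem length are arbitrary choices, and a real system would use a proper algorithm such as Porter's stemmer.

```python
def crude_stem(word: str) -> str:
    # Strip a few common suffixes; the suffix list and the minimum
    # stem length of 3 are illustrative, not a real stemming algorithm.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

print(crude_stem("dogs"), crude_stem("walking"))  # -> dog walk
```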

The above preprocessing yields the number of occurrences of word $j$ in document $i$, say $f_{ji}$, and the number of documents which contain word $j$, say $d_j$. Using these counts, we can represent the $i$-th document as a $w$-dimensional vector $\mathbf{x}_i$ as follows. Set the $j$-th component of $\mathbf{x}_i$, $x_{ji}$, to be the product of three terms

$$ x_{ji} = t_{ji} \, g_j \, s_i, $$

where $t_{ji}$ is the term weighting component and depends only on $f_{ji}$, $g_j$ is the global weighting component and depends on $d_j$, and $s_i$ is the normalization component for $\mathbf{x}_i$. Intuitively, $t_{ji}$ captures the relative importance of a word in a document, while $g_j$ captures the overall importance of a word in the entire set of documents. The objective of such weighting schemes is to enhance discrimination between various document vectors for better retrieval effectiveness [Salton & Buckley].

There are many schemes for selecting the term, global, and normalization components; see [Kolda] for various possibilities. In this paper we use the popular scheme known as normalized term frequency-inverse document frequency (tf-idf). This scheme uses $t_{ji} = f_{ji}$, $g_j = \log(d/d_j)$, and $s_i = \left( \sum_{j=1}^{w} (t_{ji} g_j)^2 \right)^{-1/2}$. Note that this normalization implies that $\|\mathbf{x}_i\| = 1$, i.e., each document vector lies on the surface of the unit sphere in $\mathbb{R}^w$. Intuitively, the effect of normalization is to retain only the proportion of words occurring in a document. This ensures that documents dealing with the same subject matter (that is, using similar words), but differing in length, lead to similar document vectors.
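
The sketch below applies exactly this scheme, $t_{ji} = f_{ji}$, $g_j = \log(d/d_j)$, and unit normalization, to per-document counts. The names `doc_counts` and `word_id` refer to the hypothetical outputs of the preprocessing sketch given earlier.

```python
import math

def tfidf_vectors(doc_counts, word_id):
    # x_ji = t_ji * g_j * s_i with t_ji = f_ji, g_j = log(d / d_j),
    # and s_i chosen so that each document vector has unit norm.
    d = len(doc_counts)  # total number of documents
    # d_j: number of documents that contain word j.
    d_j = {w: sum(1 for c in doc_counts if w in c) for w in word_id}
    vectors = []
    for counts in doc_counts:
        # Unnormalized components f_ji * log(d / d_j); a word present
        # in every document gets weight log(1) = 0.
        x = {w: f * math.log(d / d_j[w])
             for w, f in counts.items() if w in word_id}
        norm = math.sqrt(sum(v * v for v in x.values()))
        # s_i = 1 / norm gives ||x_i|| = 1 (guard against zero vectors).
        vectors.append({w: v / norm for w, v in x.items()} if norm else x)
    return vectors

vectors = tfidf_vectors(doc_counts, word_id)  # from the earlier sketch
```

Because every resulting vector has unit norm, the cosine similarity between two documents reduces to a plain dot product, which is what makes this normalization convenient for retrieval and clustering.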
