Consider a document as a collection of lexical terms, each of which carries a weight, as follows:
Document a = {term1, term2, term3, ..., termn}
Document b = {term1, term2, term3, ..., termn}
Document vector = {weight1, weight2, weight3, ..., weightn}
Map the weight of each term onto its own coordinate axis (each distinct term acts as one axis of the basis). This places every document into an n-dimensional vector space: all documents become n-dimensional vectors, where the m-th coordinate of document D is the weight of the m-th term in D. Retrieving document information is thereby converted into measuring the angle between two vectors.
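A minimal sketch of the mapping described above, assuming the simplest possible weight (a raw term count; real engines such as Lucene use TF-IDF-style weights):

```python
from collections import Counter

def to_vector(tokens, vocabulary):
    """Return the document's coordinate along each term axis."""
    counts = Counter(tokens)
    return [counts[term] for term in vocabulary]

doc_a = ["search", "engine", "index", "search"]
doc_b = ["search", "engine", "rank"]

# The shared n-dimensional space: one axis per distinct term.
vocabulary = sorted(set(doc_a) | set(doc_b))

vec_a = to_vector(doc_a, vocabulary)  # weight of each term in document a
vec_b = to_vector(doc_b, vocabulary)  # weight of each term in document b
```

Both documents now live in the same 4-dimensional space, so they can be compared by the angle between their vectors.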
Cosine similarity determines the similarity of two vectors by measuring the cosine of the angle between them in the inner product space. The closer the cosine is to 1, the closer the angle is to 0, and the more similar the two vectors are.
The cosine of the angle between two vectors a and b can be derived from the Euclidean dot product and magnitude formula:

a · b = ||a|| ||b|| cos θ    (9)
From formula (9) we can derive:

cos θ = (a · b) / (||a|| ||b||) = (Σ ai × bi) / (√(Σ ai²) × √(Σ bi²)), for i = 1 ... n    (10)
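Formula (10) translates directly into code. A self-contained sketch:

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| * ||b||), following formula (10)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # a zero vector shares no terms with anything
    return dot / (norm_a * norm_b)

# Identical vectors: angle 0, cosine 1. Orthogonal vectors: cosine 0.
print(cosine_similarity([3, 4], [3, 4]))  # -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # -> 0.0
```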
The relevance between the query string and the records in the index can then be obtained by calculating the cosine of the angle between the query vector and each document vector.
Appending ^n to a query term sets that term's weight; the default is 1. If n is greater than 1, the term is treated as more important; if n is less than 1, it is less important. In matrix terms, ^n scales the length of that term's component. (Using a matrix to represent the coordinates is my own extension of the idea; for background, see http://blog.csdn.net/myan/article/details/1865397)
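In vector terms, a boost of ^n simply scales that term's component of the query vector before the cosine is computed. A hypothetical sketch (the function name and the dict-of-boosts representation are my own, not Lucene's API):

```python
def apply_boosts(weights, boosts):
    """Multiply each query-term weight by its boost; unboosted terms keep 1."""
    return [w * boosts.get(i, 1.0) for i, w in enumerate(weights)]

query = [1.0, 1.0, 1.0]  # three query terms, equal default weight
# term0^2 makes term 0 more important; term2^0.5 makes term 2 less so.
boosted = apply_boosts(query, {0: 2.0, 2: 0.5})
print(boosted)  # -> [2.0, 1.0, 0.5]
```

The boosted query vector is then scored against each document vector exactly as before; only the angle it forms with the documents changes.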
The Vector Space Model in Lucene