Web Information Retrieval | Vector Space Model
Last Updated: 30 Jul, 2024
In general, a search engine responds to a given query with a ranked list of relevant documents. The purpose of this article is to describe a first approach to finding documents relevant to a given query. In the Vector Space Model (VSM), each document or query is an N-dimensional vector, where N is the number of distinct terms across all the documents and queries. The i-th component of a vector contains the score of the i-th term for that vector.
The main scoring functions are based on Term Frequency (tf) and Inverse Document Frequency (idf).
Term-Frequency and Inverse-Document Frequency -
The Term-Frequency ($tf_{i,j}$) is computed with respect to the i-th term and the j-th document: $$ tf_{i, j} = \frac{n_{i, j}}{\sum_k n_{k, j}} $$ where $n_{i, j}$ is the number of occurrences of the i-th term in the j-th document.
The intuition is that if a document contains many occurrences of a given term, it probably deals with that topic.
The Inverse-Document-Frequency ($idf_i$) takes into consideration the i-th term and all the documents in the collection: $$ idf_i = \log \frac{|D|}{|\{d : t_i \in d\}|} $$ where $|D|$ is the number of documents and the denominator counts the documents containing the i-th term.
The intuition is that rare terms are more important than common ones: if a term occurs in only one document, that term likely characterizes the document.
The final score $w_{i, j}$ for the i-th term in the j-th document is the simple product $tf_{i,j} \cdot idf_i$. Since a document/query contains only a subset of all the distinct terms in the collection, the term frequency is zero for a large number of terms: this means a sparse vector representation is needed to optimize the space requirements.
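As a concrete illustration, the tf and idf formulas above can be sketched in Python, with each tf-idf vector stored as a dictionary so that only non-zero entries occupy space (the sparse representation mentioned above). The toy corpus and function names are purely illustrative.

```python
import math
from collections import Counter

def tf(term, doc):
    """Term frequency: occurrences of the term divided by total terms in doc."""
    return Counter(doc)[term] / len(doc)

def idf(term, docs):
    """Inverse document frequency: log of total docs over docs containing the term."""
    containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / containing)

def tfidf_vector(doc, docs):
    """Sparse tf-idf vector as a dict: only terms occurring in doc are stored."""
    return {t: tf(t, doc) * idf(t, docs) for t in set(doc)}

docs = [["web", "search", "engine"],
        ["vector", "space", "model"],
        ["web", "vector", "model"]]
vec = tfidf_vector(docs[2], docs)
```

Note that a term appearing in every document gets $idf = \log 1 = 0$, so it contributes nothing to the score, which is exactly the intended behavior for very common terms.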
Cosine Similarity -
In order to compute the similarity between two vectors a, b (document/query but also document/document), the cosine similarity is used: \begin{equation} \cos ({\bf a}, {\bf b})= {{\bf a} \cdot {\bf b} \over \|{\bf a}\| \|{\bf b}\|} = \frac{ \sum_{i=1}^{n}{{\bf a}_i{\bf b}_i} }{ \sqrt{\sum_{i=1}^{n}{({\bf a}_i)^2}} \sqrt{\sum_{i=1}^{n}{({\bf b}_i)^2}} } \end{equation}
This formula computes the cosine of the angle between the two normalized vectors: if the vectors are close, the angle is small and the relevance is high.
It can be shown that ranking by cosine similarity is equivalent to ranking by Euclidean distance under the assumption that the vectors are normalized to unit length.
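A minimal Python sketch of the cosine formula over the sparse dictionary representation; the helper name `cosine` is illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse vectors given as {term: weight} dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # an empty vector matches nothing
    return dot / (norm_a * norm_b)
```

Vectors pointing in the same direction score 1.0, and vectors with no shared terms (orthogonal) score 0.0.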
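To see why, expand the squared Euclidean distance for unit vectors $\|\mathbf{a}\| = \|\mathbf{b}\| = 1$:

```latex
\|\mathbf{a} - \mathbf{b}\|^2
  = \|\mathbf{a}\|^2 + \|\mathbf{b}\|^2 - 2\,\mathbf{a}\cdot\mathbf{b}
  = 2 - 2\cos(\mathbf{a}, \mathbf{b})
```

so minimizing the Euclidean distance between normalized vectors is the same as maximizing their cosine similarity.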
Improvements -
There is a subtle problem with vector normalization: a short document about a single topic can be favored at the expense of a long document covering several topics, because the normalization does not take the length of a document into consideration.
The idea of pivoted normalization is to make documents shorter than an empirical value (the pivot length $l_p$) less relevant and documents longer than it more relevant, as shown in the following image: Pivoted Normalization
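A common formulation, from Singhal et al.'s pivoted document length normalization, replaces the plain length norm $|d|$ with a linear interpolation between the pivot and the actual length, where the slope $s \in (0, 1)$ is tuned empirically:

```latex
\text{pivoted norm}(d) = (1 - s)\cdot l_p + s\cdot |d|
```

Documents with $|d| < l_p$ are divided by a larger factor than before (lower score), while documents with $|d| > l_p$ are divided by a smaller one (higher score), which is exactly the correction described above.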

A big issue that is not taken into consideration in the VSM is synonymy: there is no semantic relatedness between terms, since it is captured neither by the term frequency nor by the inverse document frequency. To solve this problem, the Generalized Vector Space Model (GVSM) has been introduced.
The Vector Space Model (VSM) is a widely used information retrieval model that represents documents as vectors in a high-dimensional space, where each dimension corresponds to a term in the vocabulary. The VSM is based on the assumption that the meaning of a document can be inferred from the distribution of its terms, and that documents with similar content will have similar term distributions.
To apply the VSM, first a collection of documents is preprocessed by tokenizing, stemming, and removing stop words. Then, a term-document matrix is constructed, where each row represents a term and each column represents a document. The matrix contains the frequency of each term in each document, or some variant of it (e.g., term frequency-inverse document frequency, TF-IDF).
The query is also preprocessed and represented as a vector in the same space as the documents. Then, a similarity score is computed between the query vector and each document vector using a cosine similarity measure. Documents are ranked based on their similarity score to the query, and the top-ranked documents are returned as the search results.
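The end-to-end flow described above can be sketched as follows; the tokenizer is deliberately minimal (no stemming or stop-word removal), and all names and the toy corpus are illustrative:

```python
import math
from collections import Counter

def tokenize(text):
    # Minimal preprocessing: lowercase and whitespace split.
    # A real system would also stem terms and remove stop words.
    return text.lower().split()

def tfidf(tokens, all_docs):
    # Sparse tf-idf vector; query terms absent from the corpus are skipped.
    vec = {}
    counts = Counter(tokens)
    for term, count in counts.items():
        df = sum(1 for d in all_docs if term in d)
        if df:
            vec[term] = (count / len(tokens)) * math.log(len(all_docs) / df)
    return vec

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, corpus):
    # Returns (score, document index) pairs, best match first.
    docs = [tokenize(d) for d in corpus]
    vecs = [tfidf(d, docs) for d in docs]
    q = tfidf(tokenize(query), docs)
    return sorted(((cosine(q, v), i) for i, v in enumerate(vecs)),
                  reverse=True)

corpus = ["the vector space model",
          "web search engine ranking",
          "vector similarity in the model"]
results = rank("vector model", corpus)
```

Documents sharing no terms with the query score 0.0 and fall to the bottom of the ranking.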
The VSM has many advantages, such as its simplicity, effectiveness, and ability to handle large collections of documents. However, it also has some limitations, such as the "bag of words" assumption, which ignores word order and context, and the problem of term sparsity, where many terms occur in only a few documents. These limitations can be addressed using more sophisticated models, such as probabilistic models or neural models, that take into account the semantic relationships between words and documents.
Advantages:
Access to vast amounts of information: WIR provides access to a vast amount of information available on the internet, making it a valuable resource for research, decision-making, and entertainment.
Easy to use: WIR is user-friendly, with simple and intuitive search interfaces that allow users to enter keywords and retrieve relevant information quickly.
Customizable: WIR allows users to customize their search results by using filters, sorting options, and other features to refine their search criteria.
Speed: WIR provides rapid search results, with most queries being answered in seconds or less.
Disadvantages:
Quality of information: The quality of information retrieved by WIR can vary greatly, with some sources being unreliable, outdated, or biased.
Privacy concerns: WIR raises privacy concerns, as search engines and websites may collect personal information about users, such as their search history and online behavior.
Over-reliance on algorithms: WIR relies heavily on algorithms, which may not always produce accurate results or may be susceptible to manipulation.
Search overload: With the vast amount of information available on the internet, WIR can be overwhelming, leading to information overload and difficulty in finding the most relevant information.