Academic Journals Database
Disseminating quality-controlled scientific knowledge

A Hybrid Revisit Policy For Web Search

Author(s): Vipul Sharma | Mukesh Kumar | Renu Vig

Journal: Journal of Advances in Information Technology
ISSN 1798-2340

Volume: 3
Issue: 1
Start page: 36
Date: 2012

ABSTRACT
A crawler is a program that retrieves and stores pages from the Web, commonly for a Web search engine. A crawler often has to download hundreds of millions of pages in a short period of time and must constantly monitor and refresh the downloaded pages. Once the crawler has downloaded a significant number of pages, it has to start revisiting them in order to keep the downloaded collection up to date. Due to resource constraints, search engines usually have difficulty keeping the entire local repository synchronized with the Web. Given the size of today's Web and these inherent resource constraints, re-crawling too frequently wastes bandwidth, while re-crawling too infrequently degrades the quality of the search engine. In this paper a hybrid approach is built by which a web crawler keeps the retrieved pages "fresh" in the local collection. Towards this goal, the concepts of PageRank and the age of a web page are used: a higher PageRank means that more users visit that page and that it has higher link popularity, while the age of a web page indicates how outdated the local copy is. Using these two parameters, a hybrid approach is proposed that can identify important pages at an early stage of a crawl, so that the crawler revisits these important pages with higher priority.
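The abstract does not give the paper's exact scoring formula, but a minimal sketch of such a hybrid revisit policy might combine a normalized PageRank score with the age of the local copy. In the Python sketch below, the weight alpha, the [0, 1] normalization of PageRank, and the 30-day age saturation are illustrative assumptions, not parameters taken from the paper:

from dataclasses import dataclass
import time

@dataclass
class Page:
    url: str
    pagerank: float      # link-popularity score, assumed normalized to [0, 1]
    last_crawled: float  # Unix timestamp of the local copy

def revisit_priority(page: Page, now: float, alpha: float = 0.5) -> float:
    """Hypothetical hybrid score weighting PageRank against the age of the
    local copy; alpha and the age normalization are assumptions."""
    age_days = (now - page.last_crawled) / 86400.0
    age_score = min(age_days / 30.0, 1.0)  # saturate after ~30 days (assumed)
    return alpha * page.pagerank + (1 - alpha) * age_score

def schedule_revisits(pages: list[Page]) -> list[Page]:
    """Return pages in descending priority, so that popular pages with
    stale local copies are re-crawled first."""
    now = time.time()
    return sorted(pages, key=lambda p: revisit_priority(p, now), reverse=True)

Under this scheme a page is pushed to the front of the revisit queue either because it is important (high PageRank) or because its local copy has grown stale (high age), which matches the trade-off the abstract describes between wasted bandwidth and search quality.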
Save time & money - Smart Internet Solutions      Why do you need a reservation system?