Automatic extraction of informative blocks from webpages

Sandip Debnath, Prasenjit Mitra, C. Lee Giles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

56 Scopus citations

Abstract

Search engines crawl and index webpages depending upon their informative content. However, webpages, especially dynamically generated ones, contain items that cannot be classified as "primary content", e.g., navigation side-bars, advertisements, and copyright notices. Most end-users search for the primary content and largely do not seek the non-informative content. A tool that helps an end-user or application automatically search and process information from webpages must separate the "primary content blocks" from the other blocks. In this paper, two new algorithms, ContentExtractor and FeatureExtractor, are proposed. The algorithms identify primary content blocks by i) looking for blocks that do not occur a large number of times across webpages and ii) looking for blocks with desired features, respectively. They identify the primary content blocks with high precision and recall, reduce the storage requirements of search engines, and yield smaller indexes, and thereby faster search times and better user satisfaction. On several thousand webpages obtained from 11 news websites, our algorithms significantly outperform the entropy-based algorithm proposed by Lin and Ho [7] in both accuracy and run-time.
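
The abstract only sketches how the two algorithms work. The snippet below is a minimal, hypothetical illustration of the redundancy idea behind ContentExtractor: partition each page into blocks and discard blocks whose normalized content recurs across a large fraction of pages from the same site (navigation bars, advertisements, copyright notices). The tag-based block splitting, the MD5 fingerprinting, and the `max_fraction` threshold are assumptions made for this sketch, not the paper's actual block-partitioning scheme or similarity measure.

```python
# Hypothetical sketch of redundancy-based block filtering (not the paper's exact algorithm).
from collections import Counter
import hashlib
import re


def split_into_blocks(html: str) -> list[str]:
    """Naively split a page into candidate blocks on div/table boundaries.

    Illustrative only; the real systems use the page's HTML structure more carefully.
    """
    parts = re.split(r"(?i)</?(?:div|table|tr|td)[^>]*>", html)
    return [p.strip() for p in parts if p and p.strip()]


def block_fingerprint(block: str) -> str:
    """Hash the whitespace-normalized, lowercased text so near-identical blocks collide."""
    normalized = " ".join(block.lower().split())
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()


def extract_primary_blocks(pages: list[str], max_fraction: float = 0.3) -> list[list[str]]:
    """Keep only blocks that appear on at most `max_fraction` of the pages.

    Blocks repeated across many pages of a site (navigation, ads, copyright
    notices) are treated as non-informative and dropped.
    """
    page_blocks = [split_into_blocks(html) for html in pages]

    # Count each fingerprint at most once per page.
    counts: Counter[str] = Counter()
    for blocks in page_blocks:
        counts.update({block_fingerprint(b) for b in blocks})

    threshold = max(1, int(max_fraction * len(pages)))
    return [
        [b for b in blocks if counts[block_fingerprint(b)] <= threshold]
        for blocks in page_blocks
    ]


# Example usage (assuming a list of HTML strings crawled from one news site):
# primary = extract_primary_blocks(pages)
```

A FeatureExtractor-style variant would instead score each block on desired features (for example, the ratio of text to links or the presence of long sentences) and keep the highest-scoring blocks, rather than relying on cross-page redundancy.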

Original language: English (US)
Title of host publication: Applied Computing 2005 - Proceedings of the 20th Annual ACM Symposium on Applied Computing
Pages: 1722-1726
Number of pages: 5
Volume: 2
DOIs
State: Published - 2005
Event: 20th Annual ACM Symposium on Applied Computing - Santa Fe, NM, United States
Duration: Mar 13 2005 - Mar 17 2005

Other

Other: 20th Annual ACM Symposium on Applied Computing
Country/Territory: United States
City: Santa Fe, NM
Period: 3/13/05 - 3/17/05

All Science Journal Classification (ASJC) codes

  • Software
