Measuring the web crawler ethics

C. Lee Giles, Yang Sun, Isaac G. Councill

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Scopus citations


Web crawlers are highly automated and seldom regulated manually. The diversity of crawler activities often leads to ethical problems such as spam and denial-of-service attacks. In this research, quantitative models are proposed to measure web crawler ethics based on crawler behavior on web servers. We investigate and define rules for measuring crawler ethics, defined as the extent to which web crawlers respect the regulations set forth in robots.txt configuration files. We propose a vector space model to represent crawler behavior and measure the ethics of web crawlers based on these behavior vectors. The results show that ethicality scores vary significantly among crawlers. Most commercial web crawlers behave ethically; however, many still consistently violate or misinterpret certain robots.txt rules. We also measure the ethics of major search engine crawlers in terms of return on investment. The results show that Google scores higher than other search engines for a US website but lower than Baidu for Chinese websites.
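The abstract's idea of a behavior vector and an ethicality score can be illustrated with a minimal sketch. This is not the paper's exact formulation: the rule names, weights, and scoring function below are hypothetical stand-ins, assuming a crawler is represented by its per-rule robots.txt violation rates and scored by a weighted sum (lower meaning more ethical).

```python
# Illustrative sketch only: represent each crawler as a vector of per-rule
# violation rates against robots.txt directives, then score ethicality as a
# weighted sum of those rates. Rule names and weights are hypothetical.

RULES = ["disallow", "crawl-delay", "visit-time", "request-rate"]
WEIGHTS = {"disallow": 1.0, "crawl-delay": 0.5, "visit-time": 0.3, "request-rate": 0.5}

def behavior_vector(violations, requests):
    """Map each rule to the fraction of requests that violated it.

    violations: dict mapping rule name -> violation count observed in logs.
    requests:   total number of requests the crawler made.
    """
    if requests == 0:
        return {rule: 0.0 for rule in RULES}
    return {rule: violations.get(rule, 0) / requests for rule in RULES}

def ethicality_score(vector):
    """Weighted sum of violation rates; 0.0 means fully compliant."""
    return sum(WEIGHTS[rule] * vector[rule] for rule in RULES)

# Example: a crawler that violated Disallow on 5 of 100 requests and
# ignored Crawl-delay on 20 of them.
vec = behavior_vector({"disallow": 5, "crawl-delay": 20}, requests=100)
score = ethicality_score(vec)  # 1.0*0.05 + 0.5*0.20 = 0.15
```

Under this scheme, comparing scores across crawlers against the same server logs gives the kind of per-crawler ranking the abstract reports.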

Original language: English (US)
Title of host publication: Proceedings of the 19th International Conference on World Wide Web, WWW '10
Number of pages: 2
State: Published - 2010
Event: 19th International World Wide Web Conference, WWW2010 - Raleigh, NC, United States
Duration: Apr 26 2010 - Apr 30 2010

Publication series

Name: Proceedings of the 19th International Conference on World Wide Web, WWW '10


Other: 19th International World Wide Web Conference, WWW2010
Country/Territory: United States
City: Raleigh, NC

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computer Science Applications

