TY - JOUR
T1 - Exploiting the human-machine gap in image recognition for designing CAPTCHAs
AU - Datta, Ritendra
AU - Li, Jia
AU - Wang, James Z.
N1 - Funding Information:
Manuscript received September 09, 2008; revised April 17, 2009. First published May 19, 2009; current version published August 14, 2009. This material is based upon work supported by the National Science Foundation under Grant 0347148, Grant 0219272, and Grant 0705210. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Klara Nahrstedt.
PY - 2009/9
Y1 - 2009/9
N2 - Security researchers have, for a long time, devised mechanisms to prevent adversaries from conducting automated network attacks, such as denial-of-service, which lead to significant wastage of resources. On the other hand, several attempts have been made to automatically recognize generic images, make them semantically searchable by content, annotate them, and associate them with linguistic indexes. In the course of these attempts, the limitations of state-of-the-art algorithms in mimicking human vision have become exposed. In this paper, we explore the exploitation of this limitation for potentially preventing automated network attacks. While undistorted natural images have been shown to be algorithmically recognizable and searchable by content to moderate levels, controlled distortions of specific types and strengths can potentially make machine recognition harder without affecting human recognition. This difference in recognizability makes it a promising candidate for automated Turing tests [completely automated public Turing test to tell computers and humans apart (CAPTCHAs)] which can differentiate humans from machines. We empirically study the application of controlled distortions of varying nature and strength, and their effect on human and machine recognizability. While human recognizability is measured on the basis of an extensive user study, machine recognizability is based on memory-based content-based image retrieval (CBIR) and matching algorithms. We give a detailed description of our experimental image CAPTCHA system, IMAGINATION, that uses systematic distortions at its core. A significant research topic within signal analysis, CBIR is actually conceived here as a tool for an adversary, so as to help us design more foolproof image CAPTCHAs.
UR - http://www.scopus.com/inward/record.url?scp=69749128717&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=69749128717&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2009.2022709
DO - 10.1109/TIFS.2009.2022709
M3 - Article
AN - SCOPUS:69749128717
SN - 1556-6013
VL - 4
SP - 504
EP - 518
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
IS - 3
M1 - 4956990
ER -