TY - JOUR
T1 - Direct-to-consumer medical machine learning and artificial intelligence applications
AU - Babic, Boris
AU - Gerke, Sara
AU - Evgeniou, Theodoros
AU - Cohen, I. Glenn
N1 - Funding Information:
S.G. and I.G.C. were supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784).
Publisher Copyright:
© 2021, Springer Nature Limited.
PY - 2021/4
Y1 - 2021/4
N2 - Direct-to-consumer medical artificial intelligence/machine learning applications are increasingly used for a variety of diagnostic assessments, and the emphasis on telemedicine and home healthcare during the COVID-19 pandemic may further stimulate their adoption. In this Perspective, we argue that the artificial intelligence/machine learning regulatory landscape should operate differently when a system is designed for clinicians/doctors as opposed to when it is designed for personal use. Direct-to-consumer applications raise unique concerns due to the nature of consumer users, who tend to be limited in their statistical and medical literacy and risk averse about their health outcomes. This creates an environment where false alarms can proliferate and burden public healthcare systems and medical insurers. While similar situations exist elsewhere in medicine, the ease and frequency with which artificial intelligence/machine learning apps can be used, and their increasing prevalence in the consumer market, call for careful reflection on how to effectively regulate them. We suggest regulators should strive to better understand how consumers interact with direct-to-consumer medical artificial intelligence/machine learning apps, particularly diagnostic ones, and this requires more than a focus on the system’s technical specifications. We further argue that the best regulatory review would also consider such technologies’ social costs under widespread use.
UR - http://www.scopus.com/inward/record.url?scp=85104750243&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104750243&partnerID=8YFLogxK
U2 - 10.1038/s42256-021-00331-0
DO - 10.1038/s42256-021-00331-0
M3 - Review article
AN - SCOPUS:85104750243
SN - 2522-5839
VL - 3
SP - 283
EP - 287
JO - Nature Machine Intelligence
JF - Nature Machine Intelligence
IS - 4
ER -