Project Details
Description
Automated machine learning (AutoML) represents a new machine learning paradigm that automates the pipeline from raw data to deployable models, enabling a much wider range of people to use machine learning techniques. However, each stage of this pipeline is subject to malicious attacks, which can lead to inaccurate or vulnerable models. This project's goal is to understand how the technologies underlying AutoML and the ways it is adopted change the security risks around machine learning, and how possible defenses change when AutoML is used. The success of this project will not only improve the security of AutoML but also promote more principled practices for building and operating machine learning systems in general, while contributing to knowledge in the areas of security, machine learning, and human-computer interaction.

The project has three main sub-goals: accounting for the full spectrum of security risks that arise around AutoML; understanding the fundamental factors that drive such risks; and designing for machine learning practitioners without extensive expertise. To accomplish these goals, the team will (i) better understand current practices around AutoML through user studies and interviews; (ii) empirically and analytically explore the security vulnerabilities of AutoML-generated models by assessing these models on widely used datasets; (iii) analyze the results of the first two activities to develop a comprehensive accounting of underlying factors, such as standardization of algorithmic choices in the technology or over-reliance on automated metrics by users; and (iv) develop new principles, methodologies, and tools to mitigate the aforementioned risks. The team will also integrate the work into a number of college courses and conduct public outreach to raise awareness of the role machine learning plays in everyday life.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
| Status | Active |
| --- | --- |
| Effective start/end date | 10/1/22 → 9/30/25 |
Funding
- National Science Foundation: $500,000.00