Distributed computing plays a central role in enabling modern machine learning and artificial intelligence applications. This project develops new theoretical frameworks, analyses, and techniques for distributed computing and machine learning, aiming to (i) accelerate computation by overcoming system bottlenecks, (ii) ensure accurate computation in the presence of hardware errors and faults, and (iii) enable data-processing approaches that adhere to data-privacy constraints. These attributes will be enabled by inducing a controlled amount of redundancy into computations based on coding theory, a field that has enabled modern data communication and storage technologies. To obtain a realistic understanding of what is possible in the long run, the developed techniques will be accompanied by fundamental bounds on the tradeoffs between computation accuracy, data privacy, error tolerance, and redundancy overheads. The project will disseminate outcomes and raise awareness of the developed research in the broader scientific community through publications, tutorials, and curricular integration.

Coded computing is a sub-area of information and coding theory that induces redundancy into distributed computing. It has emerged as a promising paradigm to relieve straggler, communication, and data-privacy bottlenecks in large-scale distributed machine learning. Yet state-of-the-art coded-computing techniques, mostly devised to enable exact reconstruction of the computation output, have fundamental efficiency limitations, particularly for nonlinear computation tasks. This project develops techniques for approximate coded computing, wherein the decoder aims to obtain the function output within a prescribed distortion limit, and data-privacy constraints are posed as limits on differential-privacy parameters.
The research will be conducted in three closely connected thrusts: (i) coding schemes for fault-tolerant approximate matrix multiplication, (ii) coding schemes for fault-tolerant approximate nonlinear computations beyond matrix multiplication, and (iii) coding schemes for differentially private computations. The techniques developed will combine ideas from information and coding theory, mathematical approximation theory, and differential privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
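To make the coded-computing idea concrete, the following is a minimal illustrative sketch (not the project's actual scheme) of straggler-tolerant coded matrix-vector multiplication: the rows of A are split into two blocks, a third "parity" worker is assigned the block sum, and the full product is recoverable from any two of the three workers' results.

```python
# Illustrative sketch of coded matrix-vector multiplication with one
# parity block; all names and the splitting scheme are hypothetical.

def split_rows(A, k):
    """Split matrix A (a list of rows) into k equal row blocks."""
    n = len(A) // k
    return [A[i * n:(i + 1) * n] for i in range(k)]

def matvec(block, x):
    """Multiply a row block by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in block]

def block_add(B1, B2):
    """Elementwise sum of two row blocks (the parity encoding)."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(B1, B2)]

# Encode: three workers hold A1, A2, and the parity block A1 + A2.
A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
A1, A2 = split_rows(A, 2)
tasks = [A1, A2, block_add(A1, A2)]

# Suppose worker 0 straggles: workers 1 and 2 alone suffice.
y2 = matvec(tasks[1], x)        # A2 @ x
y_parity = matvec(tasks[2], x)  # (A1 + A2) @ x
y1 = [p - b for p, b in zip(y_parity, y2)]  # recover A1 @ x
result = y1 + y2                # full product A @ x, despite worker 0
print(result)
```

Here the redundancy overhead is one extra block of work, and any single straggler or fault can be tolerated; the project's focus on *approximate* coded computing relaxes exact recovery of this kind to recovery within a prescribed distortion, which can reduce such overheads, especially for nonlinear tasks.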
Effective start/end date: 6/1/23 → 5/31/26
- National Science Foundation: $350,000.00