Project Details
Description
Extended reality (XR) technologies, including virtual reality (VR) and augmented reality (AR), are transforming how people interact with the world by merging virtual content with the physical world and creating immersive, interactive experiences. To reach their full potential, next-generation XR systems demand a high degree of context awareness, that is, a detailed understanding of both user behaviors and surrounding environmental conditions. With enhanced context awareness, XR systems can deliver virtual content that is personalized, timely, and highly relevant, adapting to user interactions and responding to changes in the surrounding environment.

To achieve this goal, this project builds a new class of retrieval-augmented generation (RAG)-empowered XR systems that bring together the power of large language models (LLMs) and localized, context-rich knowledge databases to make XR systems more intelligent and adaptive. The project builds and maintains an accurate, up-to-date, and diverse knowledge database, which integrates diverse sources of contextual information, such as 3D object data, egocentric images, text inputs, and user-specific data like head pose, eye gaze, and user preferences. The project also makes context-aware XR more resource-efficient and low-latency by strategically leveraging the collaboration of XR devices with nearby edge servers to process complex and dynamic contextual inputs.

This project will lay the foundation for context-aware XR applications across multiple domains, such as commerce, entertainment, manufacturing, and social interactions. It will support a wide range of use cases, including smart homes, intelligent manufacturing, and collaborative social XR platforms, where systems can intelligently adapt to both individual and group user behaviors, as well as dynamic environmental conditions. The project will train several cohorts of undergraduate and graduate students.
The findings of this project will be presented at K-12-oriented events and at multiple venues in the field.

This project designs, implements, and evaluates context-aware XR systems that can understand and respond to user states and environmental conditions. The research focuses on three thrusts. The first thrust builds a RAG system with a comprehensive knowledge database that integrates multimodal context data of XR users and environments. The second thrust designs resource-efficient update mechanisms that minimize the latency of executing context-aware XR algorithms while maintaining high accuracy of context awareness under the resource constraints of edge servers. The third thrust tests the context-aware XR systems by building XR emulators and implementing real-world prototypes, with user studies evaluating performance based on real-time interactions. The project makes intellectual contributions to the study of XR, edge computing, and machine learning. It augments LLMs with a personalized, localized knowledge database collected in real time from XR users and their environments.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
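To make the first thrust concrete, the sketch below illustrates the general RAG pattern the abstract describes: tagged multimodal context entries (gaze, 3D objects, preferences) are stored in a local knowledge database, the most relevant entries are retrieved for a query, and they are prepended to an LLM prompt. All names (`ContextStore`, the toy bag-of-words `embed`, the sample entries) are illustrative assumptions, not the project's actual design; a real system would use learned multimodal encoders and a vector index.

```python
# Minimal sketch of RAG over a local multimodal context database (illustrative only).
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real XR system would use a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ContextStore:
    """Hypothetical local knowledge database of tagged context entries."""
    def __init__(self):
        self.entries = []  # list of (modality, text, embedding)

    def add(self, modality, text):
        self.entries.append((modality, text, embed(text)))

    def retrieve(self, query, k=2):
        # Rank all entries by similarity to the query; keep the top k.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[2]), reverse=True)
        return ranked[:k]

# Populate the store with sample multimodal context (invented examples).
store = ContextStore()
store.add("eye_gaze", "user gaze fixated on the coffee machine")
store.add("3d_object", "coffee machine located on kitchen counter")
store.add("preference", "user prefers espresso in the morning")

# Retrieve context and assemble an augmented prompt for the LLM.
query = "how do I make coffee"
hits = store.retrieve(query, k=2)
prompt = "Context:\n" + "\n".join(f"- [{m}] {t}" for m, t, _ in hits) \
         + f"\nQuestion: {query}"
print(prompt)
```

In an edge-assisted deployment (thrust two), the retrieval step would run on a nearby edge server so the headset only ships queries and receives the augmented prompt or the LLM's answer.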
| Status | Finished |
|---|---|
| Effective start/end date | 10/1/25 → 1/31/26 |
Funding
- National Science Foundation: $199,444.00