Project Details
Description
This project will endow small aerial vehicles (e.g., quadcopters) with autonomous, universal perching capability on stationary or moving surfaces of arbitrary orientations, thereby expanding their operational capabilities in reconnaissance, inspection, surveillance, environmental monitoring, and search and rescue. For example, it will enable them to land on a sailing ship that heaves and sways with the sea, to hitchhike onto a moving ground or aerial platform for charging or safety, and to assist a human pilot in landing a drone on self-selected targets (e.g., walls, power lines, or the underside of a bridge). The research will focus on the co-design of embodied physical and computational intelligence through an integrated learning framework to achieve robust perching across a wide range of conditions. The project will also create a STEM educational framework for K-12 students through visually appealing, interactive robotic flight and perching experiments that introduce multidisciplinary concepts in robotics, machine learning, mechanical design, smart materials, and flight principles. Furthermore, the research outcomes will be integrated into educational and outreach modules for undergraduate students and general workforce development, leveraging the newly NSF-funded Center for Autonomous Air Mobility and Sensing (CAAMS) at Pennsylvania State University.

The objective of this research is to combine the design and learning modalities of both physically embodied intelligence and computational intelligence to enable the wide range of dynamic touchdown mechanisms necessary for robust omnidirectional perching of small aerial vehicles. Physical intelligence will be achieved via a novel landing gear system with an array of bio-inspired, miniature robotic tarsi whose compliance can be rapidly tuned on the spot during touchdown.
Computational intelligence will be achieved via 1) the initiation and control of perching angular maneuvers by learning predictive policy regions and the associated policy mapping for motor control, and 2) vision-based, optical-flow-constrained tau-guidance that simultaneously brings the robot into the target policy region and to the target landing location. Computational intelligence will then be integrated with physical intelligence through a two-layered framework built on the joint learning of a) mechanical design and motor control policies and b) embodied and motor control policies, where the former controls the compliance of the bio-inspired tarsi and the latter controls the aerial maneuvers. Overall, this project will advance knowledge of the co-design, integration, interplay, and trade-offs between computational and physical intelligence in robots to achieve novel and robust capabilities.

This project is supported by the cross-directorate Foundational Research in Robotics program, jointly managed and funded by the Directorates for Engineering (ENG) and Computer and Information Science and Engineering (CISE). This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
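The tau-guidance mentioned above draws on time-to-contact (tau) theory, in which an approach is regulated so that the instantaneous time-to-contact shrinks at a constant rate. As a minimal sketch of that idea only (the function name and numbers below are illustrative and not part of the project's implementation), a constant tau-dot law with gain 0 < k < 0.5 closes the gap so that the closing speed also vanishes at touchdown:

```python
def tau_gap(x0, v0, k, t):
    """Gap x(t) under constant tau-dot guidance (tau_dot = k).

    tau = x / x_dot is the instantaneous time-to-contact. Holding its
    time derivative at a constant 0 < k < 0.5 yields the closed-form
    gap x(t) = x0 * (tau(t) / tau0) ** (1/k), which brings both the
    gap and the closing speed smoothly to zero at contact.
    x0 > 0: initial gap (m); v0 < 0: initial closing speed (m/s).
    """
    tau0 = x0 / v0            # initial time-to-contact (negative while closing)
    tau = tau0 + k * t        # tau rises linearly toward zero
    return x0 * (tau / tau0) ** (1.0 / k)

# Illustrative numbers: a 2 m gap approached at 1 m/s with k = 0.4.
x0, v0, k = 2.0, -1.0, 0.4
t_contact = -(x0 / v0) / k                       # tau reaches zero at t = 5 s
gap_late = tau_gap(x0, v0, k, 0.98 * t_contact)  # residual gap near touchdown
```

With these numbers the gap closes in 5 s, and near touchdown it shrinks far faster than linearly, reflecting the soft-touchdown property that makes tau-guidance attractive for perching.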
| Status | Active |
| --- | --- |
| Effective start/end date | 8/15/23 → 7/31/26 |
Funding
- National Science Foundation: $700,021.00