Abstract
Experimental measurement of the position and attitude (pose) of a rigid target using machine vision is of particular importance to autonomous robotic manipulation. Traditionally, pose has been obtained by solving the monocular four-point pose problem, which encompasses three distinct subproblems: inverse perspective; calibration of internal camera parameters; and knowledge of the pose of the camera (external camera parameters). To this end, a new unified concept for monocular pose measurement using computational neural networks has been developed which obviates the need to estimate camera parameters and which provides a rapid solution of inverse perspective with compensation for nonhomogeneous lens distortion. Input neurons are the (x, y) image coordinates of target landmarks. Output neurons are the (X, Y, Z, roll, pitch, yaw) target position and attitude relative to an external reference frame. Modified back-propagation has been used to train the neural network on both synthetic and experimental training sets for comparison with current four-point pose methods. Recommendations are provided for the number of neural layers, the number of neurons per layer, and richness versus breadth of pose training sets.
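The mapping described above — landmark image coordinates in, pose out — can be sketched as a small fully connected network trained with plain back-propagation. The sketch below is illustrative only: the layer sizes, learning rate, and the synthetic "projection" used to fabricate training data are assumptions, not the paper's actual network or camera model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: random 6-DOF poses and a fabricated
# nonlinear "projection" standing in for a real camera model,
# mapping each pose to four (x, y) landmark coordinates (8 values).
n_samples = 512
poses = rng.uniform(-1.0, 1.0, size=(n_samples, 6))   # (X, Y, Z, roll, pitch, yaw)
proj = rng.normal(size=(6, 8))                        # fake projection matrix
images = np.tanh(poses @ proj) + 0.01 * rng.normal(size=(n_samples, 8))

# One hidden layer with tanh activation, linear output layer.
W1 = rng.normal(scale=0.3, size=(8, 24)); b1 = np.zeros(24)
W2 = rng.normal(scale=0.3, size=(24, 6)); b2 = np.zeros(6)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
for epoch in range(500):
    h, out = forward(images)
    err = out - poses                       # gradient of mean-squared error
    losses.append(float((err ** 2).mean()))
    # Back-propagate through the two layers and take a gradient step.
    gW2 = h.T @ err / n_samples
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # tanh derivative
    gW1 = images.T @ dh / n_samples
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"training loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the network learns the inverse mapping directly from image coordinates to pose, any lens distortion present in the training images is absorbed into the learned weights, which is why no explicit camera calibration step is needed.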
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 472-479
Number of pages | 8
Journal | Proceedings of SPIE - The International Society for Optical Engineering
Volume | 1608
State | Published - Mar 1 1992
Event | Intelligent Robots and Computer Vision X: Neural, Biological, and 3-D Methods 1991 - Boston, United States (Nov 14, 1991 → Nov 15, 1991)
All Science Journal Classification (ASJC) codes
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering