Abstract
Speech recognition (SR) is a technology that can improve the accessibility of computer systems for people with physical disabilities or situation-induced disabilities. The wide adoption of SR technology, however, is hampered by the difficulty of correcting system errors. HCI researchers have attempted to improve the error correction process by employing multi-modal or speech-based interfaces. Raw confidence scores (indicators of the system's confidence in an output) have had only limited success in facilitating anchor specification during navigation. This paper applies a machine learning technique, specifically a Naïve Bayes classifier, to assist in detecting dictation errors. To improve the generalizability of the classifiers, input features were obtained from generic SR output. Evaluation on speech corpora showed that the Naïve Bayes classifier performed better than raw confidence scores.
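The abstract describes training a Naïve Bayes classifier on features derived from generic SR output and comparing it against a raw confidence-score threshold. The following is a minimal sketch of that setup, not the authors' implementation: the feature set (raw confidence, word length, a generic recognizer score), the synthetic data, and the 0.4 threshold are all illustrative assumptions.

```python
# Sketch: Naive Bayes error detection vs. a raw-confidence threshold baseline.
# Features and labels below are synthetic placeholders; the paper only states
# that input features come from generic SR output.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Each row is one recognized word:
# [raw confidence score, word length, generic recognizer score]
n = 1000
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),   # raw confidence score
    rng.integers(1, 12, n),     # word length in characters
    rng.normal(0.0, 1.0, n),    # generic recognizer score
])
# Label 1 = misrecognized word, 0 = correct (synthetic for this sketch).
y = (X[:, 0] + 0.1 * rng.normal(size=n) < 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Naive Bayes classifier trained on the SR-derived features.
clf = GaussianNB()
clf.fit(X_train, y_train)

# Baseline: flag a word as an error when its raw confidence is below a threshold.
baseline_pred = (X_test[:, 0] < 0.4).astype(int)

print("Naive Bayes:\n", classification_report(y_test, clf.predict(X_test)))
print("Raw-confidence threshold:\n", classification_report(y_test, baseline_pred))
```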
Original language | English (US) |
---|---|
Number of pages | 1 |
Journal | Proceedings of the Annual Hawaii International Conference on System Sciences |
State | Published - Nov 10 2005 |
Event | 38th Annual Hawaii International Conference on System Sciences, Big Island, HI, United States. Duration: Jan 3 2005 → Jan 6 2005 |
All Science Journal Classification (ASJC) codes
- General Engineering