TY - JOUR
T1 - Hands-free, speech-based navigation during dictation
T2 - Difficulties, consequences, and solutions
AU - Sears, Andrew
AU - Feng, Jinjuan
AU - Oseitutu, Kwesi
AU - Karat, Clare-Marie
N1 - Funding Information:
Support. This material is based on work supported by the National Science Foundation under Grant No. IIS-9910607. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF).
PY - 2003
Y1 - 2003
N2 - Speech recognition technology continues to improve, but users still experience significant difficulty using the software to create and edit documents. In fact, a recent study confirmed that users spent 66% of their time on correction activities and only 33% on dictation. Of particular interest is the fact that one third of the users' time was spent simply navigating from one location to another. In this article we investigate the efficacy of hands-free, speech-based navigation in the context of dictation-oriented activities. We provide detailed data regarding failure rates, reasons for failures, and the consequences of these failures. Our results confirm that direction-oriented navigation (e.g., Move up two lines) is less effective than target-oriented navigation (e.g., Select target). We identify the three most common reasons behind the failure of speech-based navigation commands: recognition errors, issuing of invalid commands, and pausing in the middle of issuing a command. We also document the consequences of failed speech-based navigation commands. As a result of this analysis, we identify changes that will reduce failure rates and lessen the consequences of some remaining failures. We also propose a more substantial set of changes to simplify direction-based navigation and enhance target-based navigation. The efficacy of this final set of recommendations must be evaluated through future empirical studies.
AB - Speech recognition technology continues to improve, but users still experience significant difficulty using the software to create and edit documents. In fact, a recent study confirmed that users spent 66% of their time on correction activities and only 33% on dictation. Of particular interest is the fact that one third of the users' time was spent simply navigating from one location to another. In this article we investigate the efficacy of hands-free, speech-based navigation in the context of dictation-oriented activities. We provide detailed data regarding failure rates, reasons for failures, and the consequences of these failures. Our results confirm that direction-oriented navigation (e.g., Move up two lines) is less effective than target-oriented navigation (e.g., Select target). We identify the three most common reasons behind the failure of speech-based navigation commands: recognition errors, issuing of invalid commands, and pausing in the middle of issuing a command. We also document the consequences of failed speech-based navigation commands. As a result of this analysis, we identify changes that will reduce failure rates and lessen the consequences of some remaining failures. We also propose a more substantial set of changes to simplify direction-based navigation and enhance target-based navigation. The efficacy of this final set of recommendations must be evaluated through future empirical studies.
UR - http://www.scopus.com/inward/record.url?scp=0042739414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0042739414&partnerID=8YFLogxK
U2 - 10.1207/S15327051HCI1803_2
DO - 10.1207/S15327051HCI1803_2
M3 - Article
AN - SCOPUS:0042739414
SN - 0737-0024
VL - 18
SP - 229
EP - 257
JO - Human-Computer Interaction
JF - Human-Computer Interaction
IS - 3
ER -