This paper examines how dialog systems might learn better strategies for handling automatic speech recognition (ASR) errors from the ways people handle such errors. In the well-known Wizard of Oz paradigm for studying human-computer interaction, a user participates in a dialog with what she believes to be a machine but is actually another person, the wizard. The Loqui project ablates its wizards, removing human capabilities one at a time. This paper details a pilot experiment conducted to develop specifications for Loqui's wizard ablation studies. In the pilot task, a speaker requests books in a library application. The key finding is that, when bolstered by a very large database of titles, humans are remarkably successful at interpreting poorly recognized ASR output. Their repertoire of clever, domain-independent methods draws on partial matches, string length, word order, phonetic similarity, and semantics. The long-term goals of this work are to provide dialog systems with new ways to ask users for help, and to provide users with a greater understanding of system functionality. Once implemented, these methods should substantially reduce user frustration with automated dialog systems and improve task success.
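The human matching repertoire described above can be approximated as a scoring function over candidate titles. The following is a minimal illustrative sketch, not the paper's method: the toy title list, the score weights, and the simplified Soundex-style code standing in for phonetic similarity are all assumptions introduced for this example.

```python
import difflib

# Hypothetical toy catalog; a real system would query the library's
# very large title database.
TITLES = [
    "The Old Man and the Sea",
    "A Tale of Two Cities",
    "The Sound and the Fury",
    "Pride and Prejudice",
]

def soundex(word):
    """Simplified Soundex variant: a stand-in for phonetic similarity."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    if not word:
        return ""
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        prev = code
    return (out + "000")[:4]  # pad/truncate to the usual 4 characters

def score(hypothesis, title):
    """Combine three cues the wizards reportedly used: partial string
    match, string-length similarity, and phonetic overlap of words."""
    h, t = hypothesis.lower(), title.lower()
    string_sim = difflib.SequenceMatcher(None, h, t).ratio()
    len_sim = 1 - abs(len(h) - len(t)) / max(len(h), len(t))
    h_codes = {soundex(w) for w in h.split()}
    t_codes = {soundex(w) for w in t.split()}
    phon_sim = len(h_codes & t_codes) / max(len(h_codes | t_codes), 1)
    # Weights are arbitrary for illustration.
    return 0.5 * string_sim + 0.2 * len_sim + 0.3 * phon_sim

def best_match(asr_hypothesis, titles=TITLES):
    """Return the catalog title that best explains a noisy ASR string."""
    return max(titles, key=lambda t: score(asr_hypothesis, t))

if __name__ == "__main__":
    # A garbled ASR hypothesis for "The Old Man and the Sea".
    print(best_match("the old men and the see"))
```

Even with these crude cues, a misrecognized request like "the old men and the see" lands on the intended title, illustrating why a large title database makes human-style error recovery feasible.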