Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

Jeff Rimland, Mark Ballora

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization so that the data can be interpreted by listening. For more complex problems, the challenge lies in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that combines Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
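
To make the two ideas in the abstract concrete, the sketch below is a minimal, assumed illustration (not drawn from the paper): it shows the conventional data-to-sound mapping (value to pitch and volume) alongside a toy CEP-style rule that promotes a low-level pattern (three consecutive rising samples) into a higher-level "utterance" event carrying prosodic parameters that a speech synthesizer could render. All names, thresholds, and parameters here are hypothetical.

```python
# Illustrative sketch only (assumed, not the authors' implementation).
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str        # phrase a speech synthesizer would speak
    pitch_hz: float  # base pitch of the spoken phrase
    volume: float    # loudness in the range 0.0-1.0

def map_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linear data-to-pitch mapping, the typical sonification approach."""
    t = 0.0 if hi <= lo else (value - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)
    return f_min + t * (f_max - f_min)

def rising_trend(window):
    """Toy CEP pattern: the last three samples are strictly increasing."""
    return len(window) >= 3 and window[-3] < window[-2] < window[-1]

def sonify(stream, lo, hi):
    window, events = [], []
    for value in stream:
        window.append(value)
        pitch = map_to_pitch(value, lo, hi)
        volume = 0.5 if hi <= lo else min(max((value - lo) / (hi - lo), 0.0), 1.0)
        events.append(("tone", round(pitch, 1), round(volume, 2)))  # low-level mapping
        if rising_trend(window):
            # Higher-level event: prosody (raised pitch, added loudness) encodes urgency.
            events.append(Utterance("value rising", pitch_hz=round(pitch, 1),
                                    volume=min(volume + 0.2, 1.0)))
            window.clear()
    return events

if __name__ == "__main__":
    for event in sonify([3, 5, 8, 4, 6, 9, 12], lo=0, hi=15):
        print(event)
```

The design point the sketch is meant to convey is the layering: raw samples yield continuous tone parameters, while CEP rules emit sparser, utterance-level events whose vocal nuances carry the higher-level meaning.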

Original language: English (US)
Title of host publication: Next-Generation Analyst II
Publisher: SPIE
ISBN (Print): 9781628410594
DOIs
State: Published - 2014
Event: Next-Generation Analyst II - Baltimore, MD, United States
Duration: May 6 2014 – May 6 2014

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 9122
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Other

Other: Next-Generation Analyst II
Country/Territory: United States
City: Baltimore, MD
Period: 5/6/14 – 5/6/14

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Applied Mathematics
  • Electrical and Electronic Engineering
  • Computer Science Applications
