Dr. Jeff Stanton, Syracuse University – Sonification of Data


In today’s Academic Minute, Dr. Jeff Stanton of Syracuse University reveals efforts to represent large data sets using sound.

About Dr. Stanton

Jeff Stanton is Professor and Senior Associate Dean in the School of Information Studies at Syracuse University. As a result of his interests in data mining and machine learning, he has begun work in an emerging area called data science, which focuses on the management, analysis, and visualization of large data sets. He earned his Ph.D. at the University of Connecticut.

Dr. Jeff Stanton – Sonification of Data

“Big data” is an important topic at present because many organizations have realized the value they can extract from the data they collect about their operations. One of the tools for dealing with large, complex data sets is visualization – the process of summarizing data in the form of graphs, charts, and maps. Most people learn to read charts from a young age and become adept at making sense of pie charts and bar charts. It is interesting to note, however, that we also use our ears extensively to comprehend information. Technologies ranging from fire alarms to Geiger counters translate data into sounds that help us interpret our environment and make decisions.

Starting in the early 1990s, scientists began a series of systematic investigations into how to translate data into sound. Only recently, however, have tools emerged that make the process of sonification feasible outside of the research lab. Most important among these is a free data analysis program called R. You can use R to turn data into sound. For example, here is a sonification of the bell curve, with the pitch of each note showing the height of the bell curve from left to right.

[Play bell curve demo sound.]
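The mapping behind the demo is simple: sample the bell curve from left to right and let each sample's height set a note's pitch. R is the tool named in the talk; the sketch below uses Python instead, and the note count and frequency range (220–880 Hz) are illustrative assumptions, not details from the original demo.

```python
import math

def bell_curve_pitches(n=25, f_min=220.0, f_max=880.0):
    """Sample the standard normal bell curve at n points from left to
    right and map curve height onto pitches between f_min and f_max (Hz).
    Higher curve -> higher pitch, as in the demo described above."""
    xs = [-3 + 6 * i / (n - 1) for i in range(n)]  # sample x from -3 to 3
    ys = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]
    y_max = max(ys)
    # Linear map of relative curve height onto the frequency range.
    return [f_min + (y / y_max) * (f_max - f_min) for y in ys]

pitches = bell_curve_pitches()
```

Played in order, these pitches rise to a peak at the center of the curve and fall symmetrically on the way down, which is what makes the bell shape audible.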

Almost everyone is familiar with the basic sounds and rhythms of music, so translating data into differences in pitch and tempo is straightforward, as the bell curve example suggests. Most people can also tell the difference in timbre or sound quality of different instruments, such as the trumpet and the piano. Anyone who has worn a pair of headphones also knows about panning, which is changing the left-right position of a sound. What scientists do not yet know is the best way to combine all of these aspects of sound into an effective data display. Given the huge amount of information in the world, though, it has never been more important for us to find new ways of making sense out of big data.

Production support for the Academic Minute comes from Newman’s Own, giving all profits to charity and pursuing the common good for over 30 years, and from Mount Holyoke College.
