Thursday, February 7, 2013

Big Data of Sounds

Hearing begins with the ears, but it is more than the sum of sounds. We recognize piano notes and experience music, understand speech in noisy surroundings, and localize voices in 3D. Is it because the sound transmitted to the inner ear is broken down into frequencies and mapped onto the brain the way musical notes are mapped onto a piano keyboard? But aren't we able to hear different things at once, in three dimensions? Aren't we translating frequencies into meanings on the fly, processing Big Data better than any famous statistician? As Jacob Oppenheim and Marcelo Magnasco showed, a simple decomposition of a sound into its frequency components by Fourier transform loses important information about its timing, a limitation our brain is able to overcome. Human listeners can beat the precision limits of Fourier analysis, and the brain processes the big data of sounds better than existing algorithms do. In fact, training the brain on music improves math and problem-solving skills, while a failure to process auditory signals in the brain can lead to developmental disorders such as dyslexia. More to discover, more to learn.
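The Fourier limit mentioned above can be seen in a few lines of code. The sketch below (a minimal illustration, not from the paper; the pulse width and sample rate are arbitrary choices) measures the time spread and frequency spread of a Gaussian pulse and checks that their product sits at the uncertainty bound of 1/(4π), the floor that Oppenheim and Magnasco's listeners outperformed.

```python
import numpy as np

# A Gaussian pulse is the minimum-uncertainty signal: no waveform can have
# a smaller time-spread x frequency-spread product than 1/(4*pi).
sigma = 0.01                        # pulse width in seconds (illustrative)
fs = 44100                          # sample rate in Hz (illustrative)
t = np.arange(-0.5, 0.5, 1.0 / fs)  # one second of time axis
s = np.exp(-t**2 / (2 * sigma**2))

# Time spread: standard deviation of t, weighted by signal energy |s|^2.
p_t = np.abs(s)**2
p_t /= p_t.sum()
mean_t = np.sum(p_t * t)
dt = np.sqrt(np.sum(p_t * (t - mean_t)**2))

# Frequency spread: standard deviation of f, weighted by spectral energy |S|^2.
S = np.fft.fft(s)
f = np.fft.fftfreq(len(s), 1.0 / fs)
p_f = np.abs(S)**2
p_f /= p_f.sum()
mean_f = np.sum(p_f * f)
df = np.sqrt(np.sum(p_f * (f - mean_f)**2))

print(dt * df)          # ~ 1/(4*pi) ~ 0.0796, the Fourier uncertainty bound
```

Narrowing the pulse (smaller `sigma`) sharpens `dt` but widens `df` in exact proportion, which is why any algorithm built purely on Fourier decomposition must trade timing precision against pitch precision.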


Oppenheim, Jacob N., and Marcelo O. Magnasco. "Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle." arXiv preprint arXiv:1208.4611 (2012).

Díaz, Begoña, et al. "Dysfunction of the auditory thalamus in developmental dyslexia." Proceedings of the National Academy of Sciences 109.34 (2012): 13841-13846.