For neuroscientists, human hearing is a process full of unanswered questions. How does the brain translate sounds — vibrations traveling through the air — into the patterns of neural activity that we recognize as speech, or laughter, or the footsteps of an approaching friend? And are those neural processes universal, or do they vary across cultures?
With support from the National Science Foundation’s (NSF) Directorate for Social, Behavioral, and Economic Sciences (SBE), Massachusetts Institute of Technology professor Josh McDermott is leading a research team seeking to answer those questions. Their work lies at the intersection of psychology, neuroscience and engineering.
McDermott’s team has developed an artificial neural network that recognizes speech and music, identifying spoken words and musical genres as accurately as human listeners.