Japan, The University of Tokyo
According to the WHO, around 466 million people worldwide have disabling hearing loss, 34 million of them children. A common misconception is that people with hearing impairments hear hardly anything; many actually hear a confusing cacophony of noise, which makes it difficult to focus on specific sounds or voices. Everyday outings to noisy environments such as restaurants or cafes can therefore be exceedingly difficult.

Mediated Ear provides an elegant solution to this challenge. Using deep learning, Mediated Ear can isolate any single voice from a mixed audio source containing various noises and sounds, including multiple speakers. With Mediated Ear, anyone with a hearing impairment can easily tune into the voice they want to hear and filter out distracting ambient noise. While existing solutions require highly specialized and expensive equipment, Mediated Ear works on any smartphone.

Once Mediated Ear has learned a person's voice, users can use the companion smartphone app to select that voice, isolate it, and wirelessly transmit it to their earphones. To accomplish this, Mediated Ear only needs to listen to the target person speaking for one minute. The recorded audio is then quickly processed in the cloud, where a neural network learns to extract the target's voice. Once training is complete, the neural network model is sent back to the smartphone app, enabling instantaneous on-device voice isolation.

Mediated Ear could also be incorporated into standalone hearing-aid devices or earphones. Beyond aiding those with hearing impairments, it could benefit people with ASD (Autism Spectrum Disorder), who often suffer from oversensitivity to ambient noise. Mediated Ear could also improve safety in hazardous environments such as factories, construction sites, and airports by allowing workers to filter out distracting noises. As such, Mediated Ear has the potential to empower not only those with hearing impairments, but also to truly "Empower us all".
Ken Tominaga is a software engineer in charge of the client-side development of Mediated Ear. Kunihiko Sato is a machine learning engineer responsible for designing the neural network architecture and for the backend development. We are former members of Rekimoto Lab (https://lab.rekimoto.org/) at The University of Tokyo.