A device that helps blind people “see” using musical sounds, developed by researchers from the Faculty of Medicine

Researchers from the Faculty of Medicine have developed a device that uses musical notes to help blind individuals “see” with sound. This non-invasive sensory-substitution device (SSD), developed in Prof. Amir Amedi’s lab and named the “EyeMusic”, converts images into combinations of musical notes, or “soundscapes”. The EyeMusic’s algorithm assigns a different musical instrument to each of five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed) and yellow (violin); black is represented by silence (for more on the EyeMusic, see the IOS Press publication; sample sound recordings are available on Prof. Amedi's lab website). The ultimate goal of this device, as well as of other devices developed in Amedi’s lab, is to assist blind, visually impaired and color-blind individuals in perceiving and interacting with their environment.

The researchers demonstrated that after a short training session with the device (less than 5 minutes), 18 sighted participants were able to distinguish between two targets, on either the right or the left, sounded out by the EyeMusic. Following the training, they were asked to use a joystick to point at these targets. Their arm was placed under a cover so that they could not see it moving. The targets were either shown on a computer screen or sounded out via headphones with the EyeMusic.

Shortly after the start of the experiment, the relationship between the participants’ hand movements and the movement of the on-screen cursor was changed: for example, they had to move their hand to the left for the cursor to go up. The participants learned this new relationship, or mapping, without being aware of it, while they could see the targets and the cursor. They then used this new mapping to make movements toward targets whose location they only heard via the EyeMusic. In other words, they naturally and seamlessly transferred the novel, unconsciously learned mapping between their different senses. These findings hint at a supra-modal representation of space: whether the spatial information comes from vision or from audition, it appears to be used interchangeably to build an inner representation of space that in turn guides movement within it.

The results of the study, led by Prof. Amir Amedi of the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University, together with Dr. Shelly Levy-Tzedek, an ELSC researcher in his lab, are published in this month’s issue of the Nature Publishing Group’s open-access journal Scientific Reports, and are quite encouraging. They pave the way for the development of hybrid aids for the blind, which would combine input from low-resolution visual prostheses (or residual vision), used for example to locate a nearby tree, with input from the EyeMusic, used to perceive the luscious fall colors of the leaves.

For more on the cutting-edge research done in the Amedi Lab, see Prof. Amedi’s recent TEDx talk.
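To make the color-to-instrument mapping concrete, here is a minimal Python sketch. Only the five-color assignment (and black as silence) comes from the article above; the left-to-right sweep order, the height-to-pitch convention, and the function names are assumptions added for illustration, not the lab’s actual implementation.

```python
# Illustrative sketch of an EyeMusic-style soundscape encoding.
# The color-to-instrument table is taken from the article; everything
# else (sweep order, pitch convention) is an assumption for illustration.

COLOR_TO_INSTRUMENT = {
    "white": "vocals",
    "blue": "trumpet",
    "red": "reggae organ",
    "green": "synthesized reed",
    "yellow": "violin",
    # black is represented by silence, so it is deliberately absent
}

def column_to_notes(column):
    """Map one image column (a top-to-bottom list of color names) to
    (instrument, pitch) pairs, assuming higher rows sound as higher pitches."""
    notes = []
    height = len(column)
    for row, color in enumerate(column):
        instrument = COLOR_TO_INSTRUMENT.get(color)
        if instrument is None:
            continue  # silence for black (or unrecognized) pixels
        pitch = height - row  # row 0 is the top of the image, so highest pitch
        notes.append((instrument, pitch))
    return notes

def image_to_soundscape(image):
    """Sweep an image (a list of columns) left to right, producing one
    timestep of notes per column, an assumed scanning convention."""
    return [column_to_notes(col) for col in image]

# Example: a 3-column image with a yellow pixel near the top-left and a
# blue pixel lower down in the rightmost column.
image = [
    ["black", "yellow", "black", "black", "black"],
    ["black", "black", "black", "black", "black"],
    ["black", "black", "black", "blue", "black"],
]
print(image_to_soundscape(image))
# [[('violin', 4)], [], [('trumpet', 2)]]
```

Under these assumptions, a listener hears time along the horizontal axis, pitch along the vertical axis, and color as timbre, which is what lets a single short melody convey a simple two-dimensional colored scene.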