Prof. Amir Amedi
 
Biography 
Born 1972, Jerusalem, Israel.
PhD 2005, ICNC, Hebrew University of Jerusalem.
2006 - Instructor of Neurology, Harvard Medical School.
2007 - Senior Lecturer, Hebrew University of Jerusalem.
 

Research Interests 
My lab's work ranges from basic science, probing brain plasticity and sensory integration, to technological developments that allow the blind to be more independent and even “see” using sounds and touch, much as bats and dolphins do (via Sensory Substitution Devices, SSDs), and back to applying these devices in research. In the lab, cutting-edge technologies and innovative methods are used to study perception and multisensory interactions, sensory substitution approaches, and the dynamics of brain processes.

 
Recent Projects
Reading in the blind
According to the canonical view, the human cortex is a sensory machine in which distinct regions serve the different senses (e.g. the visual, auditory and somatosensory cortices). Within the visual areas, a region known as the Visual Word Form Area (VWFA) was shown to develop expertise for reading in sighted individuals. If the brain is indeed a sensory machine, one would expect the expertise for reading by touch in the blind to reside in the somatosensory cortex. Another possibility is that it would lie in the bilateral primary visual areas, which were shown to be recruited for various non-visual tasks (such as language and tactile tasks) in the blind. However, in a recent study performed in the lab in collaboration with Laurent Cohen, we showed that the VWFA is also the peak of Braille-word-selective activation across the entire brain of the congenitally blind. Furthermore, the anatomical location of the VWFA was highly consistent across blind subjects and between blind and sighted individuals. Thus, the functional recruitment and specialization of the VWFA for reading is independent of the sensory modality in which the words are presented and, even more surprisingly, does not require any visual experience. This counters the notion that the brain is a sensory machine and suggests that it is rather a task machine, in which each area supports a specific task, computation or representation, irrespective of the input sensory modality. Future studies will test whether the metamodality of the reading areas extends also to auditory input. To this end, the shape of visual words can be translated into sounds using a visual-to-auditory sensory-substitution algorithm, similar to the system used to study the LOtv. Other future studies will investigate the specificity of the VWFA for words and the specific pathways and connections that enable the activation of the VWFA by tactile information.
 
Development of sensory substitution devices
In our laboratory, we have developed several new navigational aids. We are testing them in different environments and are using them in research on the brain's flexibility and reorganization in cases of visual deprivation.
One of the tools developed is a tiny virtual cane called the EyeCane. The cane operates as a kind of virtual flashlight, replacing or augmenting the “classic” walking stick. The device comprises sensors that estimate the distance between the user and the object at which it is pointed. This information undergoes a "sensory transformation" into complex vibrations, allowing the blind person to identify obstacles of different heights, gauge the distance to surrounding objects, and build a spatial picture through which to navigate safely. Use of the device is intuitive and can be learned within a few minutes.
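To illustrate the general principle only (not the EyeCane's actual specification), the sketch below shows how a single distance reading might be turned into a vibration cue, with closer obstacles producing stronger and faster pulses; the sensing range, pulse rates and mapping are assumptions chosen for demonstration.

```python
# Hypothetical sketch of a distance-to-vibration mapping in the spirit of the
# EyeCane: closer obstacles produce stronger, faster vibration pulses.
# The ranges and the mapping are illustrative assumptions, not the device's
# actual specification.

MAX_RANGE_M = 5.0        # assumed maximum sensing range in meters
MIN_PULSE_HZ = 2.0       # slow pulsing for distant objects
MAX_PULSE_HZ = 30.0      # rapid pulsing for very close objects


def distance_to_vibration(distance_m: float) -> dict:
    """Map an estimated distance to vibration intensity (0-1) and pulse rate (Hz)."""
    # Clamp the reading to the assumed sensing range.
    d = min(max(distance_m, 0.0), MAX_RANGE_M)
    # Nearness grows linearly as the obstacle gets closer.
    nearness = 1.0 - d / MAX_RANGE_M
    return {
        "intensity": nearness,  # 0 = off, 1 = full strength
        "pulse_rate_hz": MIN_PULSE_HZ + nearness * (MAX_PULSE_HZ - MIN_PULSE_HZ),
    }


if __name__ == "__main__":
    for d in (0.3, 1.5, 4.0):
        print(d, distance_to_vibration(d))
```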
For this research, we built a real maze as well as virtual mazes, which allowed us to assess how blind users navigate different paths and to improve their navigational abilities in both physical and virtual environments. This is complemented by the use of other devices that convert complex visual input, such as objects and faces in the user's surroundings, into auditory and tactile input.
Research on the accompanying brain activity shows that during navigation, areas of the brain normally devoted to vision also come into play. These findings point to the brain's flexibility and its organization according to task rather than sense.
Another tool we developed is a visual-to-auditory SSD called the EyeMusic, which employs pleasant musical scales to convey visual information. We demonstrated in several studies that, after a short training period, the EyeMusic can be used to guide movements much like visually guided ones. The level of accuracy reached in our studies indicates that performing daily tasks with an SSD is feasible and points to a potential for rehabilitative use.
By using both veteran sensory substitution devices such as The vOICe (Meijer 1992) and novel devices such as the EyeCane and EyeMusic developed in our lab, our group of blind and sighted volunteers is learning to interact with visual information through other senses. They face fascinating challenges such as learning to read and to identify, locate and grasp objects. They are learning to exploit the advantages of each device, such as the high resolution of The vOICe, the pleasant soundscapes of the EyeMusic and the depth information of the EyeCane.
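To illustrate the shared principle behind vOICe-style visual-to-auditory devices, here is a minimal sketch of a left-to-right column scan in which pixel height maps to pitch and brightness to loudness; the image size, frequency range and scan time are illustrative assumptions and do not reproduce the actual parameters of The vOICe or the EyeMusic.

```python
import numpy as np

# Minimal sketch of a vOICe-style visual-to-auditory mapping: the image is
# scanned column by column from left to right, pixel height is mapped to
# pitch (higher rows -> higher frequency) and pixel brightness to loudness.
# Image size, frequency range and scan duration are assumptions for the demo.

SAMPLE_RATE = 22050            # audio samples per second
SCAN_SECONDS = 1.0             # time to sweep the whole image left to right
F_LOW, F_HIGH = 300.0, 3000.0  # assumed frequency range in Hz


def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Convert a 2-D grayscale image (values in [0, 1]) into a mono waveform."""
    n_rows, n_cols = image.shape
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / n_cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    # Row 0 is the top of the image, so it gets the highest frequency.
    freqs = np.logspace(np.log10(F_HIGH), np.log10(F_LOW), n_rows)
    columns = []
    for c in range(n_cols):
        brightness = image[:, c]                      # one amplitude per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        columns.append((brightness[:, None] * tones).sum(axis=0))
    wave = np.concatenate(columns)
    return wave / (np.abs(wave).max() + 1e-9)         # normalize to [-1, 1]


if __name__ == "__main__":
    img = np.zeros((16, 16))
    np.fill_diagonal(img, 1.0)        # diagonal line from top-left to bottom-right
    wave = image_to_soundscape(img)   # heard as a pitch falling over the scan
    print(wave.shape)
```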
Additionally, we aim to create a training program that will allow potential users of these devices to learn to use them more efficiently, independently and easily.
 
Multisensory Perception
The appropriate binding of sensory inputs is fundamental to everyday experience. When crossing a road, for example, the sound of an approaching car can come from a car you can see, but also from another car outside your field of view. Multisensory perception relies on multiple streams of unisensory input, which are processed independently in the periphery (e.g. auditory and visual inputs are detected by different sensory organs and transmitted to the cortex via separate pathways), and on a multisensory integration stage during which the sensory streams are combined into a unified representation of the world. In other cases these streams must be kept apart as separate entities. In spite of recent advances, how and where this is done in humans is still unclear. In the lab we use advanced techniques and computational approaches to study this question.

We recently studied how haptically and visually conveyed shape information is processed in the brain using a multisensory design of adaptation fMRI. This technique relies on the "repetition suppression effect", the decrease in activation when an experimental condition is repeated. By presenting the same object sequentially in different modalities, one can detect areas that show a crossmodal repetition suppression effect. We identified a network of occipital (LOtv and the calcarine sulcus), parietal (aIPS) and prefrontal (precentral sulcus and the insula) areas, all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic integration of objects in humans.

In another study we applied a novel experimental design with a novel computational approach to detect unisensory and multisensory components of multisensory perception. We used a multisensory adaptation of the spectral analysis used in retinotopy studies. In our paradigm, auditory and visual stimuli were delivered in the same experimental condition with different numbers of repetitions, and the rate at which the auditory and visual stimuli drifted in and out of synchronization defined a third, interaction frequency. Spectral analysis enabled us to determine the contribution of the auditory, visual and interaction responses to a voxel's overall response. This was done within a single experimental condition, with auditory and visual stimuli presented at the same time, in and out of synchronization, in a manner similar to real-world multisensory experience. The results reveal a complex picture of auditory and visual processes in a multisensory context. Future studies will elaborate on these results using more complex stimuli and different experimental and contextual conditions; for example, one can introduce a task in which active combination of auditory and visual stimuli is required. Another effort is the identification of the functional and effective connectivity between the areas identified above.
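The sketch below illustrates the spectral-analysis idea on a simulated voxel time course: auditory and visual stimuli repeating at slightly different rates drift in and out of synchrony at a third, interaction frequency, and the amplitude of the voxel's response at each frequency indexes its auditory, visual and interaction components. The TR, run length and stimulation rates are assumptions chosen for the demonstration, not the study's parameters.

```python
import numpy as np

# Illustrative sketch: auditory and visual stimuli repeat at slightly different
# rates, so they drift in and out of synchrony at a third "interaction"
# frequency (the difference of the two). The amplitude of a voxel's time course
# at each of these frequencies indexes its auditory, visual and interaction
# responses. All parameters below are assumptions for the demonstration.

TR = 2.0                                 # assumed repetition time in seconds
N_VOLUMES = 300                          # assumed run length in volumes
F_VISUAL = 0.10                          # assumed visual stimulation rate (Hz)
F_AUDITORY = 0.12                        # assumed auditory stimulation rate (Hz)
F_INTERACTION = F_AUDITORY - F_VISUAL    # rate of drifting in/out of synchrony


def band_amplitude(signal: np.ndarray, freq: float) -> float:
    """Amplitude of the Fourier component closest to `freq` (in Hz)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=TR)
    return spectrum[np.argmin(np.abs(freqs - freq))]


if __name__ == "__main__":
    t = np.arange(N_VOLUMES) * TR
    # Simulated multisensory voxel: responds to both streams and their interaction.
    voxel = (0.8 * np.sin(2 * np.pi * F_VISUAL * t)
             + 0.5 * np.sin(2 * np.pi * F_AUDITORY * t)
             + 0.3 * np.sin(2 * np.pi * F_INTERACTION * t)
             + 0.2 * np.random.randn(N_VOLUMES))
    for name, f in [("visual", F_VISUAL), ("auditory", F_AUDITORY),
                    ("interaction", F_INTERACTION)]:
        print(f"{name:12s} {f:5.2f} Hz  amplitude {band_amplitude(voxel, f):.1f}")
```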
 
 
Lab Members 
Master's degree students: Miri Guendelman, Sami Abboud and Danit Nativ.
 
Doctoral degree students: Lior Reich, Ella Striem-Amit, Uri Hertz, Haim Azulay, Zohar Tal, Noa Zeharia, Nadine Sigalov, Shachar Maidenbaum and Roni Arbel.
 
Post-doctoral Fellows and Visitors: Dr. Ilan Goldberg (MD, PhD), Dr. Daniel-Robert Chebat and Dr. Shelly Levy-Tzedek.
 
 
Selected Publications 
Amedi, A., Stern, W., Camprodon, J.A., Bermpohl, F., Merabet, L., Rotman, S., Hemond, C.C., Meijer, P., and Pascual-Leone, A. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience 10, 687-689.
 
Reich, L., Szwed, M., Cohen, L., and Amedi, A. (2011). A ventral visual stream reading center independent of visual experience. Current Biology 21, 363-368.
 
Striem-Amit, E., Dakwar, O., Hertz, U., Meijer, P., Stern, W., Merabet, L., Pascual-Leone, A., and Amedi, A. (2011a). The neural network of sensory-substitution object shape recognition. Functional Neurology, Rehabilitation and Ergonomics 1, 271-278.
 
Reich, L., Maidenbaum, S., and Amedi, A. (2012). The brain as a flexible task-machine: implications for visual rehabilitation using non-invasive vs. invasive approaches. Current Opinion in Neurology 25, 86–95.
 
Levy-Tzedek, S., Hanassy, S., Abboud, S., Maidenbaum, S., and Amedi, A. (2012a). Fast, accurate reaching movements with a visual-to-auditory sensory substitution device. Restorative Neurology and Neuroscience 30, 313-323.
 
Striem-Amit, E., Dakwar, O., Reich, L., and Amedi, A. (2012a). The large-scale organization of "visual" streams emerges without visual experience. Cerebral Cortex 22, 1698-1709.
 
Striem-Amit, E., Guendelman, M., and Amedi, A. (2012b). 'Visual' acuity of the congenitally blind using visual-to-auditory sensory substitution. PLoS ONE 7, e33136.
 
Maidenbaum, S., Arbel, R., Abboud, S., Chebat, D.R., Levy-Tzedek, S., and Amedi, A. (2012). Virtual 3D shape and orientation discrimination using point distance information. Presented at the 2012 International Conference Series on Disability, Virtual Reality and Associated Technologies.
 
Striem-Amit, E., Cohen, L., Dehaene, S., and Amedi, A. (2012c). Reading with sounds: Sensory substitution selectively activates the visual word form area in the blind. Neuron 76, 640-652.
 
Levy-Tzedek, S., Novick, T., Arbel, R., Abboud, S., Maidenbaum, S., Vaadia, E., and Amedi, A. (2012b). Cross-sensory transfer of sensory-motor information: visuomotor learning affects performance on an audiomotor task, using sensory-substitution. Scientific Reports 2, 949.
 
Zeharia, N., Hertz, U., Flash, T., and Amedi, A. (2012). Negative blood oxygenation level dependent homunculus and somatotopic information in primary motor cortex and supplementary motor area. PNAS 109, 18565-18570.
 