INCA

Context

Recent advances in brain-machine interfaces, and in particular in visual neuroprostheses for the blind, make it possible to envisage significant medium- and long-term adoption of these innovative assistive devices.
These neuroprostheses consist of an internal part, an array of electrodes implanted in the retina or the visual cortex, and an external part, a camera coupled to a processing unit and a stimulator. These devices allow blind people to perceive white spots, called phosphenes, over a limited portion of their visual field.
However, current implants have a very low resolution, which makes them difficult or even impossible to use if the electrical stimulation applied to the visual system directly reproduces the visual information captured by the camera. To make more effective use of these neuroprostheses, it seems wise to rely on artificial vision algorithms so as to present the implanted person with less information, but information that is highly relevant to the task being performed.
In recent decades, computer vision has come a long way, thanks to the development of new image processing algorithms and the increasing power of processors. It is now possible to locate and recognize objects, identify faces, and even read text, all in real time. We propose to evaluate different strategies for representing this high-level information in various everyday situations.
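As an illustration of the kind of off-the-shelf capability mentioned above, the sketch below detects faces in a webcam stream in real time using OpenCV's bundled pretrained Haar cascade. The camera index and display loop are illustrative assumptions and do not describe the project's actual processing pipeline.

```python
# Minimal real-time face detection sketch using OpenCV's bundled Haar cascade.
# The webcam index (0) is an assumption for illustration.
import cv2

# Load the pretrained frontal-face Haar cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces; returns (x, y, w, h) bounding boxes.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```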

As access to implanted patients remains very limited, many research groups study the possibilities offered by visual neuroprostheses through simulation experiments with sighted people (Simulated Prosthetic Vision, SPV). This is the approach taken in this project: sighted subjects wear a virtual reality headset that displays what a blind person would see if they were equipped with a visual neuroprosthesis (namely, white dots in their visual field).
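A minimal sketch of how such a simulation can be produced is given below: each camera frame is reduced to a coarse grid of white dots whose brightness follows the local image intensity. The grid size, dot radius and rendering choices are assumptions made for illustration; the project's actual simulator may differ.

```python
# Sketch of simulated prosthetic vision (SPV): a camera frame is reduced to a
# coarse grid of white dots ("phosphenes"). Grid size, dot radius and the
# square layout are illustrative assumptions.
import cv2
import numpy as np

def render_phosphenes(frame, grid=(10, 10), out_size=480, dot_radius=10):
    """Downsample a frame to `grid` cells and draw one white dot per cell,
    with brightness proportional to the mean intensity of that cell."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean intensity per grid cell (a coarse "electrode activation" map).
    activation = cv2.resize(gray, grid[::-1], interpolation=cv2.INTER_AREA)
    canvas = np.zeros((out_size, out_size), dtype=np.uint8)
    cell = out_size // grid[0]  # assumes a square grid
    for i in range(grid[0]):
        for j in range(grid[1]):
            level = int(activation[i, j])
            center = (j * cell + cell // 2, i * cell + cell // 2)
            cv2.circle(canvas, center, dot_radius, level, thickness=-1)
    return canvas
```

In such a setup, each webcam frame would be passed through render_phosphenes and the result displayed full-screen in the headset.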

Goals

The objective of this work is to improve the usability of low-resolution visual neuroprostheses (which concerns current models as well as most of the systems announced) by using high-performance image processing algorithms. In prosthetic vision simulation experiments, we propose to test multiple situations in which a higher-level approach to the presentation of visual information makes it possible to restore suitable “visuomotor” behaviors: detection of people or faces, recognition and localization of objects, text detection, navigation, etc.
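To make the idea of a higher-level presentation concrete, the sketch below maps detector output (bounding boxes, for example faces) onto a phosphene grid, so that only the locations of task-relevant objects are rendered rather than a degraded copy of the whole scene. The grid size and the binary activation scheme are illustrative assumptions, not the strategies that will actually be evaluated.

```python
# Sketch of a "higher-level" rendering strategy: only the locations of
# detected objects are converted into phosphene activations. Grid size and
# binary (all-or-nothing) activation are illustrative assumptions.
import numpy as np

def boxes_to_activation(boxes, frame_shape, grid=(10, 10)):
    """Map (x, y, w, h) detections in a frame of shape (height, width)
    onto a coarse phosphene grid: cells overlapping a detection light up."""
    h, w = frame_shape[:2]
    activation = np.zeros(grid, dtype=np.uint8)
    for (x, y, bw, bh) in boxes:
        # Convert the box extent to grid-cell indices (inclusive ranges).
        r0 = int(y / h * grid[0])
        r1 = min(int((y + bh) / h * grid[0]), grid[0] - 1)
        c0 = int(x / w * grid[1])
        c1 = min(int((x + bw) / w * grid[1]), grid[1] - 1)
        activation[r0:r1 + 1, c0:c1 + 1] = 255  # full brightness
    return activation
```

The resulting activation map can then be drawn with the same dot-rendering routine sketched above, so that the simulated user perceives where the relevant objects are instead of a degraded copy of the entire scene.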
