Application of the coherence scheme to the multisensory fusion problem
This chapter attempts to provide a common conceptual and computational framework for neurophysiologists and roboticists who face, despite their different motivations, the similar problem of combining several signals from sensors with various geometrical and dynamical properties. For animals and robots, motion is a fundamental source of information about their interaction with the environment. Animals (and now some robots) have at their disposal a dedicated sensory system for detecting their own 3D movement: the vestibular system. However, the vestibular organs fail to detect self-movement at low frequency and have to be complemented by other information sources such as vision, proprioception, or efference copies of motor commands. The visual system is particularly useful for estimating the displacement and 3D shape of other mobile objects, as well as the 3D structure of the environment. Many theoretical studies have attempted to account for the ability of biological organisms to perceive 3D movement, or to build robots that can move and avoid unexpected obstacles. One of the central questions in this context is the way in which the various signals are fused and, more generally, how the 3D processing of individual sensors may dynamically interact.
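The frequency-dependent division of labour described above (inertial sensing reliable at high frequency, vision reliable at low frequency) is commonly handled in robotics with a complementary filter. The sketch below is illustrative only, not the chapter's own scheme; the function name, blend constant, and signal names are all assumptions.

```python
def complementary_fuse(prev_estimate, rate, visual_estimate, dt, alpha=0.98):
    """Fuse a vestibular-like rate sensor with a slow visual reference.

    A minimal complementary-filter sketch (illustrative, not the chapter's
    coherence scheme): the integrated rate signal is trusted at high
    frequency, while the absolute visual estimate corrects low-frequency
    drift. `alpha` sets the crossover between the two paths.
    """
    # High-pass path: integrate the rate sensor (accurate short-term,
    # but any bias accumulates into low-frequency drift).
    integrated = prev_estimate + rate * dt
    # Low-pass path: pull the estimate toward the drift-free but slow
    # visual measurement.
    return alpha * integrated + (1.0 - alpha) * visual_estimate
```

With a biased rate sensor, the visual term bounds the drift: the steady-state error stays near `alpha * bias * dt / (1 - alpha)` instead of growing without limit as it would under pure integration.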