Spatial Cognition

Project

Project Acronym: SpaceCog

Funding: European Commission (FP7-FET proactive, Neuro-Bio-Inspired Systems (NBIS), Grant Agreement No. 600785)

Project duration: 01.03.2013 - 28.02.2016

Link in Cordis: http://cordis.europa.eu/projects/rcn/106239_en.html

Abstract

Humanoid robots will become important machines to support mankind if they develop capabilities similar to those of humans. One of those capabilities is to orient in space and to extract the relevant information from the environment. A common approach has been to build a spatiotopic map of the external world, a so-called internal world model. However, since the sensors, such as the eyes (cameras), are attached to the body, an updating problem occurs: after any action the input changes, and additional information about the position of the eyes, the posture, or the position in the external world is required to map the new sensory input onto an existing map of the world. As this information about sensor position is not error-free, internal world models are not always reliable. However, a large body of evidence suggests that humans do not maintain full maps of their external world. These representations are rather sparse, and evidence suggests that we extract the important information from the world just in time and keep track of only a few relevant aspects of a scene by means of attentive and memory processes. Humans know how to retrieve the necessary information rather than representing all of it in an internal world model. Thus, we aim to explore how humans solve this updating problem, by which mechanisms they keep track of important aspects, and how they extract the relevant information from the environment. This will be done by a combination of experimental investigations and computational modelling, and by the integration of the developed modules, leading to a human-like neural model of spatial orientation and attention in the context of eye, head and body movements. The model will be demonstrated as "neuroware" for a virtual human acting in a virtual reality.

 

Corollary Discharge

The brain has access to the outside world only through sensors that sense the environment in a non-allocentric coordinate system, for example the retinal view of a street scene in the city of Chemnitz shown at the top right. In order to allow for continuous perception and action, a corollary discharge (a copy of the planned motor command) can be used for internal updating of spatial representations prior to the action, and thus link higher-level representations, such as sparse allocentric maps, with eye-, head- and body-centred representations. Allocentric maps contain much less detail than the current visual view, as indicated in the lower part of the figure, which shows a world-centred view of the area around the street scene above. The project focuses on the construction and role of different coordinate systems and on how they interact with each other to allow for spatial cognition, such as the construction of a coherent world representation and its link to memory.
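As a concrete illustration of this updating step, the following minimal Python/NumPy sketch shifts remembered eye-centred target locations by a planned saccade vector taken from the corollary discharge, and chains eye-, head- and world-centred frames in a flat, purely translational 2-D geometry. The function names and the simplified geometry are illustrative assumptions, not part of the project's models.

import numpy as np

def remap_with_corollary_discharge(retinal_targets, planned_saccade):
    """Predictively update eye-centred target locations before a saccade.

    retinal_targets : (N, 2) array of target positions in eye-centred
                      (retinal) coordinates, in degrees.
    planned_saccade : (2,) saccade vector taken from the corollary
                      discharge (copy of the motor plan), in degrees.

    Because the retinal image shifts opposite to the eye movement, each
    remembered location is shifted by minus the saccade vector before
    the eyes actually move.
    """
    return np.asarray(retinal_targets) - np.asarray(planned_saccade)

def eye_to_world(retinal_target, eye_in_head, head_in_world):
    """Chain eye-, head- and world-centred frames (flat 2-D sketch):
    a retinal location plus eye and head position gives a world location."""
    return np.asarray(retinal_target) + np.asarray(eye_in_head) + np.asarray(head_in_world)

# Example: a target 5 deg right of fixation, saccade of 10 deg to the right.
print(remap_with_corollary_discharge([[5.0, 0.0]], [10.0, 0.0]))  # -> [[-5.  0.]]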



WP1 Saccadic and head remapping

When there is an impending eye movement, the visual system can anticipate where the target will be next. In preparation for this arrival, attention moves to the new location and starts to select information. This anticipatory shift around the time of a saccade is called "remapping" in the physiological literature. We have described this remapping as a shift of attention pointers that specify the locations of attended targets. When the eyes move, updating the attention pointers to their post-saccadic locations not only helps manage the uptake of information from these targets but also keeps track of where they are in the world. Specifically, we propose that the activations on the saccade/attention map not only specify saccade targets and guide attentional selection, but also specify perceived location. In the experiments proposed here we will characterize the properties of this predictive location shift using a motion task (Task 1.1) and extend this to shifts of the head (Task 1.2), where we assume that target locations are remapped based on corrections for head movement. Finally, we will adapt a previous neuro-computational model that explains our recent results on attention remapping and then extend it to the new results from Tasks 1.1 and 1.2 (Task 1.3). A minimal sketch of such a remapping step on a discrete attention map is given below.
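The following Python/NumPy sketch illustrates the basic operation assumed here: activation on a discrete retinotopic priority/attention map is shifted by minus the planned saccade vector, so that attended locations are represented at their post-saccadic positions before the eyes move. The map size, the wrap-around behaviour of np.roll and the function name are illustrative assumptions, not the model developed in Task 1.3.

import numpy as np

def remap_attention_map(priority_map, saccade_vector):
    """Shift a discrete priority/attention map by minus the saccade vector,
    so that activations appear at their post-saccadic (retinal) locations.

    priority_map   : 2-D array, one cell per retinotopic position.
    saccade_vector : (dx, dy) in map cells.
    """
    dx, dy = saccade_vector
    # np.roll wraps around; a real model would instead let activity
    # fall off the edge of the retinotopic map.
    return np.roll(priority_map, shift=(-dy, -dx), axis=(0, 1))

# A single attended target at column 12, row 8; saccade of 5 cells to the right.
m = np.zeros((16, 24))
m[8, 12] = 1.0
remapped = remap_attention_map(m, (5, 0))
print(np.unravel_index(np.argmax(remapped), m.shape))  # -> (8, 7)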



Task 1.1:
Predictive remapping of visual location for moving eyes.
Task 1.2:
Predictive remapping of visual location for moving head.
Task 1.3:
Revision and extension of a model for saccadic remapping.

Deliverable 1.1:
Report for motion tests of saccade remapping effects. (month 12)
Deliverable 1.2:
Report for modeling the remapping of attention pointers. (month 24)
Deliverable 1.3:
Report for motion tests of head remapping effects. (month 24)



WP2 Self-motion remapping

In normal daily interactions, the position of the retina changes not only due to eye or head movements but also due to body movements. Yet, even during such self-motion, we retain a sense of whether visual objects are stable or moving with respect to an earth-centric reference frame (Medendorp et al. 2011, for review). Achieving visual stability in these conditions is a complex process because visual signals are coded with respect to gaze, not in an earth-fixed reference frame. When the visual scene lacks earth-centric landmarks, the brain must distinguish which changes in retinal input result from real-world movement and which from its own movement. The usual view (see section 1.3) is that this is achieved by subtracting an extraretinal signal of the eye movement from the retinal image shifts. Experiments with head-fixed saccades suggest that efference copies of the outgoing motor command serve this purpose. Neurons in the frontal eye fields and the lateral intraparietal area demonstrate pre-saccadic shifts of receptive fields, elicited by an efference copy (Sommer and Wurtz, 2006). These shifts could allow the brain to anticipate and cancel out the changes in retinal input due to the saccade. Experiments in WP1 are performed under simplified laboratory conditions, with the body immobilized, testing how our behavioural repertoire is controlled by dynamic sensory and motor feedback, how it is modulated by context, and how it is shaped by past experience. But will the observed principles and mechanisms also apply to more complex real-life conditions? Outside the laboratory, in the real world, multiple sensory and motor systems must be used in synergy in order to achieve a stable representation of space in a dynamic setting. Here, in work package 2, we will address the question of how we use sensory information to remember and remap object information (location, orientation) under continuous body motion. The work package aims at the following two objectives: (i) understanding the computational algorithms for remapping of relevant object information during self-motion, thereby distinguishing the contributions of the various sensory modalities and cognitive signals; (ii) developing a neuro-computational model of self-motion remapping, using neurophysiologically grounded learning algorithms, that mimics the behavioural observations.
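A minimal Python/NumPy sketch of the remapping operation studied here: a remembered object location in body-centred coordinates is updated after a body translation and rotation, using an integrated self-motion estimate, and a simple reliability-weighted average stands in for the multisensory combination of that estimate. The 2-D geometry, the weighting scheme and the function names are illustrative assumptions, not the neural network model of Task 2.3.

import numpy as np

def remap_after_self_motion(obj_body, translation, rotation_deg):
    """Update a remembered object location (body-centred, 2-D) after the
    body translates and rotates; translation and rotation come from an
    integrated self-motion estimate (vestibular, proprioceptive, visual)."""
    theta = np.deg2rad(rotation_deg)
    # Express the same world point in the new body frame: remove the body
    # translation, then apply the inverse body rotation.
    R_inv = np.array([[np.cos(theta),  np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
    return R_inv @ (np.asarray(obj_body) - np.asarray(translation))

def fuse_self_motion(estimates, weights):
    """Cue combination across modalities as a reliability-weighted average
    of per-modality self-motion estimates (a common minimal assumption)."""
    estimates = np.asarray(estimates, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * estimates).sum(axis=0) / w.sum()

# An object 2 m ahead; the body moves 1 m forward: it should now be 1 m ahead.
print(remap_after_self_motion([0.0, 2.0], [0.0, 1.0], 0.0))  # -> [0. 1.]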

Task 2.1:
Testing remapping of point locations after and during self-motion.
Task 2.2:
Testing remapping of spatial orientation during self-motion.
Task 2.3:
A neural network model of self-motion remapping.

Deliverable 2.1:
Report on the experimental setup to investigate self-motion updating. (month 12)
Deliverable 2.2:
Report of sensory contributions to point and orientation remapping during body translations. (month 24)
Deliverable 2.3:
A neural network model of self-motion remapping. (month 32)



WP3 Dynamic Sampling

We also aim to explore, understand and simulate the temporal dynamics of remapping processes, focusing on visual remapping across saccades, although the corresponding mechanisms could equally well apply to other forms of remapping (head motion, self-motion, etc.). We rely on the notion of "attention pointers": the few salient, attended and/or task-relevant objects in the world for which the brain will make the effort of compensating for any damaging effect of saccadic eye movements. The key idea is to envision the representation of attention pointers as a dynamic process. Just as a winner-take-all on a saliency map isolates the current object of attention and periodically switches its focus to a new target, we assume that multiple attention pointers are represented sequentially, in successive cycles of an oscillation that can be recorded, for example, using EEG. Using EEG we will explore the oscillatory frequencies possibly involved in the remapping process, and using psychophysics we will verify that the "simultaneous" remapping of multiple attention pointers effectively involves a sequential process at this frequency. Finally, we will develop computational models for the concurrent representation of multiple attention pointers based on multiplexing principles, involving cross-frequency coupling between oscillations as well as spike-phase neuronal coding. We will then extract the core principles (e.g. multiplexing, sampling frequency) from this spiking-neuron model in order to render it applicable (i.e., rate-based and continuous) to the large-scale model that is the objective of WP5. Overall, the results of this work package should (i) characterize the dynamics of remapping in human observers, (ii) provide a descriptive model of these dynamics based on realistic spiking neural network models, and (iii) enhance the large-scale model built by the consortium by giving it a sampling dynamic similar to that observed in human subjects.
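To make the sequential-sampling idea concrete, the following minimal Python/NumPy sketch selects one attention pointer per oscillatory cycle with a winner-take-all and then transiently suppresses the winner so that later cycles visit the other pointers. The suppression factor (a simple stand-in for inhibition of return), the assumed rhythm and the function name are illustrative assumptions; the project's models will instead use spiking neurons and cross-frequency coupling.

import numpy as np

def sequential_sampling(saliencies, n_cycles, inhibition=0.5):
    """One attention pointer is selected per oscillatory cycle by a
    winner-take-all on the saliency values; the winner is then transiently
    suppressed so the next cycle visits another pointer.
    Returns the visiting order across cycles."""
    s = np.asarray(saliencies, dtype=float).copy()
    order = []
    for _ in range(n_cycles):
        winner = int(np.argmax(s))   # winner-take-all
        order.append(winner)
        s[winner] *= inhibition      # suppress the winner for later cycles
    return order

# Three pointers sampled over 6 cycles of an assumed ~7 Hz rhythm:
# each item is revisited roughly every 3 cycles.
print(sequential_sampling([1.0, 0.8, 0.6], n_cycles=6))  # -> [0, 1, 2, 0, 1, 2]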

Task 3.1:
Experiment 1: Is remapping periodic?
Task 3.2:
Experiment 2: Is remapping sequential?
Task 3.3:
Modelling 1: Multiplexing of attention pointers.
Task 3.4:
Modelling 2: Core principles.

Deliverable 3.1:
Report of periodic behavior in remapping processes. (month 12)
Deliverable 3.2:
Report of sequential behavior in remapping processes. (month 24)
Deliverable 3.3:
Full model of the temporal dynamics of attention pointers and saccadic remapping. (month 32)



WP4 Spatial memory

When we move around in space, movement-related interoceptive information and environmental sensory information have to be used to update one's internal representation of one's own location within the environment and of the locations of the objects around. We are specifically interested in spatial memory, i.e. internal representations that endure long enough to guide behaviour after a significant temporal delay or bodily movement. The length and duration of the movement are an important consideration here. Over short durations and small movements, egocentric representations can be updated relatively accurately using movement-related information ("path integration"), and allocentric representations may not be required. Over longer periods of time and longer movements, movement-related updating of egocentric representations becomes impractical, and knowledge of one's location relative to environmental information becomes important, for example when you find your way back to your house after a holiday. We have developed a neural-level model of spatial memory based on representations of environmental location by place cells and boundary vector cells in the medial temporal lobe, and representations of environmental orientation by head-direction cells. These allocentric representations interact with egocentric representations in medial parietal areas via an egocentric-allocentric translation process. While work packages 1-3 consider spatial updating of these egocentric representations, this work package considers spatial updating of the allocentric representations as the person moves through space. The model builds on recent support for its static representations (the discovery of the predicted "boundary vector cells") and on the recent identification of a likely neural basis for spatial updating in the medial temporal lobe (the discovery of grid cells). We will extend a previous computational model of spatial memory to include spatial updating of allocentric representations by interoceptive movement-related information. This will allow a comparison of the relative influence of interoceptive movement-related information and environmental sensory information in determining our sense of self-location within our environment. It will also add an important component that allows the model to provide more robust guidance for navigation in situations in which sensory (visual) information is absent or ambiguous.

We will experimentally assess the relative contributions of interoceptive motion-related and environmental visual inputs to memory for self-location. We will adapt our previous fully-immersive virtual reality paradigm for investigating path integration so as to compare the influences of internal self-motion information and visual information on memory for our starting location after a translational movement. These experiments will examine the effects of the length and duration of the movement, or of a delay after moving, on the relative influence of the two types of information on the return trajectory. A second manipulation will examine the nature of the visual environment: whether it contains discrete landmarks for orientation, extended objects or boundaries, or more ambiguous visual texture providing optic flow. If both experiments are successful, a final experiment will determine whether the two factors, i) delay and ii) visual content, interact (e.g. whether extended boundaries cause more enduring visual representations).
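The two computations discussed above, path integration and the egocentric-allocentric translation, can be sketched minimally in Python/NumPy as follows: an allocentric self-position estimate is updated from heading and speed signals, and an allocentric object location is converted into egocentric coordinates given that estimate. The 2-D geometry, noise-free signals and function names are illustrative assumptions, not the neural-level model of Task 4.1, which uses place, boundary vector, grid and head-direction cell populations.

import numpy as np

def path_integrate(position, heading_deg, speed, dt):
    """Update an allocentric self-position estimate from movement-related
    (interoceptive) signals: heading (e.g. from head-direction cells) and
    speed, over a small time step dt. In practice noise accumulates, which
    is why pure path integration becomes unreliable over long movements."""
    theta = np.deg2rad(heading_deg)
    step = speed * dt * np.array([np.cos(theta), np.sin(theta)])
    return np.asarray(position) + step

def allocentric_to_egocentric(obj_world, self_position, heading_deg):
    """Translate an allocentric object location into egocentric coordinates,
    given the current self-position and heading (the kind of
    egocentric-allocentric translation attributed to medial parietal areas)."""
    theta = np.deg2rad(heading_deg)
    R_inv = np.array([[np.cos(theta),  np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
    return R_inv @ (np.asarray(obj_world) - np.asarray(self_position))

# Walk eastwards for 2 s at 1 m/s from the origin, then locate a landmark at (4, 0).
pos = np.array([0.0, 0.0])
for _ in range(20):
    pos = path_integrate(pos, heading_deg=0.0, speed=1.0, dt=0.1)
print(pos)                                               # -> [2. 0.]
print(allocentric_to_egocentric([4.0, 0.0], pos, 0.0))   # -> [2. 0.] (2 m ahead)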

Task 4.1:
Computational model of spatial memory.
Task 4.2:
Experiments assessing the relative contributions of interoceptive motion-related and environmental visual inputs to memory for self-location.
Task 4.3:
Evaluation/comparison of model and experiment.

Deliverable 4.1:
Description of the immersive virtual reality experiments. (month 12)
Deliverable 4.2:
Report on head-direction and self-location models. (month 24)
Deliverable 4.3:
Report on path integration experiments, including comparison with model. (month 36)



WP5 Large-scale models of spatial cognition

Finally, we aim to build a large-scale model of spatial cognition by sequentially integrating the model parts. All models will be tested using an artificial agent operating in a virtual reality (VR). For the virtual reality environment we will use Unity 3D. The agent will be built as a virtual human capable of realistic saccades, head movements and body movements. Its early visual processing will take cortical magnification into account (high spatial resolution in the centre of the image). Thus, visual perception will be human-like as well.
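One of the core building blocks (Task 5.4, Deliverable 5.2) is a gain-field coordinate transformation. The following minimal Python/NumPy sketch shows the basic idea: units with eye-centred (retinal) tuning are multiplicatively modulated by eye position, and a downstream readout of this population yields a head-centred location. The Gaussian tuning, the hand-wired readout weights and the function names are illustrative assumptions; the project's implementation will instead learn these representations and run within the ANNarchy neurosimulator.

import numpy as np

def gain_field_population(retinal_pos, eye_pos, pref_retinal, pref_eye, sigma=5.0):
    """Firing rates of a 1-D population of parietal-like gain-field units:
    Gaussian retinal tuning, multiplicatively modulated by eye position
    (here also modelled as a simple Gaussian gain)."""
    retinal_tuning = np.exp(-(retinal_pos - pref_retinal) ** 2 / (2 * sigma ** 2))
    eye_gain = np.exp(-(eye_pos - pref_eye) ** 2 / (2 * sigma ** 2))
    return retinal_tuning * eye_gain

# Units tile all combinations of preferred retinal location and preferred eye position.
prefs_r, prefs_e = np.meshgrid(np.arange(-20, 21, 5), np.arange(-20, 21, 5))
prefs_r, prefs_e = prefs_r.ravel(), prefs_e.ravel()

def head_centred_readout(retinal_pos, eye_pos):
    """Read out the head-centred position (retinal + eye) as a weighted average
    over the population response; the readout weights (preferred retinal plus
    preferred eye position) are a hand-wired stand-in for learned weights."""
    rates = gain_field_population(retinal_pos, eye_pos, prefs_r, prefs_e)
    return np.sum(rates * (prefs_r + prefs_e)) / np.sum(rates)

print(head_centred_readout(retinal_pos=5.0, eye_pos=10.0))  # -> close to 15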

Task 5.1:
Software repository and documentation.
Task 5.2:
Maintenance of the neurosimulator "ANNarchy" for all partners.
Task 5.3:
Design of a virtual reality.
Task 5.4:
Learning gain-field representations in parietal cortex.
Task 5.5:
Combination of the model of the parietal cortex with a model of the ventral stream and oculomotor selection for eye movements and spatial attention.
Task 5.6:
Integration of parietal, ventral, oculomotor and hippocampal models for spatial cognition.
Task 5.7:
Revision of the model of spatial cognition using novel results.

Deliverable 5.1:
Neurosimulator "ANNarchy". (month 6)
Deliverable 5.2:
A model of gain-field neurons for coordinate transformation in parietal cortex. (month 24)
Deliverable 5.3:
Model of spatial cognition. (month 36)


