Start date: Mar 1, 2013
End date: Feb 29, 2016
Humanoid robots will become important machines in the support of mankind if they develop capabilities similar to those of humans. One such capability is to orient in space and to extract relevant information from the environment. A common approach has been to build a spatiotopic map of the external world, a so-called internal world model. However, since the sensors, such as the eyes (cameras), are attached to the body, an updating problem arises: after any action the sensory input changes, and additional information about the position of the eyes, the posture, or the position in the external world is required to map the new sensory input into the existing map of the world. As this positional information is not error-free, internal world models are not always reliable.

However, a large body of evidence suggests that humans do not maintain full maps of their external world. Their representations are rather sparse, and evidence suggests that we extract important information from the world just in time and keep track of only a few relevant aspects of a scene by means of attentive and memory processes. Humans know how to retrieve the necessary information rather than representing all information in an internal world model. We therefore aim to explore how humans solve the necessary updating and by which mechanisms they keep track of important aspects and extract relevant information from the environment. This will be done through a combination of experimental investigation and computational modelling, and through the integration of the developed modules into a human-like neural model of spatial orientation and attention in the context of eye, head and body movements. The model will be demonstrated as "neuroware" for a virtual human acting in virtual reality.
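The updating problem described above can be illustrated with a minimal 1-D sketch: a stationary object's world position is recovered by adding the current gaze angle to the object's retinal position, but the gaze estimate (e.g. from efference copy or proprioception) is noisy, so the remapped world position accumulates error across movements. The function names, the Gaussian noise model and the numeric values here are illustrative assumptions, not part of the project description.

```python
import random

def remap_to_world(retinal_pos, eye_pos):
    """Map a retinotopic location into spatiotopic (world) coordinates.
    In 1-D the world position is simply retinal position plus gaze angle."""
    return retinal_pos + eye_pos

def noisy_eye_estimate(true_eye_pos, noise_sd=1.0):
    """Internal estimate of gaze (efference copy / proprioception),
    corrupted by Gaussian noise -- an assumed, simplistic noise model."""
    return true_eye_pos + random.gauss(0.0, noise_sd)

# A stationary object at world position 10 deg, viewed across several eye movements.
random.seed(0)                                   # deterministic for the demo
true_object = 10.0
errors = []
for true_eye in [0.0, 5.0, -3.0, 8.0]:           # a sequence of gaze positions
    retinal = true_object - true_eye             # where the object lands on the retina
    estimate = remap_to_world(retinal, noisy_eye_estimate(true_eye))
    errors.append(abs(estimate - true_object))   # map error from the noisy update

print(errors)  # nonzero errors: the remapped position is never exactly veridical
```

Because every remapping step injects gaze-estimation noise, a spatiotopic map built this way drifts, which is one way to read the abstract's claim that internal world models are not always reliable.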