How do we manage to perceive the environment and to interact with it?
Scientists at the Max Planck Institute for Biological Cybernetics are dealing
with these fundamental questions. They are researching signal and
information processing in the brain and investigating which processes
are necessary to generate a consistent image of our environment
and suitable behaviour from a wide variety of sensory information.
Its tender pink blossoms, green leaves, long stem with its sharp thorns and unique aroma enable us to recognise a rose as a rose. We register this piece of the outside world with several senses at once and form or reinforce a mental image of it – a task that our brain performs constantly and with ease. But how do we manage to internalise things that exist outside our bodies? How do we perceive the world, and how do we then manage to interact with it? These are questions that three departments and various research groups at the Max Planck Institute for Biological Cybernetics are investigating. Using a range of very different approaches and methods, they are attempting to decode how the human brain processes information.
Scientists in the High-Field Magnetic Resonance Department at the Institute can already observe the thinking process to a certain extent, because functional magnetic resonance imaging (fMRI) makes our brain activity visible. The most detailed brain scans currently available can be produced with the Institute’s own 9.4-tesla scanner, one of the world’s strongest MRI systems for humans. The Department of Physiology of Cognitive Processes focuses on neuronal structures: a combination of psychophysical experiments, electrophysiology and MRI reveals where in the brain sensory perception is coded and processed. This makes it possible to continually refine the three-dimensional map of our brain that has been assembled over the past few decades. My Department of Human Perception, Cognition and Action, by contrast, concentrates on understanding the processes going on in the brain with the help of perceptual experiments, which address questions of object and face recognition, social interaction and spatial cognition. We are also working on transferring knowledge gained from human perception to the design and improvement of intelligent robots. We investigate these questions using psychophysical experiments as well as methods from systems and control theory, computer vision and virtual reality (VR), and with the help of new kinds of motion simulators.
Multisensory perception, as described in the introduction, is not strictly necessary for registering many things. We can – and often must – perceive things with only one sense, for example visually. At the same time, we often fail to recognise what a feat our brain achieves in performing even this task alone. Interpreting a single sensory stimulus already requires balancing various sources of information; in this case we speak of multi-modal information processing. In the human visual system, this is the information that lets us exploit shadows, textures, texture gradients and our capacity for stereo vision. We can use this process successfully partly because our brain always takes experience and prior knowledge of the nature of the world into account. It is only this ingenious inclusion of innate and learned prior knowledge in the process of perception that enables us to derive an unambiguous interpretation of the external world from ambiguous or incomplete information. Intuitively, we even possess a feeling for probabilities, which helps us to reject implausible interpretations of the world.
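The balancing of several noisy sources of information described above is often modelled in perception research as reliability-weighted averaging: each cue contributes in proportion to its inverse variance. The following is a minimal illustrative sketch of that idea (a textbook model, not the Institute's specific implementation; the variances are made-up numbers):

```python
def integrate_cues(mu_a, var_a, mu_b, var_b):
    """Combine two noisy estimates of the same quantity
    (e.g. depth from stereo and depth from texture) by
    weighting each cue by its reliability (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    mu = w_a * mu_a + w_b * mu_b
    # The combined estimate is more reliable than either cue alone:
    var = 1 / (1 / var_a + 1 / var_b)
    return mu, var

# A reliable stereo cue (variance 1.0) dominates a noisier
# texture cue (variance 4.0): the result lies close to the
# stereo estimate, at roughly 10.8 with variance 0.8.
mu, var = integrate_cues(10.0, 1.0, 14.0, 4.0)
```

The same weighting scheme extends naturally to prior knowledge: a prior expectation can be treated as just another "cue" with its own reliability, which is one way to formalise how learned assumptions disambiguate sensory input.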
Even investigating just one sense requires cleverly designed experiments to find out how the various signals are integrated. Our interest now focuses on experiments involving several senses, such as balance and vision, under realistic real-world conditions. Because perception and action constantly influence each other in reality, our aim is to design experiments in which every reaction in turn influences what is shown next. Highly specialised experimental setups are often necessary to achieve this.
For the first time, the relatively new method of virtual reality makes it possible to carry out such experiments under precisely controllable stimulus conditions in a closed loop of perception and action. Research in virtual reality offers the opportunity to document precisely, and manipulate specifically, such interaction cycles. This permits conclusions about how specific changes in stimuli influence both perception and the resulting behaviour. To exploit the potential of virtual reality technology to the full, the Cyberneum was constructed on the Max Planck Campus in Tübingen: a building with some 1,200 square metres of usable floor area which – as the name suggests – offers space for new developments in the field of virtual reality. That research in virtual environments should need square metres, or even cubic metres, of space may sound strange, but it is true: the building has to accommodate a great deal of the latest technology. The Cyberneum houses equipment such as two motion simulators, tracking systems and gigantic treadmills. The treadmills make projected virtual worlds feel very real and – above all – allow them to be walked through without limits. The many cameras of the tracking systems determine the position of the moving participants at all times and relate it to the visible virtual world. As another example, a novel motion simulator based on an industrial robot makes it possible – together with a display – to decouple the movements we feel from the movements we observe. In this way the various senses can be probed separately from each other and provide clues as to how the incoming information is represented in our brain. At the same time, our aim is to model human perception and behaviour and to test these models in order to predict probable reactions, for example in driving and flight simulation.
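At the core of such a closed loop of perception and action is a per-frame update: the tracked physical movement of the participant is applied to the virtual camera, and the experimenter can scale it to manipulate the visual consequences of self-motion. The sketch below illustrates this idea in its simplest 2-D form; the `Pose` class and the gain parameters are illustrative assumptions, not the Cyberneum's actual software:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Position and heading, in room or virtual-world coordinates."""
    x: float
    y: float
    heading: float

def update_virtual_pose(prev_phys, curr_phys, virt,
                        translation_gain=1.0, rotation_gain=1.0):
    """One step of a closed perception-action loop: the physical
    movement measured by the tracking system since the last frame
    is applied to the virtual camera. Gains other than 1.0 let an
    experimenter decouple felt from seen self-motion."""
    dx = (curr_phys.x - prev_phys.x) * translation_gain
    dy = (curr_phys.y - prev_phys.y) * translation_gain
    dh = (curr_phys.heading - prev_phys.heading) * rotation_gain
    return Pose(virt.x + dx, virt.y + dy, virt.heading + dh)

# Walking 1 m forward with a translation gain of 1.2 moves the
# virtual viewpoint 1.2 m: a mismatch whose detectability can
# itself be the object of a perceptual experiment.
new_virt = update_virtual_pose(Pose(0, 0, 0), Pose(1, 0, 0),
                               Pose(5, 5, 0), translation_gain=1.2)
```

Because each rendered frame depends on the participant's latest movement, every reaction really does influence what is shown next, as the text describes.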
Initial results already show that this method is contributing to improvements in motion simulators in the form of authentically reproduced flight scenarios, which will lead to more realistic training and consequently to greater safety for pilots and passengers. In the basement of the building, a few quadcopters – flying robots comparable to model helicopters – are learning to fly autonomously without colliding with one another. The aim here is to enable humans and robots to interact safely in environments they share. Our work is based on the vision that, in the future, humans and machines will work together seamlessly in shared spaces and that interactive machines will become part of our daily lives. The human brain already holds a rich store of solution strategies for a wide variety of problems and thus offers us the opportunities we wish to seize in order to gain access to the world of tomorrow.
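One simple way to keep several flying robots from colliding is a potential-field rule: each vehicle is pushed away from any neighbour that comes closer than a safety distance. The sketch below shows this classic idea in 2-D; the parameters and function are illustrative assumptions, not the controller actually used on the Institute's quadcopters:

```python
import math

def avoidance_velocity(own_pos, neighbour_positions,
                       safe_dist=2.0, gain=1.0):
    """Return a 2-D velocity command that pushes a vehicle away
    from every neighbour closer than safe_dist. The push grows as
    the neighbour gets closer and vanishes at the safety distance."""
    vx = vy = 0.0
    for ox, oy in neighbour_positions:
        dx, dy = own_pos[0] - ox, own_pos[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < safe_dist:
            push = gain * (safe_dist - dist) / dist
            vx += push * dx
            vy += push * dy
    return vx, vy

# A neighbour 1 m to the right (inside the 2 m safety distance)
# produces a command pointing straight away from it, to the left.
vx, vy = avoidance_velocity((0.0, 0.0), [(1.0, 0.0)])
```

In practice such a repulsive term would be combined with a goal-seeking term and run on every vehicle each control cycle, so that mutual avoidance emerges without any central coordinator.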
Born in 1950, the author, Heinrich Bülthoff, is Director of the Department of Human Perception, Cognition and Action at the Max Planck Institute for Biological Cybernetics in Tübingen. He is a specialist in object and face recognition, sensorimotor integration, spatial cognition and behaviour in virtual environments.