Feb 12th: Read Laura Spinney's new article on math

TRY this: close your eyes and touch your nose with your index finger. You know that you are touching yourself, but how do you know it?

A moment’s reflection suggests that your brain makes that deduction because it detects that the two incoming sensory signals, one from your finger and one from your nose, have their origins at the same point in space. That in turn means that action potentials zipping along the nerve fibres from finger and nose to brain contain more than one type of information: information about noseness and fingerness, and information about where those two appendages are in relation to one another.

 That in turn poses a dilemma that has exercised philosophers for centuries. If the brain is able to distinguish between spatial and non-spatial information, it must have a concept of space. How does it acquire such a concept?

 The question is important because space is key to how we make sense of the world, including how we differentiate “self” from “other”. Consider something as fundamental as how we recognise an object as an object. We observe that when the object moves, all parts of it move together, and that those parts behave in the same way in relation to each other—that is, cohesively—if the object remains stationary and the observer moves her eye over it instead.

 “Space has this quality that it’s universal,” explains mathematician Alexander Terekhov, a postdoc in the group of experimental psychologist Kevin O’Regan at Paris Descartes University in France. “It’s not an entity itself, but a set of laws that all entities share.” As such, he says, it offers the brain an elegant way of deducing a world of objects from the information it receives via its sensors.

 Back in the 18th century, the German philosopher Immanuel Kant suggested that our sense of space had to be hardwired because there was no way we could learn it through experience. Experiences being tied to places, he said, learning through experience assumes a pre-existing understanding of space. O’Regan and Terekhov think he was wrong, and that a brain could in fact deduce the existence of space from scratch, simply by detecting regularities in its neural inputs as its senses explore the world.

 One way to test their theory might be to study how a baby’s sense of space develops from birth, since there’s good evidence that children’s notion of space isn’t fully developed until puberty. A newborn baby has already spent nine months moving and receiving sensory feedback inside the womb, however, and it’s difficult to study the workings of its brain in there in any detail. So instead Terekhov has invented virtual “babies”, or agents as he prefers to call them—computer simulations he can study from the moment they come into existence.

 The simplest agent Terekhov has devised has an “eye” consisting of a few simulated photoreceptors—the light-sensitive cells that pack the retina at the back of the human eye—and a single muscle, so that it can move its eye and visually explore the virtual space, populated with virtual objects, that it inhabits. “They are pure mathematical constructs, and yet they have something in common with the simplest biological organisms,” he says. More complicated agents have more photoreceptors and more muscles, meaning they can interact with the world in more complex ways.

 Because these agents are computer simulations, Terekhov can analyse the data recorded by the photoreceptors and look for patterns. He has found that even the simplest agent can learn to extract the laws that define space, as represented by coincidences and transformations in those data. “It can learn, for example, that a cup and a bottle may be subject to the same laws of transformation—that is, move in the same way—without losing their identities,” he says. Such an agent can also learn to distinguish between movement in the world and its own movement, as exercised by its eye. It acquires these spatial notions in a matter of days, being constrained only by the power of the computer—though Terekhov acknowledges that biological organisms might take longer.
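The idea can be illustrated with a toy sketch. The code below is not Terekhov's actual model, just a minimal, assumed setup: a 1-D world of intensity values, an "eye" of three photoreceptors, and a single muscle that slides the eye one step at a time. The point it demonstrates is the one in the paragraph above: the agent's own movement induces the same lawful transformation on its sensor data whichever object it happens to be looking at.

```python
# A minimal sketch (not the published model) of a 1-D "seeing" agent:
# a 3-photoreceptor eye that a single muscle can slide along a 1-D world.

def make_world(objects, size=20):
    """Lay virtual objects (intensity patterns) into a 1-D world."""
    world = [0.0] * size
    for start, pattern in objects:
        for i, v in enumerate(pattern):
            world[start + i] = v
    return world

def sense(world, eye_pos, n_receptors=3):
    """Read the photoreceptors at the current eye position."""
    return tuple(world[eye_pos + i] for i in range(n_receptors))

# Two distinct objects (stand-ins for the "cup" and "bottle") at
# different places in the world.
cup, bottle = (0.2, 0.9, 0.2), (0.5, 0.5, 1.0)
world = make_world([(2, cup), (10, bottle)])

# Moving the eye one step right over either object changes the sensor
# data according to the same shift law: receptor i now reads what
# receptor i+1 read before the movement.
before_cup, after_cup = sense(world, 2), sense(world, 3)
before_bottle, after_bottle = sense(world, 10), sense(world, 11)

# The transformation is identical for both objects, even though the
# objects themselves differ -- a regularity the agent can detect
# without being told anything about "space" in advance.
assert after_cup[:2] == before_cup[1:]
assert after_bottle[:2] == before_bottle[1:]
```

Detecting that one regularity across all objects is, on this view, the seed of a concept of space: the law belongs to the agent's movement, not to any particular object.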

 He has also created more exotic agents that exist in the absence of space. The “piano organism”, for example, inhabits a world of sound. Crucially, however, it is unable to move. Think of it as an antenna or tuning fork fixed to the soundboard of a virtual piano, where the antenna responds to a certain pitch or frequency of vibration. Though the piano organism can’t move, it can alter the frequency to which its antenna responds and so scan the soundscape—just as the “seeing” agents can move their eyes and scan the landscape.

 Terekhov has found that, like the seeing agents, the piano organism is able to recognise entities before and after they have undergone a transformation—only now, the entities are not physical objects but musical chords, and the transformation is in pitch rather than space. “For this agent, objects become chords and the musical stave—the range of possible pitches—replaces space,” he says.
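Again, a toy sketch can make the analogy concrete. The following is an assumed illustration, not the published model: the antenna is a detector tuned to one pitch at a time, "scanning" means sweeping that tuning across the stave, and a chord is a set of semitone numbers. Transposing the chord shifts the agent's response profile without changing its shape, exactly as a translating object shifts across the seeing agent's retina while keeping its identity.

```python
# A minimal sketch (an illustration, not the published model) of the
# "piano organism": an immobile agent that scans the frequency its
# antenna responds to, the way the seeing agents scan space with an eye.

def antenna_response(chord, tuned_pitch):
    """1.0 if the chord contains the pitch the antenna is tuned to."""
    return 1.0 if tuned_pitch in chord else 0.0

def scan(chord, pitch_range=24):
    """Sweep the antenna across the whole stave and record responses."""
    return [antenna_response(chord, p) for p in range(pitch_range)]

def transpose(chord, semitones):
    """Shift every note of the chord by the same interval."""
    return {p + semitones for p in chord}

c_major = {0, 4, 7}              # C, E, G as semitone numbers
d_major = transpose(c_major, 2)  # the same chord moved up a tone

# To the scanning agent, transposition in pitch looks just like an
# object translating in space: the response profile shifts but keeps
# its shape, so the "object" (the chord) keeps its identity.
profile_c = scan(c_major)
profile_d = scan(d_major)
assert profile_d[2:] == profile_c[:-2]
```

The shift-invariance of the response profile is what lets the agent treat the stave the way the seeing agents treat space.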

 He and O’Regan conclude that it is possible to learn about space without any prior experience of it, and that this is a first, critical step in the development of our peculiarly human experience of the world—the “raw feel” of being alive that philosophers have dubbed the hard problem of consciousness, and whose origins they have so far failed to explain.

The findings could have major ramifications for the fields of artificial intelligence and robotics too. Though much progress has been made in the last 50 years, robots are still pretty bad at many things that humans do well, including object recognition. One reason for that, these researchers suggest, is that they are programmed to recognise an object by extracting the common features of many examples of that object. If space is not a feature of an object, however, but a set of laws that applies to all objects, then robots are missing a key element of object recognition—which might help to explain their poor performance.

 The good news, according to Terekhov and O’Regan, is that their experiments suggest that organisms can develop a sense of space de novo, as long as they are equipped with sensors that they can actively move around in the world. In theory, therefore, the next generation of robots could relate to the world very much more like humans do.

 
