Press

October 2018, A new approach to infuse spatial notions into robotics systems – Tech Xplore presents some achievements of the FEEL team in WP2 (Mathematics)


A new approach to infuse spatial notions into robotics systems

a) 1% of the 2500 exploratory arm configurations mi. b) Two 3D projections of 1% of the sets Mi embedded in the 4D motor space. c) Schematic of the projected manifold and capturing of external parameters. d) Projection in 3D of the 2500 manifolds Mi (gray points), with surfaces corresponding to translations in the working space for different retinal orientations. Credit: Laflaquière et al.

Researchers at Sorbonne Universités and CNRS have recently investigated the prerequisites for the emergence of simplified spatial notions in robotic systems, based on a robot’s sensorimotor flow. Their study, pre-published on arXiv, is part of a larger project in which they explore how fundamental perceptual notions (e.g. body, space, object, color, etc.) could be instilled in biological or artificial systems.

So far, the design of robots has mainly reflected the way in which human beings perceive the world. Designing robots guided solely by human intuition, however, could limit their perceptions to those experienced by humans.

To design fully autonomous robots, researchers might thus need to step away from human-related constructs, allowing robotic agents to develop their own way of perceiving the world. According to the team of researchers at Sorbonne Universités and CNRS, a robot should gradually develop its own perceptual notions exclusively by analyzing its sensorimotor experiences and identifying meaningful patterns.


“The general hypothesis is that no one gives perceptual notions to biological organisms,” Alexander Terekhov, one of the researchers who carried out the study, told TechXplore. “These concepts are instead developed over time, as useful tools that help them to make sense of the vast sensorimotor data they are constantly exposed to. As a consequence, a frog’s notion of space will most likely differ from that of a bat, which will in turn differ from that of humans. So when building a robot, what notion of space should we give it? Probably none of these. If we want robots to be truly intelligent, we should not build them using abstract notions, but instead, provide them with algorithms that will allow them to develop such notions themselves.”

Terekhov and his colleagues showed that the notion of space as environment-independent cannot be deduced from exteroceptive information alone, as this information varies greatly depending on what is found in the environment. The notion could be better defined by looking at the functions that link motor commands to changes in stimuli that are external to the agent.

“Important insight came from an old study by the famous French mathematician Henri Poincaré, who was interested in how mathematics in general and geometry in particular could emerge from human perception,” Terekhov said. “He suggested that the coincidence in the sensory input may play a crucial role.”

The agent can move its sensors in external space using its motor. Although the external agent configuration x can be the same, its sensory experience varies greatly depending on the structure of the environment. Credit: Laflaquière et al.

The ideas introduced by Poincaré can be better explained with a simple example. When we look at a given object, the eyes capture a particular image, which will change if the object moves 10 cm to the left. However, if we then also move 10 cm to the left, the image we see returns to almost exactly what it was before.

“This property seems miraculous if you think about how many receptors the human body has,” Terekhov said. “It is nearly impossible to have the same input twice in a lifetime, yet we constantly experience it. These low-probability events may be used by the brain to construct general perceptual notions.”

To apply these ideas to the design of robotic systems, the researchers programmed a virtual robotic arm with a camera at its tip. The robot noted the measurements coming from the arm’s joints every time it received the same visual input. “By associating all these measurements, the robot builds an abstraction that is mathematically equivalent to the position and orientation of its camera, even though it has no explicit access to this information,” Terekhov said. “The most important thing is that even though this abstract notion is learned based on the environment, it ends up being independent of it, and thus works for all environments; the same way our notion of space does not depend on the particular scene we see.”
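The coincidence-grouping idea can be sketched in a few lines of code. The toy below is only an illustration of the principle described in the article, not the authors' actual method: a simulated redundant arm explores random joint configurations in one fixed environment, and configurations that yield the same (coarsely quantised) sensor reading are grouped together; those groups turn out to correspond to external camera positions, even though positions are never given to the agent. The forward model, the beacon sensor and all names are assumptions made for this sketch.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
LINKS = np.array([0.5, 0.5, 0.5, 0.5])        # a 4-joint planar arm: redundant on purpose
BEACON = np.array([1.0, 1.2])                  # a fixed light source in the workspace

def tip_position(joints):
    """Planar forward kinematics: joint angles -> position of the camera at the arm's tip."""
    angles = np.cumsum(joints)
    return np.array([np.sum(LINKS * np.cos(angles)), np.sum(LINKS * np.sin(angles))])

def sensor_reading(joints):
    """Toy 'visual' input: coarsely quantised distance and bearing to the beacon.
    It depends on the joints only through the (hidden) external tip position."""
    d = BEACON - tip_position(joints)
    return (round(float(np.hypot(d[0], d[1])), 1), round(float(np.arctan2(d[1], d[0])), 1))

# Exploration phase: record joint configurations, indexed by the input they produce.
groups = defaultdict(list)
for _ in range(5000):
    joints = rng.uniform(-np.pi, np.pi, size=4)
    groups[sensor_reading(joints)].append(joints)

# Configurations that produced the same input correspond to (nearly) the same
# external camera position, although that position was never observed directly.
coincident = {s: js for s, js in groups.items() if len(js) > 1}
print(f"{len(coincident)} sensory coincidences among 5000 exploratory configurations")
example = next(iter(coincident.values()))
print("tip positions within one coincidence group:",
      np.round([tip_position(j) for j in example[:3]], 2))
```

In the paper itself the abstraction is built from the full visual flow rather than a hand-made beacon sensor; the sketch only shows why grouping motor states by coincident input recovers an external, environment-independent parameter.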

Applying the same principle in another study, the researchers successfully prompted a robot to compensate for an optical distortion caused by a lens placed in front of its camera. Typically, this would be attained by training algorithms on pairs of distorted and undistorted images.

“The tricky part of our study was that the robot had to complete this task by looking at the distorted images only, just as humans learn to compensate for the distortion introduced by eyeglasses,” Terekhov said. “We believe that the principles introduced by Poincaré, which are the basis of our algorithms, could be more general and are utilized by the brain at multiple levels. We are currently exploring the possibility of using these principles to build neural networks that do not suffer from catastrophic forgetting and can gradually accumulate knowledge.”

More information: Learning agent’s spatial configuration from sensorimotor invariants. arXiv:1810.01872v1 [cs.LG]. arxiv.org/abs/1810.01872

Henri Poincaré, Science and Hypothesis. www.gutenberg.org/files/37157/37157-pdf.pdf

Unsupervised model-free camera calibration algorithm for robotic applications. ieeexplore.ieee.org/document/7353799

Source : https://techxplore.com/news/2018-10-approach-infuse-spatial-notions-robotics.html

 

April 2018, The Guardian mentions Kevin O’Regan’s Why Red Doesn’t Sound Like a Bell among five books selected by Nick Chater to explore the mysteries of the mind

February 2018, Get lost easily? Where’s your inner compass? ERC=Science² describes some interesting results and new directions of the ERC FEEL project.

Our sense of superiority notwithstanding, the human senses are not the most sophisticated in the animal kingdom. Dogs can hear sounds at a pitch well out of our range. Dolphins use echolocation as a simple and effective SatNav device.

So could humans incorporate new senses? Yes, says Kevin O’Regan, an ERC-funded researcher at the University of Paris Descartes. One project on his list is fine-tuning our internal GPS system so that finding magnetic north becomes second nature.


In the lab with Dr. O’Regan (buckle up!)

To test whether people can learn to ‘feel’ North on a compass, Frank Schumann in O’Regan’s team seated blindfolded volunteers in a special chair, gave them headphones (linked to an iPhone) that played a waterfall sound when they faced North, and then began rotating the chair. He doesn’t say, in his published Scientific Reports paper, whether anyone got carsick.

 

This line of research was once seen as quackery, but has been attracting attention in recent years. Five years ago, scientists in the US identified neurons in the inner ear of pigeons which respond to the direction and intensity of magnetic fields. Then teams in the USA and the UK reported that humans, too, had a built-in homing device; we’re just not as adept at using it as pigeons are.

Now Frank Schumann and Christoph Witzel in O’Regan’s team have trained people to integrate a sense of magnetic north into their perceptual system using two smartphone apps called “hearSpace” and “naviEar”.

One group of people was given earphones enhanced with a geomagnetic compass. When they turned north, the pleasant sound of a waterfall could be heard from in front of them; the sound moved to the side and back as they turned away. And here’s the surprise: Soon people were so attuned to this new sense of direction that it became an integral part of their sense of orientation. Or, as O’Regan puts it: “We successfully integrated magnetic north into the neural system in the inner ear that underlies spatial orientation.”

Just to prove the point, the researchers then recalibrated the equipment. “We cheated by changing the direction of north as people turned. After 20 minutes of training, when we took off the earphones, people’s sense of space was all mucked up,” the researchers say, “and it even remained mucked up when we retested them a few days later.” This means that people not only integrated the north signal, but quickly came to trust the artificial magnetic sense more than their natural vestibular sensations.
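How such a heading-to-sound mapping might look can be sketched in a few lines. This is an illustration only: the actual hearSpace and naviEar apps are not reproduced here, and the specific gain law below is an assumption. The waterfall is loudest and centred when the wearer faces north, and slides to the side and fades as they turn away.

```python
import math

def waterfall_gains(heading_deg, north_deg=0.0):
    """Illustrative mapping from compass heading to left/right gains for a
    'waterfall at magnetic north' sound. Returns (left_gain, right_gain) in [0, 1]."""
    # Signed angle from straight ahead to north, wrapped into (-180, 180] degrees.
    delta = math.radians((north_deg - heading_deg + 180.0) % 360.0 - 180.0)
    level = 0.5 * (1.0 + math.cos(delta))   # loudest facing north, silent facing away
    pan = math.sin(delta)                   # -1 = fully left, +1 = fully right
    return round(level * (1.0 - pan) / 2.0, 2), round(level * (1.0 + pan) / 2.0, 2)

for heading in (0, 45, 90, 180, 270):
    print(f"heading {heading:3d} deg -> L/R gains {waterfall_gains(heading)}")
```

In the studies described above the signal was additionally rendered so that the sound stayed put in external space as the head turned, which is what lets it behave like a stable, distal sound object rather than a beep in the ear; the sketch captures only the basic directional mapping.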

Buzz for North

Companies are already getting in on the act. Cyborg Nest is selling a product called North Sense. It can be attached to the skin and gently vibrates when the user faces north. Rivals include wearable anklets that buzz when heading north – along with a host of smartphone apps that offer less invasive ways to find your way home.

And it needn’t stop with a compass. O’Regan thinks we could potentially find other ways to augment the suite of senses that comes preloaded in our heads. That, he says, would be “a first step towards the development of cyborgs.”

And that opens some interesting philosophical questions, about the difference between humans and robots. After all, O’Regan works at the university named after the man who wrote: “I think, therefore I am.” So if you’re up for some meditations, read on.

O’Regan starts with a simple question: What is it about a red patch of colour that makes us sense ‘red’ the way we do? Is it the way red patches excite the light receptors in our retinas? Yes, but why that particular experience? Is it the way nerves carry the signals to the brain? Yes, but what is it about those signals that mean ‘red?’ Is it the way the synapses in the brain cells respond to the signals from the retinas? Yes, but what is it about the synapses or the excitation patterns that would have anything to do with what we mean by ‘red?’

The more you think about it, the harder it is to say why red things feel red. There is, O’Regan says, “an infinite regress of questions” that get you nowhere. There’s “no way of making the link between physics and experience.”

What does it mean, to ‘feel’?

Trying to make machines that can feel the world – or emotions – is bound to involve some heavy philosophy. Here’s how O’Regan starts to explain it, in one of his academic presentations.

How to make a robot see red

So he takes a different approach. He notes that what we really mean by a sensory experience is what we do when we interact with the world while having that experience. So in the case of ‘red’, what we mean by the experience of ‘red’ is the particular way we interact with light reflected off a red object. That pattern can be described mathematically. It can be programmed into a robot as easily as you can teach a child the word for ‘red.’ The robot would see red, as defined in this precise manner.

In the same way, you could teach a robot to smell flowers, or sense the space around it. So long as the robot interacts in the correct way with its environment, it is feeling. The only thing lacking is that it be self-aware that it is feeling – but that, too, could be programmed.

O’Regan’s ERC grant allows him to explore what he calls this sensorimotor theory of sensation, on which he began work more than 15 years ago. According to this approach, our experience of the world is a product of how we interact with it, and it obeys a series of laws known as sensorimotor contingencies.

“Most people are looking for something in the brain that generates consciousness,” O’Regan says. “Perhaps they are looking for comfort in the idea that robots would never be able to feel in the way we feel.”

He believes this is a waste of time: there is nothing sufficiently special about humans that makes conscious robots an impossibility. Instead, we should focus on understanding how we really experience the world so that we can understand and enhance our ability to feel.

O’Regan says robots will soon eclipse human intelligence and will be able to perceive the world just as we do. “If we can explain feel – as a way of interacting with the environment – we can explain everything including emotions,” he says. “Will robots have emotions one day? Yes, it’s coming in the next 20 years.”

Source : https://www.sciencesquared.eu/how-do-you-feel

 January 2018, On the “feel” of things. The sensorimotor theory of consciousness (An interview with Kevin O’Regan) 

Kevin O'Regan explains the sensorimotor theory of consciousness in his interview with Cordelia Erickson-Davis. Source : O’Regan, K. & Erickson-Davis, C. (2018). On the “feel” of things: the sensorimotor theory of consciousness. An interview with Kevin O’Regan. ALIUS Bulletin, 2, 87-94.

February 2017, A magnetic sixth sense: The hearSpace smartphone app transforms human experience of space

A novel sensory augmentation device, the hearSpace app, allows users to reliably hear the direction of magnetic North as a stable sound object in external space through headphones.

For this, Schumann & O'Regan developed a new approach to sensory augmentation that piggy-backs the directional information of a geomagnetic compass on the ecological sensory cues of distal sounds, a principle they termed contingency-mimetic sensory augmentation. In what is potentially a breakthrough advantage, contingency-mimetics allows the magnetic augmentation signal to integrate into existing spatial processing via the natural mechanisms of auditory localisation.

Despite many suggestions in the literature that sensory substitution and augmentation may never become truly perceptual, their results, now published in the journal Scientific Reports, show that short training with this magnetic-auditory augmentation signal leads to long-lasting recalibration of the vestibular perception of space, either enlarging or compressing how space is perceived.

Source: Schumann, F., & O’Regan, J. K. (2017). Sensory augmentation: integration of an auditory compass signal into human perception of space. Scientific Reports, 7, 42197. http://www.nature.com/articles/srep42197

 

May 2016, EU Research, a journal with extensive experience of working with EU-funded projects, wrote an article about FEEL

The FEEL project is developing a new approach to the ‘hard’ problem of consciousness, pursuing theoretical and empirical research based on sensorimotor theory. We spoke to the project’s Principal Investigator J. Kevin O’Regan about their work in developing a fully-fledged theory of ‘feel’, and about the wider impact of their research

November 2015, Kevin O'Regan's talk on how a naive agent can discover space by studying sensorimotor contingencies. Presented at BICA 2015

Video: Kevin O'Regan, from Alexei Samsonovich on Vimeo.

June 2015, the French TV programme E=M6 broadcast Kevin O'Regan's explanation of magic and illusion in the first part of the video; in the last part there is Christoph Witzel and Carlijn Van Alphen’s experiment on the dress illusion

May 2015, Article by Laura Spinney about our work on infant development

Image courtesy of Serge Bertasius Photography at FreeDigitalPhotos.net


GIVE a 14-month-old baby a rake and show it a toy just out of its reach, and it will do one of a number of things. It might wave the rake about without making contact with the toy, for example, or it might drop the rake and point at the toy. What it won’t do is use the rake to bring the toy within its grasp. Not until around 18 months of age does a baby realise that the rake can function as a tool in this way—and then it does so quite suddenly.

At Paris Descartes University in France, developmental psychologist Jacqueline Fagard is interested in what it is that changes in the baby’s brain at that age, that allows it to learn that new behaviour. Another person who is interested in the answer to that question is experimental psychologist Kevin O’Regan, who works at the same university.

O’Regan believes the answer will throw light on how babies learn to understand the nature of space and objects—the building blocks, in his view, of the way we perceive ourselves and the world.

O’Regan’s interest in Fagard’s work dates back to 2008, when she began collaborating with Patrizia Fattori, a neurophysiologist at the University of Bologna in Italy who works on macaques. When Fagard tested Fattori’s macaques on the same task involving a rake and a toy that she had given to the babies, she found to her surprise that they never learned to use the rake as a tool. Whatever happens in babies at around 18 months of age, therefore, seems to set humans apart from other primates.

With their joint PhD student Lauriane Rat-Fischer, Fagard and O’Regan decided to investigate the stages in a baby’s development that lead up to that behavioural switch. They began by testing three groups of babies on the same task involving a rake. One group had the rake added to its cache of toys and was allowed to manipulate it, but was never shown how it could be used to bring the desired object within reach. Another was shown how it could be used for that purpose but not allowed to handle it, while a third, control group could play with the rake and sometimes saw others using it as a tool—as might happen in a “normal” baby’s environment.

The babies were all aged 14 months at the beginning of the experiment, which lasted six weeks. When they were tested at the end of it, those who had observed without practising were significantly better at using the rake as a tool than those who had practised without observing, who in turn showed no difference from the control group.

Fagard believes that a number of maturing skills come together in a baby at 18 months, to allow it to master this task. They include the ability to attend to more than one object at once, and the ability to learn by observation. The latter is what is missing, or fundamentally altered, in macaques, she says. Given the right training, these monkeys also learn to manipulate tools. But whereas a baby seems to grasp that it must reproduce the entire gesture it has observed, the macaque reproduces only the part of it that achieves the immediate required result—meaning that it fails to learn how the gesture might be adapted to other, similar situations. “Macaques emulate, they don’t imitate,” says Fagard.

The skills that mature in a baby at 18 months are laid down over the foundations of others that develop earlier, she thinks, and it’s this entire developmental trajectory that she and O’Regan would now like to parse out. Early on in pregnancy, for example, a fetus is already capable of moving its body in many and varied ways—behaviour known as “motor babbling”. The ability to detect that different movements have different consequences, or contingency, also seems to develop early on—though it is further refined with age. This has been demonstrated in experiments in which very young babies have a mobile attached to their arm by a string. Initially they move their entire bodies, making the mobile move and produce lights and sounds. But they quickly learn that they only need move the arm with the string attached to get the same result.

“There are two things that developmental psychologists have traditionally underestimated,” says Fagard, “The experience that the baby acquires in the womb, and the amount it learns by watching.”

But there may also be things that psychologists have overestimated in the past, she says. In one recent study, she and colleagues confronted babies with an object that they could only bring within reach by pulling on a string attached to it. Most babies learn to do this by 12 months of age, a finding that has led some researchers to conclude that they have grasped the notion of physical contiguity by then. However, babies as old as 16 months fail in a task where they are presented with several strings, only one of which is obviously attached to the object.

Why this odd disconnect? Rat-Fischer divided babies into those who had succeeded on the multiple string task, and those who had failed, and had both groups watch an adult perform it. Only the babies who had succeeded themselves correctly anticipated—as judged by where they looked—which string the adult should pull. Though a baby may already have a fairly well-developed sense of physical contiguity by 12 months, they concluded, it still has to have mastered the task itself before it can grasp the concept.

The group will continue to elucidate the various stages of a baby’s cognitive development, including how it learns the boundaries of its own body—the physical basis of “I”. It has been shown, for example, that a baby won’t touch or try to remove a foreign object—a plaster, say—stuck to its face until around eight months of age. At that point, however, a handful of trial-and-error gestures that bring it into contact with the contaminating object is sufficient to teach it that there is something on its body that needs to be removed, and from then on the appropriate response is included in its repertoire. It isn’t until much later, around 15 months, that the infant learns to recognise itself in a mirror, suggesting that the concept of self, like so much else, is assembled incrementally.

March 2015, Laura Spinney's article about the Colour work package of the FEEL project

WHERE does the feeling of redness come from? You might think it is generated inside your brain when it detects light of a certain wavelength. That’s the conventional and in some ways the instinctive view. But colour scientists have known for a long time that this is wrong.

Here’s why. The range of wavelengths of light that enter your eye when you look at a coloured object depends on two things: the wavelengths of the light illuminating that object, and the reflectance of the object’s surface (the wavelengths it reflects rather than absorbs). The wavelengths of the light entering your eye therefore change when you view the same coloured object under different lighting conditions.

How is your brain—that porridge-like lump of matter nestling inside the dark cavity of your skull—to know from the incoming light signals that it is looking at the same red surface under different lighting conditions? The answer is that it can’t. It needs some additional information. “Wavelength alone does not tell you anything about the colour of objects in the world,” says Kevin O’Regan, an experimental psychologist with the French National Centre for Scientific Research (CNRS) in Paris.

According to O’Regan, redness lies in the set of ways that light hitting a red surface is transformed into light reflected from that surface and entering the eye, under a range of different lighting conditions. That set of transformations is described by a set of laws—he calls them sensorimotor laws—and it’s when the brain detects the laws defining red, that a person has the sensation of redness.

Implicit in this sensorimotor theory of colour is the concept of action or motion. In order for the brain to detect the laws defining red, the person must explore the world visually, testing how the light coming into their eye changes as the light falling on the red surface changes. “The feel of colour lies in what you do and what happens to incoming light as you move coloured surfaces around in different illuminations,” says O’Regan.

This theory represents a radically new way of thinking about colour, and as a first step toward proving it, O’Regan and his team have been working hard over the last few years to try to identify the sensorimotor laws that define colour. With his former PhD student David Philipona, he started by making a mathematical analysis of the responses of photoreceptor cells on the retina when a coloured surface was viewed under different illuminations.

Humans have three types of photoreceptor or cone, each of which responds to a different band of wavelengths in visible light. For any light signal entering the eye, each of these three cone types is stimulated to a greater or lesser degree. In mathematical terms, their combined response can be described by a point in a three-dimensional space.

Philipona simulated the three cone types’ behaviour in a computer and found that, for most coloured surfaces, their combined output varied widely over the entire 3D space as he modified the illumination. Intriguingly, however, a handful of coloured surfaces produced a very different kind of response. For these colours, the combined output of the three cone types varied in a more restricted way across the range of illuminations. Mathematicians would say that a two- or even a one-dimensional space is sufficient to describe the retinal signal in the case of these oddball or “singular” colours.

What is really exciting about this finding is that it corroborates, and to some extent explains, what is known about colour language—the way that people distinguish colours using words.

Consider red again. There are many different shades that attract the label red, so colour scientists talk about a red “category”. The shades that English-speakers include in the red category may differ from those that speakers of other languages include in an equivalent category. However, if you ask speakers of different languages to pick out the shade they consider to be the most representative of each category—the reddest red, for example—they show a surprising degree of agreement. It turns out that these “focal” colours, which are recognised by a wide swathe of humanity, overlap to a large extent with the singular colours identified by Philipona in his mathematical analysis.

In real life, we are exposed to changing light all the time. Some of it is slow—the daily arc from dawn to dusk, for example—but some is instantaneous, as when we walk out of the dusk into an electrically lit house. Colours that appear the same even when viewed under different illuminations will clearly be useful for guiding us through the world.

“When it comes to language,” explains Christoph Witzel, a postdoc in O’Regan’s group, “It makes sense that people would use the colours that change in a simpler, more stable way as points of reference—as perceptual anchors.” A mother who wants to teach her child the concept of red, for example, will do so most effectively if she chooses a red that maintains its redness under the widest range of lighting conditions—the focal red, in other words.

Witzel is now building on Philipona’s work by investigating what it is about singular colours that makes them singular. Three things affect the sensorimotor laws that define colour: the light illuminating a coloured surface, the surface reflectance and the range of wavelengths to which each cone type is sensitive. Witzel has created computer simulations in which he can vary each one of these at a time, and he finds that the one that has the most influence on the laws—including the simplest of those laws, the focal colours—is reflectance. “The key determinant of colour sensation is not in the brain, but outside it—in the basic physics of light and surfaces,” he says.
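For readers who want a concrete picture of this kind of analysis, here is a deliberately simplified numerical sketch, not Philipona's or Witzel's actual code: the Gaussian cone sensitivities, illuminants and surfaces are invented for illustration. It computes the three cone responses to one surface under many illuminants, as illumination times reflectance integrated against each cone sensitivity, and then asks how the variance of the resulting cloud of 3D points spreads across its principal directions; a 'singular' surface is one whose cloud collapses onto one or two directions.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 31)                       # wavelengths in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy cone sensitivities (S, M, L) -- invented shapes, not measured cone fundamentals.
CONES = np.stack([gaussian(445, 30), gaussian(540, 40), gaussian(565, 45)])

def cone_responses(reflectance, n_illuminants=200):
    """3D cone responses of one surface viewed under many random smooth illuminants."""
    responses = []
    for _ in range(n_illuminants):
        illuminant = sum(rng.uniform(0, 1) * gaussian(c, 80) for c in (420, 550, 650))
        light_into_eye = illuminant * reflectance    # illumination x surface reflectance
        responses.append(CONES @ light_into_eye)     # integrate against each cone type
    return np.array(responses)

def variance_fractions(points):
    """Share of the cloud's variance along each of its principal directions."""
    centered = points - points.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False) ** 2
    return np.round(s / s.sum(), 3)

surfaces = {
    "broadband (grey-ish) surface": np.full_like(wl, 0.6),
    "narrowband (red-ish) surface": gaussian(630, 15),
}
for name, reflectance in surfaces.items():
    print(name, "->", variance_fractions(cone_responses(reflectance)))
```

Swapping in different reflectances, illuminant families or cone curves, one factor at a time, reproduces in spirit the kind of manipulation described above, showing which ingredient most strongly shapes the laws.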

The FEEL project has only just got underway, and there are many more mysteries Witzel and O’Regan would like to probe when it comes to colour—not least, whether babies who haven’t yet learned colour language still recognise the focal colours, and whether animals whose retinas are packed with cones that respond to different wavelengths detect different singular colours. The answers to these and other questions will begin to throw light on how differently we experience the world from other species, and from our own younger selves.

March 2015: Here is our explanation of why the famous photo of "The Dress" looks utterly different to different observers.

February 2015, Laura Spinney's article about the Mathematics work package of the FEEL project

TRY this: close your eyes and touch your nose with your index finger. You know that you are touching yourself, but how do you know it?

A moment’s reflection suggests that your brain makes that deduction because it detects that the two incoming sensory signals, one from your finger and one from your nose, have their origins at the same point in space. That in turn means that action potentials zipping along the nerve fibres from finger and nose to brain contain more than one type of information: information about noseness and fingerness, and information about where those two appendages are in relation to one another.

That in turn poses a dilemma that has exercised philosophers for centuries. If the brain is able to distinguish between spatial and non-spatial information, it must have a concept of space. How does it acquire such a concept?

The question is important because space is key to how we make sense of the world, including how we differentiate “self” from “other”. Consider something as fundamental as how we recognise an object as an object. We observe that when the object moves, all parts of it move together, and that those parts behave in the same way in relation to each other—that is, cohesively—if the object remains stationary and the observer moves her eye over it instead.

“Space has this quality that it’s universal,” explains mathematician Alexander Terekhov, a postdoc in the group of experimental psychologist Kevin O’Regan at Paris Descartes University in France. “It’s not an entity itself, but a set of laws that all entities share.” As such, he says, it offers the brain an elegant way of deducing a world of objects from the information it receives via its sensors.

Back in the 18th century, the German philosopher Immanuel Kant suggested that our sense of space had to be hardwired because there was no way we could learn it through experience. Experiences being tied to places, he said, learning through experience assumes a pre-existing understanding of space. O’Regan and Terekhov think he was wrong, and that a brain could in fact deduce the existence of space from scratch, simply by detecting regularities in its neural inputs as its senses explore the world.

One way to test their theory might be to study how a baby’s sense of space develops from birth, since there’s good evidence that children’s notion of space isn’t fully developed until puberty. A newborn baby has already spent nine months moving and receiving sensory feedback inside the womb, however, and it’s difficult to study the workings of its brain in there in any detail. So instead Terekhov has invented virtual “babies”, or agents as he prefers to call them—computer simulations he can study from the moment they come into existence.

The simplest agent Terekhov has devised has an “eye” consisting of a few simulated photoreceptors—the light-sensitive cells that pack the retina at the back of the human eye—and a single muscle, so that it can move its eye and visually explore the virtual space, populated with virtual objects, that it inhabits. “They are pure mathematical constructs, and yet they have something in common with the simplest biological organisms,” he says. More complicated agents have more photoreceptors and more muscles, meaning they can interact with the world in more complex ways.

Because these agents are computer simulations, Terekhov can analyse the data recorded by the photoreceptors and look for patterns. He has found that even the simplest agent can learn to extract the laws that define space, as represented by coincidences and transformations in those data. “It can learn, for example, that a cup and a bottle may be subject to the same laws of transformation—that is, move in the same way—without losing their identities,” he says. Such an agent can also learn to distinguish between movement in the world and its own movement, as exercised by its eye. It acquires these spatial notions in a matter of days, being constrained only by the power of the computer—though Terekhov acknowledges that biological organisms might take longer.
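To convey the flavour of such agents, here is a minimal sketch: an illustration in the spirit of the article, not Terekhov's actual simulations, and the one-dimensional world, the five-receptor 'retina' and all parameters are assumptions. The agent's only motor command is its eye position, and it hunts for coincidences, cases where a shift of the world is exactly undone by one of its own movements. The regularity it can extract, that a shift of the world by d is compensated by shifting the eye by d, holds whatever the world happens to contain, which is the kind of environment-independent law described above.

```python
import numpy as np

rng = np.random.default_rng(2)
RECEPTORS = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])   # offsets of a tiny 1D 'retina'

def sense(eye_position, lights):
    """Each photoreceptor sums the light reaching it from point sources in a 1D world."""
    positions = eye_position + RECEPTORS
    return np.exp(-((positions[:, None] - lights[None, :]) ** 2) / 0.05).sum(axis=1)

lights = rng.uniform(-1.0, 1.0, size=6)              # one particular environment
motor_commands = np.round(np.arange(-0.5, 0.51, 0.05), 2)
baseline = {m: sense(m, lights) for m in motor_commands}

# Coincidence hunting: after the world shifts, which motor command reproduces
# the input that some other command used to produce before the shift?
compensations = set()
for world_shift in np.round(np.arange(-0.3, 0.31, 0.05), 2):
    shifted = lights + world_shift
    for m in motor_commands:
        reading = sense(m, shifted)
        for m0, before in baseline.items():
            if np.allclose(reading, before, atol=1e-8):
                compensations.add((float(world_shift), float(np.round(m - m0, 2))))

# The regularity the agent can extract: a shift of the world by d is undone by
# shifting the eye by the same d -- a law about space, not about the lights.
print(sorted(compensations))
```

Rerunning the sketch with a different random set of lights yields the same list of compensating pairs, which is exactly why such regularities can serve as a definition of space that does not depend on the contents of the environment.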

He has also created more exotic agents that exist in the absence of space. The “piano organism”, for example, inhabits a world of sound. Crucially, however, it is unable to move. Think of it as an antenna or tuning fork fixed to the soundboard of a virtual piano, where the antenna responds to a certain pitch or frequency of vibration. Though the piano organism can’t move, it can alter the frequency to which its antenna responds and so scan the soundscape—just as the “seeing” agents can move their eyes and scan the landscape.

Terekhov has found that, like the seeing agents, the piano organism is able to recognise entities before and after they have undergone a transformation—only now, the entities are not physical objects but musical chords, and the transformation is in pitch rather than space. “For this agent, objects become chords and the musical stave—the range of possible pitches—replaces space,” he says.

He and O’Regan conclude that it is possible to learn about space without any prior experience of it, and that this is a first, critical step in the development of our peculiarly human experience of the world—the “raw feel” of being alive that philosophers have dubbed the hard problem of consciousness, and whose origins they have so far failed to explain.

The findings could have major ramifications for the fields of artificial intelligence and robotics too. Though much progress has been made in the last 50 years, robots are still pretty bad at many things that humans do well, including object recognition. One reason for that, these researchers suggest, is that they are programmed to recognise an object by extracting the common features of many examples of that object. If space is not a feature of an object, however, but a set of laws that applies to all objects, then robots are missing a key element of object recognition—which might help to explain their poor performance.

The good news, according to Terekhov and O’Regan, is that their experiments suggest that organisms can develop a sense of space de novo, as long as they are equipped with sensors that they can actively move around in the world. In theory, therefore, the next generation of robots could relate to the world very much more like humans do.

December 2014, Laura Spinney's article about the "Philosophy" work package of the FEEL project

What’s missing is an account of the phenomenal experience of perception—what it feels like. With its emphasis on our active engagement with the world, the sensorimotor theory aims to fill that gap by providing a language with which we can articulate what our experiences are like. 


In a speech called “This is water”, American writer David Foster Wallace recounted the story of a fish who greets two younger fish with the words, “Morning boys, how’s the water?” The two young fish swim on for a bit before one says to the other, “What the hell is water?” Wallace’s point was that we are so well-adapted to our medium that we don’t perceive it any more. If we want to understand what it means to be human, we need to step outside it.

There is a long tradition of researchers doing just that, to try to understand the nature of perception. In the late 19th century, for example, American psychologist George Stratton sported a strange tube over one eye (the other eye was covered) that inverted the world left-right and up-down, to see how this radically altered world would look to him—and, importantly, if he could adapt to it. He found he could, albeit slowly and with difficulty, and he wrote up a detailed account of his experience, one interesting aspect of which was that different elements of the visual scene “righted” themselves at different times and in different contexts.

Image: inverting glasses

Others have performed similar experiments using slightly different approaches and reporting different results. Some describe partial or gradual adaptation, others wholesale flipping of the visual scene, so it’s still far from clear if there is one human way of relating to the world, or many. To help throw more light on the question, in 2011 the philosopher Jan Degenaar decided to perform his own “fish-out-of-water” experiment, and donned inverting glasses.

His glasses, which actually look more like goggles, place a right-angled prism in front of each eye, thereby inverting the left and right sides of space. He wore them for four hours a day, on average, for 31 days. To begin with, he saw double and felt nauseous. He would repeatedly fail to grasp an object he could see, finding it impossible to correct for the visual inversion. But the most disturbing aspect of all, he says, was his sense of instability: each time he moved his head, the visual scene rushed past him and he couldn’t track anything in it.

Gradually, he adapted. By day four, he was able to cook a simple meal. By day 13, the visual instability had gone away. Two days after that, he ventured out into the streets of Groningen, where he was a PhD student at the time, armed with a white stick for his own protection and that of passers-by. On the 30th day something strange happened: he could be looking at a scene, not moving his head or eyes, and he would suddenly be aware of a change in it, even though nothing had moved. A few seconds later it changed back to what it had been. “It was like a Gestalt switch, like one of those ambiguous images—the duck/rabbit or Necker cube—that you switch between seeing in different ways,” he says. Being able to switch between two possible ways of seeing the world felt both natural and exhilarating.

Image: the duck/rabbit ambiguous figure

There are parallels in his experience to the thought experiment described in the previous post, in which a person’s sensory input is artificially modified when they look at an object of a certain colour. Initially, this person would have a dramatically different colour experience from a neighbour looking at the same object who had not been subjected to the modification. With time, however, her brain would “tune” itself to the new relationship between her eye movements and the resulting signals registered on her retina. Learning that it was the same as before, she would revert to seeing the object as the same colour as before.

Likewise, says Degenaar, with time one adapts to an inverted world and learns to move deftly through it. A conventional way of thinking about perception is to invoke an internal picture of the world that is created in our minds. If that were the case, then that adaptation would require the internal image to flip back to its correct orientation. Degenaar believes something more subtle and complex is happening.

It turns out that we don’t have equal mastery of all the methods at our disposal for visually exploring the world. So for example, turning his head towards an object was difficult when wearing the inverting glasses, but he could move his eyes to it with ease, as long as he kept his head still. According to the sensorimotor theory, perception is about the relationship between movement and the resulting sensory stimulation. The theory predicts that adaptation of visual experience would coincide with the regaining of exploratory skills—which is what happened. Degenaar’s visual experience “corrected” itself in piecemeal fashion, depending on which exploratory method he was using.

Interestingly, once he had adapted and could again see where things were in relation to himself and to each other, he still felt that he was perceiving the world differently from how he perceived it without the glasses. In other words, though the informational content he was receiving was the same, the quality of the experience—how it felt—was different. The reason, he thinks, is that the way you engage with the world in order to sense it—by moving your head, your eyes and so on—is as much a part of perception as the knowledge it affords. “Your visual experience as a whole is a combination of all these things,” he says.

Degenaar published his findings in the journal Phenomenology and the Cognitive Sciences in 2013. Self-experiment is clearly an invaluable tool for probing the phenomenal aspects of perception, but a sample of one is small, and he thinks it would be interesting to repeat his experiment with a larger number of volunteers, having them describe their experiences while observers looked for changes in their behaviour that might correspond to stages in their perceptual adaptation.

His ultimate goal, with O’Regan and other members of the FEEL group, is to understand how the brain mediates sensorimotor interactions and underpins perceptual adaptation, rather than generating an internal picture of the world, and also how those relatively low-level sensorimotor interactions relate to abstract thought—the capacity for which is after all one of the things that defines humans. For now, at least, he can say one thing: he knows how the water feels.

October 2014, Laura Spinney, science journalist, wrote several pieces about the project. The first one is an introduction  that gives an interesting overview of the sensorimotor theory

IT’S A DILEMMA beloved of barstool philosophers everywhere: imagine that, when you and I both look at a yellow flower such as a marigold, you see blue and I see yellow. If we both describe the marigold as yellow, and if we have both grown up making yellow-type associations whenever we saw marigolds, would there be any way of telling that our perceptual experiences of the flower’s colour were fundamentally different? 

It’s called spectral inversion, and the British philosopher John Locke proposed it as a thought experiment in 1689. Latter-day philosophers picked holes in his version and subtly reformulated the dilemma, but little work has been done to test whether it might be possible in the real world. Testing spectral inversion in the lab is one of the goals of FEEL, a new European Research Council-funded project led by Kevin O’Regan, former head of the Laboratory of the Psychology of Perception at Paris Descartes University in France.

FEEL is an attempt to tackle the hard problem of consciousness—what we mean when we talk about the sight of yellow, the sound of a bell, the taste of an onion or the softness of a sponge. What constitutes these “raw feels”, and how do we become conscious of them? For a long time—ironically, under the influence of the philosopher who lent his name to O’Regan’s university—the dominant idea among those who study the brain has been that sensation is generated inside that organ; that it is purely the product of activity in neural networks.

O’Regan’s research has led him to doubt that idea, and to ask the following question: how can activity in the brain give rise to the colour sensation we get when we look at a marigold, and how does one explain that this activity should give rise to the sensation of yellow, rather than the sensation of blue or of any other colour in the visible spectrum? How, in short, does the brain produce “feel”?

O’Regan believes the brain can’t do this, on its own, and that to locate the cause of feel in brain activity is a logical impossibility. That’s because, each time a scientist points to a neural mechanism that might encode a feel such as colour—a certain pattern of brain activity, say, or the activity of a neurotransmitter at a certain type of synapse—the question remains, why should that activity pattern or neurotransmitter correspond to one type of feel rather than another, to yellow rather than blue?

Scientists have been lured into that logical impasse by language, O’Regan says, and he gives the analogy of life. For centuries, our best thinkers believed life to be a substance—some kind of magical essence that animate objects had and inanimate objects didn’t. Then they shifted their frame of thought, realising that life could be better defined as the way that an object—a living organism, in this case—interacted with its environment.

Likewise, O’Regan suggests, feel does not lie in the brain, nor is it something that can be generated by the brain. It lies in the interaction of the brain with its environment. That interaction is in turn governed by what he calls sensorimotor laws. For example, the feeling of redness is in the set of ways that light hitting a red surface is transformed into light reflected from that surface and entering the eye, under a range of different lighting conditions. We sense redness when we know that, if we were to move our eyes or the surface we are looking at, the stimulation by light of the photoreceptors on our retina would change in a way that corresponds in a predictable fashion to our past experience of redness under similar conditions.

Importantly, according to the sensorimotor theory of feel, as O’Regan calls his approach to consciousness, the brain still plays a role and a critical one at that. Its role isn’t to generate feel, however, but to learn the relationship between eye movements and the resulting light changes—in the case of colour—and to become attuned to the mathematical relationship, or law, that defines that relationship for the colour in question.

The five work packages of FEEL will, over the next four years, explore different aspects of this theory as it relates to our conscious, sensory experience of the world, and we’ll describe those work packages in more detail in future posts.

Returning for now to the example of colour, imagine that you could devise some experimental trick to modify the sensory input a person’s brain receives when they look at an object of a certain colour. If you had two individuals sitting side-by-side in a lab, looking at the same marigold—one of whom had been subjected to the trick while the other hadn’t—the sensorimotor theory would predict that initially, they would have radically different colour experiences.

But then what? Over time, the theory predicts, the brain of the one who had been subjected to the trick would adapt to the new relationship that had been imposed on it. It would register that the laws that govern the light changes detected by the retina when the eyes explore the flower are the same as before, even though the resulting neural activation is different. That person would go back to seeing the flower as yellow.

This is the scenario the FEEL team would like to create in the lab, because they have many unanswered questions about it. Would colour experience return fully to normal, for example, or only partially? Would something similar happen in other senses? Would such an experiment also elucidate how newborn babies come to perceive the world, including their own bodies? To find out the answers to these and other questions, watch this space—and prepare for a rollercoaster ride into the hard problem of consciousness.

September 2014, Aldebaran Robotics have put on YouTube their first "A-talk" (modelled on TED talks), in which Kevin O'Regan talks about the sensorimotor theory and claims that robots will soon be conscious

 

 

February 2013, Workshop on Conceptual and Mathematical Foundations of Embodied Intelligence, Max Planck Institute for Mathematics in Sciences, Leipzig Germany 

A theoretical basis for how artificial or biological agents can construct the basic notion of space

(joint work with Alban Laflaquière, Alexander Terekhov)

May 2012, “Je tâte donc je suis” (“I touch, therefore I am”), Podcast Recherche en Cours

http://www.rechercheencours.fr/REC/Podcast/Entrees/2012/5/25_Je_tate_donc_je_suis.html

March 2012, Interview with Paul Verschure, Convergent Science Network Podcast

http://csnetwork.eu/podcast/?p=episode&name=2012-03-07_interview_kevin_oregan.mp3
