How does the brain represent the world?

A brain that gets brushed!

A baby looks at a brush. The object has no meaning for her. She begins to sketch one when her mother picks it up to smooth her hair. Years later, the brush belongs to a rich mental universe of utensils with well-defined functions. It presents itself to the consciousness of the baby, now a young girl, when she gets ready in the morning. What happened in the brain? How did it come to own the meaning of the object and to connect it to other meanings?

The baby’s brain contains more connections than the adult’s. To represent is to prune some of them. Imagine that these connections are the pixels of a screen. The baby’s brain is a screen filled with white noise, pixels switched on at random. A few large, simple shapes stand out: the instinctive reflexes. Pixels gradually switch off. Drawings appear on the screen. They grow more and more complex and become designs. After years of maturation, the scene of the adult brain appears on the screen, as detailed as a painting by Caravaggio.
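Purely as an illustration of this metaphor, here is a toy sketch in Python (grid size, drop rate and target pattern are all invented): a noisy "screen" of randomly active pixels is progressively pruned toward a simple figure, the way exuberant infant connections are pruned toward mature circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target "adult scene": a simple diagonal figure standing in for mature circuitry.
size = 8
target = np.eye(size, dtype=bool)

# Infant screen: far more active pixels (connections) than the target needs.
screen = rng.random((size, size)) < 0.9

# Pruning: at each step, switch off a few active pixels the target does not use.
for step in range(20):
    superfluous = np.argwhere(screen & ~target)
    if len(superfluous) == 0:
        break
    batch = superfluous[rng.choice(len(superfluous),
                                   size=min(5, len(superfluous)),
                                   replace=False)]
    screen[batch[:, 0], batch[:, 1]] = False

print("active pixels left:", int(screen.sum()), "for a target of", int(target.sum()))
```

The sketch only removes connections, never adds them, which is the whole point of the pruning image: the figure was latent in the noise and emerges by subtraction.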

To go further

This simple explanation is continually being enriched. Mechanisms other than dendritic connections are involved: synaptic weights, synchronization of firing, neuro-hormonal modulation, and so on. These levels of information are entangled; none is independent of the others. Yet neuroscience, despite its progress, still fails to explain two essential points: 1) How does such complexity not collapse into chaos? 2) What uses the representations (the homunculus problem)? Or, if there is no homunculus, how do representations become intentions?

1) How does the brain not sink into chaos?

The parameters we have just mentioned, if they evolved freely, could trigger any reaction whatsoever in the brain. Yet our behaviour is always consistent. Even the insane person has her own consistency, out of the ordinary as it may be. So there is some form of feedback control. Stranger still from a neural point of view, it is abstractions, not physical parameters, that exert this control. The meaning of acts is judged in terms of concepts, not of dendritic connections or biochemistry. How do concepts, virtual entities, modify neural connections?

2) How do representations become intentions?

That neural images grow more complex can be modeled. In an AI, artificial neurons do the same thing by deepening successive levels of analysis. But the result is read by the designer of the AI. She is the one who adapts the program accordingly. The intention is not in the program, whereas it clearly is in the brain. The hypothesis of a supervisory center, for example the prefrontal cortex, only shifts the problem: intentions do not fall from the sky. How are they formed from the simple descriptions provided by the other centres?
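As a minimal sketch of the analogy (not of any particular AI system; layer shapes and weights are arbitrary), each layer below integrates the previous one into a deeper description, but the final number means nothing until the human who wrote the script interprets it:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w):
    """One level of analysis: integrate the previous level's signals."""
    return np.tanh(w @ x)

# Three successive levels of analysis with arbitrary random weights.
weights = [rng.normal(size=(6, 10)),
           rng.normal(size=(3, 6)),
           rng.normal(size=(1, 3))]

stimulus = rng.normal(size=10)     # stand-in for raw sensory input
activity = stimulus
for w in weights:
    activity = layer(activity, w)  # deeper, more abstract description

score = activity.item()
# The network only produced a number; the "intention" is in this final line,
# written by the designer, not in the network itself.
print("brush detected" if score > 0 else "no brush")
```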

Working from the outside in, or the inside out?

These difficult questions have divided neuroscientists into two camps: outside-in (OI) and inside-out (IO). For the outside-in camp, the most classical position since Aristotle, the outside world engraves itself on the intimacy of the brain: an organ designed to understand the nature of its environment, the brain is a reflection of it. For IOs such as György Buzsáki, author of ‘The Brain from Inside Out’, the brain has an internal dynamic that is not fixed a priori. It randomly triggers behaviors that acquire meaning when confronted with the context. Intentions thus differ from simple passive representations.

Each side has excellent arguments. OIs easily explain the influence of the world and mental programming through learning. IOs explain our fickle and irrational thoughts, emancipated from the world. Each side struggles where the other excels. OIs fall silent when the brain makes incomprehensible decisions, inconsistent with its own predictions. IOs flounder when asked to detail the “randomness” of the brain’s creations: from what could it imagine new behaviors, if not from what it has already experienced, at the risk otherwise of initiating suicidal solutions?

Help!

It is impossible to get by with an exclusively OI or IO approach. They are complementary. What is missing is a theory that can bring them together. This is the objective of Stratium, which links the “pixels” of the sensory screen to the higher concepts through a hierarchy of levels of information, each relatively independent: a level integrates the parameters of the previous ones, producing an autonomous emergence. The meaning at the integrating level is stable within certain limits of variation of the underlying parameters.

For example, an apple remains ‘apple’ even when you flatten its circumference (some varieties are more spherical than others). On the other hand, if its peduncular hollow reverses into a bulb, there is a limit beyond which the information becomes ‘pear’. The integrating level forms a concept that is an approximation of the image. The concept covers many of its variants and changes abruptly when the limits are crossed.
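A minimal sketch of that idea (the parameter names and the threshold are invented for illustration, not taken from Stratium): the integrated label stays ‘apple’ across a wide range of the underlying shape parameters and flips abruptly to ‘pear’ only when one of them crosses a limit.

```python
def integrate_fruit(flatness: float, stalk_hollow_depth: float) -> str:
    """Integrating level: reduce two shape parameters to one stable concept.

    flatness: 0 = perfect sphere, 1 = strongly flattened.
    stalk_hollow_depth: positive = hollow around the stalk, negative = bulging neck.
    The threshold is illustrative, not a measured value.
    """
    if stalk_hollow_depth < 0.0:   # hollow reversed into a bulb: limit crossed
        return "pear"
    return "apple"                 # stable over any amount of flattening

# The concept absorbs wide variation of the underlying parameters...
print(integrate_fruit(flatness=0.1, stalk_hollow_depth=0.5))   # apple
print(integrate_fruit(flatness=0.9, stalk_hollow_depth=0.2))   # still apple
# ...and switches abruptly when the limit is crossed.
print(integrate_fruit(flatness=0.4, stalk_hollow_depth=-0.1))  # pear
```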

Let’s finally capture the intention

The intention can be defined fundamentally as follows: it is to keep the world within the limits of an integrated representation even as its constitution varies. A desire exists by itself. The image of the apple exists by itself. Close your eyes. The image is still there, behind your eyelids. If the apple is appetizing, closing your eyes does not make your desire to taste it disappear.

The integration of one level of information into another has several possible solutions: apple, pear, peach, etc. Other parameters consolidate the result: taste, smell, color, the tree bearing the fruit. A natural competition between the different solutions ends up selecting one. It is no longer pure perception but the intention to designate the fruit before your eyes as ‘apple’. A small child who has not yet encountered different fruits would not have this discriminating level.
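A hedged sketch of that competition (the candidates, features and scores are invented): several candidate integrations score the same percept, the additional parameters consolidate one of them, and designating the winner is the minimal ‘intention’ described above.

```python
# Candidate integrations of the same percept, each a set of expected features.
# All feature values are illustrative, not empirical.
candidates = {
    "apple": {"round": 0.9, "sweet_smell": 0.6, "red_or_green": 0.8, "on_apple_tree": 1.0},
    "pear":  {"round": 0.4, "sweet_smell": 0.7, "red_or_green": 0.3, "on_apple_tree": 0.0},
    "peach": {"round": 0.8, "sweet_smell": 0.9, "red_or_green": 0.2, "on_apple_tree": 0.0},
}

# What the senses report about the fruit in front of us.
percept = {"round": 0.85, "sweet_smell": 0.5, "red_or_green": 0.9, "on_apple_tree": 1.0}

def match(expected: dict, observed: dict) -> float:
    """Higher score = better agreement between the candidate and the percept."""
    return -sum((expected[k] - observed[k]) ** 2 for k in observed)

# Natural competition: the best-supported integration wins and is designated.
winner = max(candidates, key=lambda name: match(candidates[name], percept))
print("designated:", winner)
```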

Natural selection in the brain…

The more the levels of information pile up, the deeper the complexity of the concept and the more independent it becomes of primary perception. The mind takes flight, creating representations that do not exist in the world, solutions waiting to find a corresponding reality. The imagination is particularly astonishing when the basic objects being handled are already abstractions, for example numbers. What perceptions are to sensory areas, numbers are to prefrontal areas: objects to be organized. Different postulates compete. It is by confronting them with the logic of the world that some survive at the expense of others. There is indeed a natural selection of ideas, fed by their incessant mutations.
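As a loose illustration of this selection of ideas by confrontation with the world (everything here is invented: the “logic of the world” is a fixed target number and the “ideas” are candidate guesses), a few lines of mutation and selection:

```python
import random

random.seed(0)

WORLD = 42.0                      # stand-in for the logic of the world

def fitness(idea: float) -> float:
    """An idea survives to the extent that the world confirms it."""
    return -abs(idea - WORLD)

# Start from postulates the mind already holds, then mutate and select.
ideas = [random.uniform(0, 100) for _ in range(8)]
for generation in range(30):
    # Incessant mutations: each surviving idea spawns slightly altered variants.
    variants = [i + random.gauss(0, 2.0) for i in ideas for _ in range(3)]
    # Confrontation with the world: only the best-confirmed ideas survive.
    ideas = sorted(ideas + variants, key=fitness, reverse=True)[:8]

print("best surviving idea:", round(ideas[0], 2))
```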

… to the point of free will

Inside-outs are right: the brain generates independent thoughts that seek themselves in the world and are reinforced by its confirmation. Outside-ins are right, and before the inside-outs: independent thoughts are not random but different possible integrations of data provided by the world. It is the world’s regularities that shape us. But by raising our Stratium, our staging of mental complexity, we gradually gain an increasingly marked independence from the world, to the point of imposing on it intentions it did not contain. Human will, the efflorescence of reality.

*

Buzsáki, G. (2019). The Brain from Inside Out. Oxford University Press.

1 thought on “How does the brain represent the world?”

  1. Good piece. The baby is doing, albeit subconsciously, what millions of babies have done before. It is beginning to learn about its life and the world, which at present appears to revolve around all its needs and wants. This appearance will devolve, gradually, over time. Notice I did not use the term unconscious: that would mean nothing is happening at all in any conscious sense. A book by Jean Piaget, written many years ago, captures this sequential development succinctly. He said children develop pre-operationally, then concrete operationally, then formal operationally. The baby, unless it is a burgeoning savant, is in the pre-op phase of conscious growth. These phases follow a pattern, mostly, associated with increases in chronological age and attending experience.
