1000 brains gathered to vote: it’s you!

Cardinal principles from ‘A Thousand Brains’ by Jeff Hawkins:

1. Modular brain: The brain is not a monolithic entity but composed of countless modules that function independently.
2. Functional uniformity of the neocortex: The modules are the columns of the neocortex. They are versatile and deal with reasoning as well as perception or language.
3. Reference frames: Reference frames are mental structures that allow us to organize not only sensory information spatially, but also abstract ideas.
4. Predictive brain: Mental activity is a continuous cycle of prediction and correction, at the heart of learning and decision-making (a toy sketch of this loop follows the list).
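
To make principle 4 concrete, here is a toy sketch of such a predict-and-correct loop. It is my own illustration in Python, not code from the book, and every name in it (predict, correct, the learning rate) is invented for the example.

```python
# Toy illustration of the predict/correct cycle (principle 4).
# All names and values are invented for the example, not taken from the book.

def predict(model, context):
    """Expected value for a context; 0.0 when nothing has been learned yet."""
    return model.get(context, 0.0)

def correct(model, context, observed, rate=0.1):
    """Nudge the expectation toward the observation; the error drives learning."""
    error = observed - predict(model, context)
    model[context] = predict(model, context) + rate * error
    return error

model = {}
for observed in [1.0, 1.0, 0.0, 1.0, 1.0]:
    error = correct(model, "doorknob-under-the-hand", observed)
    print(f"prediction error {error:+.2f}, expectation now {model['doorknob-under-the-hand']:.2f}")
```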

    Jeff Hawkins’ book is remarkably well constructed. His popularization work is exemplary. Although it covers very technical subjects, you never get bored. He chose a Sherlock Holmes style and the little touches on his personal life liven up the investigation without ever risking the heaviness of a biography. His theory of neocortical columns is convincing and I learned a lot about neural mechanisms from this book. However, the title is too ambitious. This is not a new theory of intelligence.

    The flat mind, inhabited by homunculi

    Why such a terse introduction, when the book is so well done? Like many computer scientists and neuroscientists, Jeff is not trained in philosophy of mind. However much care goes into describing neural micromechanisms, the essential question about intelligence is not mechanical; it is: how do these neural networks come to experience their own intelligence, so as to be able to talk about it? Until this problem is resolved, the term ‘intelligence’ means nothing, because there is nothing to ground its definition.

    Jeff touches on the problem on page 92: “A column does not know what the signals coming to it represent, and has no prior knowledge of what it is supposed to learn. A column is just a mechanism made up of neurons that strives to blindly discover and model the structure of what causes the incoming signals to change.” This frame of reference that the neocortical column constantly adapts: for whom does it produce it? For which homunculus seated at the brain’s control panel?

    The ghost is real

    With his columns, Jeff discovered a good model of the brain as a computer with a multitude of tiny processors. But who is the output of this user interface intended for? The excellent neuroscientist Hawkins dismisses the question because, for him, it has no legitimacy. If the columns are devoid of intention and their activity makes one appear, this can only be an illusion, an automatic phenomenon that there is no need to explain, any more than it is necessary to explain magnetism when electrons flow through a coil. The world is like this. Modeling equals explanation.

    A classic error, every philosopher will protest. It signals that one has removed oneself from reality and taken up the ‘divine point of view’. The world is like this… because I conceive it like this, with its so-called natural laws. How do we reintegrate into reality? We must abandon such easy alibis and convince ourselves that it is the illusions, not the phenomena, that do not exist. Of course, it is possible to delude oneself by attaching an erroneous explanation to a phenomenon. But the phenomenon itself is not debatable. If you see a ghost, probably someone is playing a joke on you, or perhaps your brain itself is playing the trick, but the phenomenon of your perceiving the ghost is very real.

    A reference frame that speaks

    Placing oneself at the divine point of view without realizing it has perverse effects. It becomes possible, in all good conscience, to assign the power of intention to whatever level one likes. It is usually our specialty that decides. The psychologist attributes free will to the mind. For the neuroscientist it is the brain that acts; our neurons decide. For a biologist it is cells and metabolic pathways, and for a geneticist a DNA sequence. A physicist will laugh at them all. Organs, neurons and chromosomes are illusory constructions, devoid of the slightest objective. They all come down to excitations of quantum fields. Unfortunately these are far too numerous to compute; otherwise the disciplines of the physicist’s dear colleagues would quickly sink into oblivion, and quantum models would be incomparably more precise and authentic than those of the neuroscientists.

    Jeff therefore places causality in the neocortical columns, the level of organization he knows best. They are his little homunculi who direct our lives. He imagines, on p95, what they could say if they could speak: “I created a frame of reference which is fixed to the body…”. He has excellent intuitions, understanding for instance that the columns dedicated to the body and those dedicated to external objects are what are supposed to make the difference, to delimit the ‘me’. He is right to assume that it is the origin of the incoming signals that establishes this difference. At the output we therefore have references to the body for certain columns and references to the outside world for others. But who uses them? There is no neo-neocortex above to manage these differences. So how does the self separate from the non-self?

    In the nursery of dimensions

    The major interest of column theory is to show that mental representations are handled by the same type of elementary cerebral organization, whatever the origin of the signals. Even for purely abstract concepts, the same neural microprocessor is at work. This is a big step forward for the theory that I myself defend, because it frees the brain model from its compartmentalization into anatomical areas dedicated to specific tasks. The brain is not a computer made up of functional modules but a whole that is conscious of its own constitution. Homogenizing the processing of material and virtual objects is the first step towards a global model of the brain based on information processing. It then hardly matters whether the material substrate of this processing is carbon or silicon. A theory that stops at the neuron falls short.

    Jeff is on the threshold of the complex dimension when he wants to convince us that his columns deal with more than the three spatial dimensions, that they can handle more of them for abstract concepts. Yes! But does he realize that the concepts he takes as examples, politics or mathematics, use dozens of dimensions to assemble more basic concepts, which are themselves made up of numerous criteria? From three dimensions we move on to a multitude that the columns cannot manage independently. The questions keep coming: how do these columns manage them together? How deep is their complex organization? Why do higher concepts gain consciousness while the majority, in the unconscious, remain inaccessible?

    What are the mechanics of the experience?

    Because if there is one area where Jeff needs to advance, it is that of complexity. He writes on p107 about the nesting of concepts: “To learn the coffee cup decorated with the logo, the column creates a new reference frame where it stores two things: a link to the cup frame already acquired and another to that of the logo already acquired […] it is only a matter of adding a few synapses.” Unfortunately no. A few synapses add a few electrochemical stimuli between two columns. This observation does not explain how one concept mixes with another. What is nesting, this strange phenomenon in which the basic concepts seem to have given way to the higher concept, while the higher one would not exist without its base? No light here on this crucial subject.
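
    Taken literally, the mechanism Hawkins describes is easy to write down, which is what makes the gap more visible. Below is a minimal sketch, in Python, of a composite reference frame that merely stores links to frames already acquired; the class and its names are mine, not the book’s, and the point is precisely that this structure says nothing about how the two linked frames come to be experienced as one object.

```python
# Minimal sketch of the p107 mechanism as I read it: a composite reference
# frame that only stores links to frames already learned.
# Class and field names are illustrative, not taken from the book.
from dataclasses import dataclass, field

@dataclass
class ReferenceFrame:
    name: str
    links: list = field(default_factory=list)  # the "few synapses" to other frames

    def nest(self, other: "ReferenceFrame") -> None:
        """Add a link to an already-acquired frame."""
        self.links.append(other)

cup = ReferenceFrame("coffee cup")
logo = ReferenceFrame("logo")
cup_with_logo = ReferenceFrame("coffee cup with logo")
cup_with_logo.nest(cup)
cup_with_logo.nest(logo)

# The structure exists, but nothing here says how 'cup' and 'logo' are
# experienced as a single object, which is the objection raised above.
print([frame.name for frame in cup_with_logo.links])
```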

    A whole part of the enigma of intelligence thus remains in the shadows, and it is not unrelated to the philosophical problem stated earlier, which remains intact once we have closed the book. How do the neocortical columns have the experience of their own functioning, and how do they merge this experience to give the one experienced by a human being, descended from the divine point of view into reality? To put it in terms beloved of neuroscientists, what are the mechanics of the experience? How do neurons already experience themselves as more than a collection of biomolecules? It is not possible to talk about intelligence without someone watching the outcome of brain processes, and in Jeff Hawkins’ book that someone is absent. There is no observer in his brain, just a multitude of microprocessors that perpetually process incoming information and make predictions.

    Complexity divides more than it unites

    In Jeff’s defense, the concept of a conscious observer cannot be explained by neuroscience alone. Since we must first understand how a neuron is more than a set of molecules, our entire physical reality needs to be looked at with a new eye. The “new theory of intelligence” is necessarily a new theory of integral reality. Transdisciplinary. The functioning of the columns that results in intelligence does not start from the base of the neocortex; it starts from the basis of complexity, from those quantum fields which structure reality and which are themselves possibly the emergence of something else. The solution to intelligence is not in the elements of the structure but in the structure itself, in the complex dimension which makes it appear.

    Concluding this way seems rather consensual, and that is where the problem lies. When I talk about the “complex dimension”, everyone agrees to recognize its importance. Yet the term clearly divides my readers. On one side, the vast majority, including Jeff, see complexity as a simple inherent characteristic of the world, a property no more possible to discuss or decompose than the passage of time. On the other side, the rare specialists in complexity perceive it as the most fundamental of dimensions. We are overwhelmed with observations about it, yet still have no model of it.

    *

    Reading notes

    p116: What a column learns is limited by its inputs. For example, a tactile column cannot learn a cloud pattern and a visual column cannot learn melodies.
    Comment: Clumsy, and it contradicts the universality of the column defended so far. A column deals with regularities in signals, regardless of their origin. This explains the brain’s very great plasticity after neurological lesions: other columns can take over the signals that used to reach the injured parts. A column has no identity on the signal side (the object side, in the world). These lines show that Jeff does not know how to ground the meaning of frames of reference.

    p117: In the network simulations that [Jeff and his colleagues] create, even losing 30% of neurons generally has only a minor effect on network performance. Likewise, the neocortex never depends on a single cortical column. […] Our knowledge of a thing is distributed across thousands of cortical columns. The columns are not redundant, nor are they carbon copies of each other. But above all, each column is a sensorimotor system in its own right, just as each water-company employee is capable of repairing a portion of the water distribution infrastructure alone.
    C: Complexity does not work that way. The significance of neurons varies greatly depending on their position in the networks. Some are insignificant and others play a crucial role for the concept. A common experience undermines Jeff’s interpretation: the strange phenomenon of forgetting a word while knowing that it exists. If the columns backed each other up as Jeff thinks, the word would still be there. In fact, it only takes the loss of a single neuron, or even of a few synapses, for the word to disappear. But it is reconstructed from graphs deeper in complexity. Critical connections sometimes reform within seconds, or within minutes when dendritic growth is necessary: a neuron is assigned a new symbolism in the upper graphs, that of the lost word.

    p119: Researchers have long assumed that the various inputs to the neocortex converge at a point in the brain where something such as a coffee cup would be perceived. This assumption follows from the theory of the hierarchy of features. However, the connections that we observe in the neocortex do not look like this. Far from converging, they go in all directions. This is one of the things that makes the binding problem mysterious, but we have proposed an answer to it: the columns vote. Our perception is the consensus obtained by the vote of the columns.
    C: Who records the result of the vote and acts on it? Always the same misunderstanding of the complex process. Its hierarchy is not that of anatomical centers, like a society with its institutions. Besides, even in a human society the elements of the hierarchy are distributed across all heads; institutions serve to express the right elements in the right place. There is indeed a geographical location of the decision, but it rests on a much more widely distributed constitution. The functioning of the brain is indeed a hierarchical democracy: neither the tyranny of nerve centers conceived by the first neurologists, nor the anarchy of columns depicted by Jeff.
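
    To make the objection tangible, here is the kind of vote the quoted passage suggests, reduced to a toy Python sketch with invented columns and guesses. The consensus is trivial to compute; the open question is who, inside the brain, reads it and acts on it.

```python
# Toy sketch of column voting: each column proposes a hypothesis about the
# perceived object and the "perception" is taken to be the majority vote.
# The columns and their guesses are invented for illustration.
from collections import Counter

column_guesses = {
    "tactile-column-17": "coffee cup",
    "tactile-column-18": "coffee cup",
    "visual-column-203": "coffee cup",
    "visual-column-204": "soup bowl",
}

votes = Counter(column_guesses.values())
consensus, count = votes.most_common(1)[0]
print(f"consensus: {consensus} ({count}/{len(column_guesses)} columns)")
# The question left open in the comment above: who reads this consensus and acts on it?
```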

    p123: We describe computer simulations that show how learning occurs and how voting takes place quickly and reliably.
    C: Digitally reproducing a function does not show that we have understood it. On the contrary, specialists in simulated neural networks admit that they do not know what is happening in the “black box”.

    p124: If we could look at the neocortex from above, we would see a stable pattern of activity within a layer of cells. This stability would cover vast areas, thousands of columns. These are the voting neurons.
    C: A delicious illustration of the divine point of view. Conscious fusion does indeed come from a vast graph hidden within the crowd of neurons, but there are no voting and non-voting neurons. All are part of graphs that synthesize the underlying elements and observe them, which avoids any recourse to the divine gaze.

    p129: The theory of a thousand brains solves the puzzle of how neurons know what the next incoming message will be while the eyes are still moving. Since each column has models of entire objects, it knows what should be perceived at each point of the object. If a column knows the current position of its entry point and the movement the eyes are making, it can predict the new location and how it will feel there, just as, looking at a city map, we can predict what we will see if we start moving in this or that direction.
    C: Neither neurons nor columns “know” anything. Let’s stop transferring to them an intelligence that belongs only to the brain considered as a whole. A frame of reference awakens under the effect of incoming stimuli. It has a certain independence from them: a whole range of stimulus configurations awakens the same frame of reference. But above all, the frame of reference has an extended identity over time. It includes the future of the situation that aroused it. A response is started automatically on the basis of this included, expected future. It can then be modified by a clear change in the stimuli, when their configuration leaves the range that aroused the frame of reference. It is the theory of neural graphs superimposed on complexity that solves the enigma, not that of a thousand brains.
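
    For the record, the prediction step described in this passage can be caricatured in a few lines. The sketch below is my own reading of it, with an invented object map and movement: a learned mapping from locations to features, shifted by a known movement, yields the expected next sensation.

```python
# Sketch of the p129 prediction step as I read it: a column that has learned an
# object as a small map from locations to features can predict what it will
# sense after a known movement. The toy map and movement are invented.

object_map = {           # location on the object -> feature expected there
    (0, 0): "handle",
    (0, 1): "rim",
    (1, 1): "logo",
}

def predict_next(current_location, movement):
    """Apply the movement to the current location and look up the expected feature."""
    next_location = (current_location[0] + movement[0],
                     current_location[1] + movement[1])
    return next_location, object_map.get(next_location, "unknown")

location, feature = predict_next((0, 0), (0, 1))
print(f"after the movement, expect to sense '{feature}' at {location}")
```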

    p129: The binding problem postulates that the neocortex has a single model of each object in the world. The theory of a thousand brains reverses this hypothesis by proposing the existence of thousands of models of each object. The various messages reaching the brain are not linked or combined into a single pattern.
    C: Complexity is precisely the magic that allows thousands of potential models to be merged into a single conscious representation. There is both the potential of thousands of alternatives and the organization to select a single outcome. The result is the (determined) configuration of the probabilities of all the (indeterminate) models. Both hypotheses, single model and multiple models, are correct!
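
    A toy illustration of this last point, with invented numbers and an arbitrary combination rule: several models each propose an (indeterminate) probability distribution, and a single (determined) configuration emerges from their combination.

```python
# Invented example: each model gives its own probability distribution over
# hypotheses; multiplying and renormalizing them yields one determined
# configuration. The numbers and the combination rule are arbitrary choices.

models = [
    {"coffee cup": 0.7, "soup bowl": 0.3},
    {"coffee cup": 0.6, "soup bowl": 0.4},
    {"coffee cup": 0.9, "soup bowl": 0.1},
]

combined = {}
for hypothesis in models[0]:
    p = 1.0
    for model in models:
        p *= model[hypothesis]
    combined[hypothesis] = p

total = sum(combined.values())
combined = {h: p / total for h, p in combined.items()}
print(combined)  # one determined configuration out of many indeterminate models
```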

    p131: As I walked through the door of his office, Mountcastle stopped me, put his hand on my shoulder, and said to me in a tone of firm recommendation: “You should stop talking about hierarchy. In truth, there is none.”
    C: And that is how Jeff was drawn into the camp of the ‘flatists’ (neuroscientists convinced that the mind is flat, devoid of hierarchy). Let us remember that the discourse of our masters always stops at the limits they themselves were unable to cross.

    I stop these comments at the end of the first part of the book. The rest concerns AI, a different subject that I have covered in detail elsewhere. Based on the book’s surveys, Jeff seems to be on the optimistic side, convinced that any nastiness that might arise from AIs will only be our own. AIs are by nature free of outdated instincts unsuited to today’s society. He is right, of course, which does not free AIs from potential danger. As with our children, everything depends on the kind of education we give them. AIs are no more independent than humans are. Let’s hope they do better in terms of solidarity.

    *
