This article will be of interest to philosophers of mind. It develops the notion of the mental inner scene and grounds it neurologically. It connects the notions of reactive and cognitive agent, in other words our perceptual automatisms and our decision generators.
Is a place a framework or an agent?
Perhaps the question seems abstruse, even absurd. How important can the place where we are be? Usually it is treated as a framework within which the deciding mind evolves. The agent, that which acts, is this decision-making mind. The place exerts an influence but does not change the decision-maker's personality, we think. A passive role, not an active one.
I am talking about the physical place, not the beliefs we associate with it. Of course, if a place has particular cultural or personal importance, we do not hesitate to attribute power to it. The place can even become the main character of a story, like the beach in Camus's The Stranger. But it is precisely as a historical character that the place holds such power. Here our question is limited to the place as a material element, as a source of data for the senses. Think of ‘place’ as something invisible to you because it is so banal.
Things have become more complicated with the new models of mental functioning. The entire scene takes place inside the mind, framework included. The place intervenes as a mental representation, a neural pattern like any other. From an electrochemical point of view, the activation of this network carries neither more nor less weight than the representations of the Self, which are considered more essential to personal behavior. Another discovery: there is no decision-making center. No CEO of consciousness comes to look at the scene and take the necessary measures. It is the scene itself that decides. The decor is active! So why refuse it the title of agent?
From automation to self-organization
We will grant it that title. However, not all agents have the same decision-making power. On the one hand, some elements of the scene really are labeled as static. The mind does not mobilize for a place the vast resources needed to image another human being. The place is permanent; the human is more complicated to update. The stage has its players and its furniture.
On the other hand, the representations follow a hierarchy. Raw sensory perceptions are the most servile, rule-abiding agents. No deviation is expected of them. Conscious agitation, by contrast, is an aristocracy of thoughts that does as it pleases. Knowing the place does not let us predict its choices. It may even decide to do the opposite of what is expected in such a place. Small rebellions provide pleasant feelings of reward.
This hierarchy between basic and higher representations is the same as that between reactive and cognitive agents, between automation and self-organization, or between first-order cybernetics (Wiener) and second-order cybernetics (Ashby, Maturana, Varela). A reactive agent acts automatically on the parameters it receives, a simple mirror of those parameters. A cognitive agent maintains a genuine representation of its environment and chooses among several options.
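The contrast can be sketched in a few lines of code. This is a toy illustration, not anything from the source: the agent names, options, and numeric values are all invented for the example.

```python
def reactive_agent(stimulus: float) -> str:
    """Reactive agent: a fixed mapping, a simple mirror of its parameters."""
    return "approach" if stimulus > 0.5 else "ignore"

class CognitiveAgent:
    """Cognitive agent: holds a representation of its environment and
    chooses among several options against that internal model."""
    def __init__(self) -> None:
        # Internal model: expected value of each option (illustrative numbers).
        self.model = {"approach": 0.2, "ignore": 0.1, "explore": 0.4}

    def act(self, stimulus: float) -> str:
        # The choice weighs the stimulus against the internal model, so the
        # same stimulus can yield different actions as the model changes.
        scores = {opt: val + stimulus * (opt == "approach")
                  for opt, val in self.model.items()}
        return max(scores, key=scores.get)

    def learn(self, option: str, reward: float) -> None:
        # Experience reshapes the model, hence future choices.
        self.model[option] += 0.5 * (reward - self.model[option])
```

The reactive agent always answers the same way to the same stimulus; the cognitive agent's answer depends on a history the stimulus alone does not reveal.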
Behaviorism vs cognitivism
But the difference seems contentious: the cognitive agent does not choose at random, so it must use hidden parameters. How does it collect them? And if they were revealed, would that not make it an automatism, just as determined as the reactive agent? This dispute founded behaviorism, which sees the human mind as a sophisticated automatism, having the appearance of free will only because the number of hidden parameters is immense.
Cognitivism manages to oppose behaviorism by reversing the direction of automatism: it is no longer the parameters that form the representation and trigger the reaction, but the representation that seeks itself in the parameters and reacts if it recognizes itself. Is the result the same, you may think? No, there is a fundamental difference: the result is now grounded in the identity of the representation and in the way it was formed. Ownership of the result shifts from the parameters to the representation. Let's see how everything is turned upside down:
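One way to picture "the representation seeks itself in the parameters" is pattern recognition: a stored pattern scans the incoming data and reacts only if the data resemble it. The sketch below is an assumed illustration (the vectors and the 0.9 threshold are invented), using cosine similarity as the measure of recognition.

```python
import math

def recognizes(representation: list[float], parameters: list[float],
               threshold: float = 0.9) -> bool:
    """The stored pattern 'fires' only on inputs that resemble it
    (cosine similarity above a threshold)."""
    dot = sum(r * p for r, p in zip(representation, parameters))
    norm = (math.sqrt(sum(r * r for r in representation))
            * math.sqrt(sum(p * p for p in parameters)))
    return norm > 0 and dot / norm >= threshold

# A representation formed by past history, and two candidate inputs.
beach = [1.0, 0.8, 0.1]
sunny_shore = [0.9, 0.9, 0.2]   # resembles the stored pattern
dark_forest = [0.1, 0.2, 1.0]   # does not
```

The parameters no longer trigger anything by themselves; the representation owns the result, reacting to what it recognizes as itself.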
Identity has a temporal dimension
The representation has a history, unlike the instantaneous values of the parameters. It integrates the past, the sequence of previous values. It is not a memory but a synthesis, an identity configuration. A ferment of personality. It is also information independent of all the values that constituted it: some values changed it, others did not. The representation approximates everything that was recorded. The representation also has a future. It is not fixed. Its temporal extent is very different from the perpetual present of instantaneous data.
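A running average makes this concrete: it is a synthesis of the whole history, not a memory of individual values. This is an assumed toy model (the class, the 0.2 rate, and the observation sequence are invented for illustration), not the author's formalism.

```python
class Representation:
    """A representation as a running synthesis of its history."""
    def __init__(self, rate: float = 0.2) -> None:
        self.value = 0.0   # the current identity configuration
        self.rate = rate   # how strongly a new value reshapes it

    def integrate(self, observation: float) -> None:
        # Each observation nudges the synthesis; what remains approximates
        # everything recorded without storing the record itself.
        self.value += self.rate * (observation - self.value)

rep = Representation()
for obs in [1.0, 1.0, 0.0, 1.0, 1.0]:
    rep.integrate(obs)
# rep.value now summarizes the sequence: some values moved it, others barely.
```

Note that the final value depends on the order of the sequence, not just its contents: the representation has a temporal dimension that instantaneous parameters lack.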
Why does the reactive agent appear so different from the cognitive agent? An automatism seems as devoid of intention as a rock. The dynamics are on the side of the parameters; that is why we make them owners of the result. Error! The automatism is simply blocked in its evolution because its programmer wanted it that way. Its identity does exist, as with any agent, but it is reduced to a snapshot of the programmer's intention. In isolation, the automatism seems involuntary, but place its creator at its side and it suddenly acquires personality. It is indeed a fragment of its author's will, far from the passive mechanism that is its usual label.
Freedom for the automaton
Reversing the causal direction of automatism erases the difference between reactive and cognitive agents. The reactive agent is a fixed intention, an isolated cognitive level, with no way to form others. If we allowed it to interact with other agents, the intelligence of the whole would leap immediately.
This is what neural groups do together. But they do not do so on an equal footing: depending on their position in the network, their hierarchical status differs. Let us get rid of the horizontal picture, that of neurons all aligned by an identical physiology and function. Their symbolism, a key element of their function, is hierarchized from one neural group to another. It is only through this vertical dimension that we can understand the deepening of meaning that the brain allows.
Spanning the big gap
Now it is easier to see the big gap between our basic and superior representations, in other words between the physical place and the scene with its characters. The neural groups that symbolize them are physically very similar; the main difference is their location in the gigantic graph formed by our hundred billion neurons and their hundreds of trillions of synapses. The basic representations sit at the periphery of the graph, in the first rank, processing sensory data: binary data, without complex depth. It is higher up the hierarchy, integrated together and analyzed from different angles, that they become enriched in meaning.
The agent constituted by this first-rank group is reactive. It always provides the same output for the same set of sensory data. But remember: we have reversed the direction of automatism; it is this group that owns its result. If the result is immutable, it is because the group has no reason to change it. Since birth, the same sensory stimuli have led to the same result. There is no programmer, but if one had to be designated it would be natural evolution, which would have eliminated this genetic coding had the result harmed the organism's survival.
The history of the first rank has not changed since birth; its temporal extent is brief. The same is not true of the integrator groups higher up in the hierarchy. These have often changed their configuration, as parameters were added to the synthesis and a multitude of reports of different natures arrived from the lower levels. At the top, the dynamic is such that thoughts rarely follow one another in the same order. Identity is changing. The neural aristocracy frolics from one subject to another and makes different choices from one day to the next. It emancipates itself from the relative rigor of the frame built by the lower levels.
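The two ranks can be sketched side by side. This is a hypothetical two-level model (the detector functions, weights, and pixel values are all invented): fixed first-rank detectors always report the same thing, while the integrator above them keeps changing what those reports mean.

```python
def edge_detector(pixels: list[float]) -> float:
    """First rank: immutable, same output for the same sensory data."""
    return max(pixels) - min(pixels)

def brightness_detector(pixels: list[float]) -> float:
    """Another fixed first-rank report."""
    return sum(pixels) / len(pixels)

class Integrator:
    """Higher rank: its synthesis of the lower reports is re-weighted over time."""
    def __init__(self) -> None:
        self.weights = [0.5, 0.5]   # a changing identity configuration

    def interpret(self, pixels: list[float]) -> float:
        reports = [edge_detector(pixels), brightness_detector(pixels)]
        return sum(w * r for w, r in zip(self.weights, reports))

    def reweight(self, new_weights: list[float]) -> None:
        self.weights = new_weights  # the same scene now means something else

scene = [0.1, 0.9, 0.5]
top = Integrator()
before = top.interpret(scene)
top.reweight([0.9, 0.1])
after = top.interpret(scene)    # same input, different meaning at the top
```

The first-rank outputs never vary for a given scene; only the integrator's changing configuration makes the same place mean something new.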
Behaviorism is strongly present in the recognition of place. It gives way to cognitivism, or even illusionism, once consciousness has installed its actors. All this within the same mind.
The place is indeed an agent: the one that makes the framework, in which the higher agents recognize themselves. But between the two lies enough complexity to settle… reason or madness.