Does a dog have any ideas?

The “third man” objection

The philosopher Martin Legros observes his dog confidently recognizing its fellow dogs, even though they sometimes look very different from one another, and wonders whether it has any ideas.

Thought is often schematized as follows: ideas (idein, ἰδεῖν, means "to see") are abstract, ideal and general forms, like the form of a man or a tree, which allow us to discern in the sensible world the appearances of a particular man or tree. Plato warns of the insufficiency of such an explanation and of the risk of infinite regress: if it were indeed the general idea of man that allowed us to recognize the concrete individuals we come across, a third instance would in fact be needed to identify what the concrete men and the idea of man in our heads have in common, and so on to infinity. This is the "third man argument."

Without ideas, what is the difference between a dog and a robot?

Martin transposes this objection to his dog's mind and concludes that it is sensory experience that traces the typical pattern of the dog, a silhouette onto which its own is projected. The dog is therefore "without ideas".

Like the ancients, Martin makes some fundamental errors in this reasoning. On the one hand, it slips into the background an "I" of the dog or the man that would possess the ideas and the sensory experience. No such thing exists, unless one believes in the soul. We are our ideas and our experience, without any intermediary relationship. There is no homunculus to make use of them.

An approximation of the world that makes up our identity

The homunculus objection is not quite the "third man" objection. The third man would rather be a kind of neurological operator that compares each sensory pattern to a known one and declares whether or not they match. And indeed, this third man does not exist. Neural networks do not work that way: the existing patterns recognize themselves in the sensory signals and activate on their own. This identification involves a great deal of fuzziness. Different sets of signals, for example several images of dogs that look nothing alike, all end up activating the "dog" schema thanks to this blur. The mind constructs an approximation of the world.
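A minimal sketch of this kind of fuzzy, operator-free activation is given below; the prototype vectors, signals and sharpness parameter are purely illustrative assumptions, not a model of any real brain or of the author's argument:

```python
import numpy as np

# Illustrative "schemas": prototype vectors standing in for learned neural patterns.
prototypes = {
    "dog": np.array([1.0, 0.8, 0.1, 0.0]),
    "tree": np.array([0.0, 0.1, 0.9, 1.0]),
}

def activations(signal, prototypes, sharpness=4.0):
    """Each schema 'recognizes itself' in the signal by how well it aligns with it.
    No third instance compares them; the similarity itself is the activation."""
    acts = {}
    for name, proto in prototypes.items():
        cos = signal @ proto / (np.linalg.norm(signal) * np.linalg.norm(proto))
        acts[name] = float(np.exp(sharpness * cos))
    total = sum(acts.values())
    return {name: a / total for name, a in acts.items()}  # normalized activations

# Two very different "dog" signals (say, a greyhound and a bulldog): both are only
# blurry matches to the prototype, yet both activate the "dog" schema most strongly.
greyhound = np.array([0.9, 0.9, 0.2, 0.1])
bulldog = np.array([1.0, 0.5, 0.3, 0.0])
print(activations(greyhound, prototypes))
print(activations(bulldog, prototypes))
```

The point of the sketch is only that the blur does the work: dissimilar inputs fall into the same basin of activation without any third party comparing them.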

Another fundamental error is to think that an idea is a neural pattern. That is a reductive, "horizontal" vision of how the mind works. An idea is in fact a vertical stack of entangled neural graphs in the complex dimension. This is how retinal pixels are assembled into features, objects, concepts. Each graph is "observed" by another, higher and more synthetic one, across a multitude of successive levels, and it is indeed a sort of "infinite regression", or at least a very large number of levels of observation, that constructs our sensory experience. This is also how our ideas and, finally, our consciousness are enriched in content, thickened by these countless micro-observations of the mind's own process.
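A rough sketch of this vertical stacking, assuming a toy stack where each level summarizes ("observes") the output of the level below; the layer sizes and random weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stack: 64 "retinal pixels" -> 16 features -> 4 objects -> 2 concepts.
# Each level re-describes the level below in more synthetic terms.
layer_sizes = [64, 16, 4, 2]
weights = [rng.normal(size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def observe(signal, weights):
    """Pass a raw signal up the stack, keeping every intermediate description."""
    levels = [signal]
    for w in weights:
        signal = np.tanh(w @ signal)   # a higher, more synthetic observation
        levels.append(signal)
    return levels

pixels = rng.random(64)                # stand-in for a retinal image
for depth, level in enumerate(observe(pixels, weights)):
    print(f"level {depth}: {level.size} units")
```

An idea, in this picture, is not any single layer's pattern but the whole stack of descriptions piled on top of one another.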

The dog's consciousness is simply a lower edifice than ours. And it "is" its ideas, as we are ours, that is to say a world approximately twinned with reality…

*
