Intentional Connectionism

Steven D Marlow
6 min read · Nov 4, 2019


A formal reply to another article.

Carlos E. Perez recently posted an article (AGI via “Selves and Conversations all the way up”) that seemed to occupy the space between the GOFAI and Deep Learning divide, where some form of general intelligence can be reached while still being based on neural networks. On a first read, it seemed to criticize the current models while offering only an unclear formulation of another way. I had no interest in trying to connect the dots, as it felt like the same kind of thinking that takes a concept we can’t fully explain (such as intelligence or consciousness) and simply expands the definition to include all systems. That move requires us either to qualify the kind of intelligence we are talking about, or to accept that consciousness is a fundamental property of the universe.

My only comment to him (via Twitter, naturally) was that looking at the problem on a per-neuron level would be like looking at software on a per-transistor level, and no one would ever suggest we do that. His reply highlighted the world-within-a-pearl argument, to which I responded with a comment about reductionism solving nothing. To be fair, that’s a common argument about not seeing the forest for the trees. His follow-up is the reason I’m going through his article a second time in order to dissect it.

I’m looking to see what the article says about the collective behavior of neurons, with human social interaction as the framework.

Evolutionary Drive

It starts by saying Artificial Neural Networks (ANNs) are a black box that don’t operate in the same way as real neurons, but share similar pattern-matching abilities. I find that a simplistic representation of the entire brain and cognitive pipeline, but OK so far. He then jumps into the idea of the brain having five self-models that seek knowledge in order to preserve themselves. You’d have to examine some of his other posts to understand his meaning, and indeed the larger theme motivating his reasoning. It sounds like he’s trying to connect innate behaviors, or even reflexive traits in animals that don’t require sentient levels of activity, with primitive neural structures (though I’ve just conveyed that idea with greater clarity in one line than he has in multiple articles). How raw brain structure is able to function as independent minds with their own functionality is never clarified or explained.

He makes reference to Daniel Dennett (pro-reductionism) and the evolutionary process of dumb systems with no capacity for comprehension leading to humans that, deep down, continue to operate in that manner. This is what allows dumb, crudely constructed networks to be placed on equal footing with the brain, with evolution replaced by compute power. As with illusionism, there is a grain of truth, but viewed through one of those carnival mirrors that distorts everything around it. Abstraction is not substrate. The operation of systems on top of systems means you can’t point to a clump of cells and expect to find a fully functional copy of the brain, or divide that even further to find a neuron that can see, hear, and speak (or be the fulcrum point for all of human consciousness).

Carlos then begins to swerve all over the road, going from collaboration between neurons to collective intelligence and “nano-intentionality.” It’s a leap to suggest that simple cell behavior is an example of sophisticated intentionality, and I don’t think a case could be made for that in a local cluster of cells, though such an argument has been made for intelligent plants having self-awareness. He also blames the birth of Artificial Intelligence as a field of study, along with dualism, for the move away from cybernetics. That seems an odd claim given that feedback (in any system) is understood as a fundamental mechanism. The failing, if any, goes back to the disconnect between an intelligent action in humans and stimulus-response activity at the cell level.

He’s wrong when he claims that evolution is not an additive process, because each new system or feature becomes part of a more complex design millions of years later. As for the link between parallel development of biological systems (cell and structure specialization, etc.) and parallel thinking in technological innovation… this is the first hint of complex interaction between neurons being mirrored by humans at a societal level. Is the central premise that a single neuron’s drive to survive is echoed at the social level, or that there is a direct connection in which humans are manifestations in the minds of neurons?

There is a part that describes human innovation as being inconsistent with the evolution of living things that lack comprehension (at the level of cells, though the same article also seems to claim cells have delusions of grandeur). Oh, this is also where we are encouraged to leave the laws of physics behind and read yet another background article on generative modularity (good news: it’s in two parts). To his credit, he accidentally mentions that current methods of AI fail because they lack abstraction, but then goes literally all the way back to the Big Bang in order to reiterate the growing complexity and evolution of systems.

I, Neuron

By this point in the article, if you are still reading it, Carlos is talking about the self and awareness. He’s also still talking about neurons having a self. I honestly can’t bridge the gap in logic between a cell that is concerned with self-preservation and an organism that evolved with no ability to comprehend anything. And if you remember, the brain has five selves, presumably constructed from clusters of nano-intentionality. It’s really just a strange way to frame consciousness as a fundamental property, and it’s surely not helping to sell the idea.

Consciousness as a manifestation of all this nano-self activity isn’t an explanation for anything; rather, it’s just the same familiar starting line. What the article is attempting to convey is that we are already building neural networks that may not be aware or intelligent in human terms, but share the same intentions as many of our biological systems. Not expressed in the article, but a useful line of thinking, is the misapplication of neural networks to complex, high-level human tasks when they barely model the activity of the sense of touch in your fingertip.

It would be fine to end there, but the article drags on with the second hint: that the collective intelligence of humans is greater than that of any individual, and thus the same must be true of the collective “intelligence” of neurons. He gives the example of a business’s ability to produce being limited by the structure of the organization, again stating that cognitive activity is a result of how neurons are organized. That organization comes from evolution and basic survival skills. Then he comes full circle to restate how the structure of the brain has an impact on our intelligence and conscious ability.

Buried toward the end of the article is likely the key point that has been poorly expressed up to this point: how can the ways in which humans organize into larger and larger structures inform us about the structure of the brain? A great deal of time and effort went into describing the collective of neurons in the brain as a nano-society of people, and I still feel that he contradicts this claim multiple times. If you look for correlations between biological and mechanical systems, you’re bound to find something. That holds true for electronics and control systems (even if describing the brain in computer terms has been rather limiting over the years).

Conclusion

On the second reading, it feels even more derivative, while also trying to paint an elaborate picture of a flawed concept that serves as a central theme across multiple posts. When challenged on my understanding (“Did you read the article?”), I was faced with two possibilities: either he had failed to articulate his idea, or I had suffered a lapse in reading comprehension. Turns out I’m doing just fine, and no new idea on how to achieve AGI was presented.
