Humans Are Not Merely Algorithms

In his recent book Homo Deus: A Brief History of Tomorrow (2016), Yuval Noah Harari presents what can be called the “algorithmic theory” of biology. In effect, this theory makes the simplistic claim that organisms and affects are nothing but algorithms. Algorithms are commonly understood as processes or sets of rules which are followed in calculations or other problem-solving operations, typically in computers. But are humans essentially algorithms? Can affects or emotional states be reduced to nothing but algorithms? And what wider ramifications might an algorithmic view of organisms have on human society?
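
To make the term concrete, here is a minimal sketch of an algorithm in exactly this textbook sense: Euclid’s procedure for finding the greatest common divisor of two numbers. The example is mine, not Harari’s; it simply shows what a “set of rules followed in a calculation” looks like when written out:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed sequence of rule-governed steps."""
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, remainder) until done
    return a

print(gcd(48, 18))  # -> 6
```

The question the rest of this essay pursues is whether anything like this, however elaborate, could ever constitute a feeling.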

In Consciousness As Feeling: A Theory of the Nature and Function of Consciousness (2019), I argue vigorously against the algorithmic view. While I agree with Harari that this perspective has become the predominant paradigm within contemporary science, I reject it in favor of the ontology of “process philosophy,” an approach developed mainly by the philosopher Alfred North Whitehead, along with William James and Henri Bergson.

An Unsettling Future of Artificial Intelligence and the Algorithmic View

While Harari may not be a committed true believer, he presents a very comprehensive account of algorithmic theory. He states that an organism is nothing but “an assemblage of organic algorithms shaped by natural selection” and that human beings are “not individuals.” Rather, we are a collection of many different algorithms, lacking an inner voice or a single self!

Consequently, algorithmic theory rejects free will: “The algorithms constituting a human are not free. They are shaped by genes and environmental pressures, and take decisions either deterministically or randomly—but not freely.” All of this, according to Harari, is so widely believed within the scientific establishment that, ultimately, its truth should not really be questioned. He proclaims, “Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing.”

He takes the classic functionalist position here: the material through which algorithmic calculations are carried out is irrelevant. Silicon, for example, is as good as organic carbon for this purpose, implying that computers, in addition to being intelligent, may also become conscious; likewise, nineteenth-century mechanical calculators processed algorithms using metal cogs. A practical consequence of this is that artificial intelligence systems may soon know us better than we know ourselves. By monitoring our bodies and brains, such systems could know exactly who I am, how I feel, and what I want! Algorithms could replace the voter, the customer, and the beholder, triggering revolutionary changes in society.

Individualism will collapse, with authority shifting from humans to networked algorithms, thus fulfilling the vision of the determinists. People will no longer be autonomous beings; rather, they will become collections of biochemical mechanisms, constantly monitored and guided by a network of electronic algorithms. As Harari puts it, “The algorithm will know me better than I know myself and will make fewer mistakes than I do. It will then make sense to trust this algorithm with more and more of my decisions and life choices.”

Are Our Feelings More Like Algorithms or Qualia?

Harari claims that the algorithmic view has triumphed in modern science. Why should this be the case, and is there any alternative? One major explanation, I believe, is the prestige of mathematics in the West. In classical Greece, Plato championed the notion that mathematics is the “highest” form of knowledge. With the rise of modern science, mathematics became its principal tool. This can be ascribed partly to its cultural prestige, but also to its promise of certainty. As Galileo famously put it in 1623, “The book of nature is written in mathematics.”

As for an alternative to this dominant prestige of algorithms in science, perhaps the first notion to visit is the controversial philosophical concept of “qualia.” “Qualia” is the philosophical term for subjective sensations and affects—feelings. Instances of qualia include seeing a color, tasting a lemon, or experiencing pain. It could be argued that qualia are, ontologically, the direct opposite of algorithms. (Ontology is the study of ultimate reality: what it is composed of and how it “works.”) We are consciously and continually aware of qualia throughout our waking hours, yet there is no agreed-upon explanation of what they are or how they are generated. In stark contrast, we know exactly what algorithms are: we can write them out and analyze them in minute detail.

Of course, Harari would not agree with this distinction. In fact, he claims that affects, rather than being qualia, are merely algorithms! He says that we experience many of our evolutionarily generated algorithms as “feelings,” such as fear, hunger, or lust. But how can a string of information-processing steps, mechanically manipulating passive symbols, in and of itself ever feel like anything? I can accept that algorithms may cause feelings (i.e., an algorithm may initiate a feeling), but as John Searle argues in The Mystery of Consciousness (1997), such a causal relationship has to involve two different phenomena.
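
To see why I find this implausible, consider a deliberately crude sketch of a “fear algorithm” written out as code. This is my own illustration, not anything Harari or Searle proposes; it merely shows what “mechanically manipulating passive symbols” looks like in practice:

```python
# A crude "fear algorithm": rule-governed symbol manipulation that maps
# a stimulus token to a response token. The strings "snake" and "flee"
# are arbitrary symbols; no step in this process feels anything.

THREATS = {"snake", "fire", "predator"}  # passive symbols, not felt dangers

def fear_algorithm(stimulus: str) -> str:
    """Map a perceived stimulus to a behavioral output."""
    if stimulus in THREATS:
        return "flee"    # the rule fires, but no fear is experienced
    return "ignore"

print(fear_algorithm("snake"))  # -> flee
```

Adding more rules would only lengthen the string of symbol manipulations; it is not obvious how any amount of this could, by itself, amount to the feeling of fear.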

In addition, Harari refers to the problem of “other minds.” In other words, we can never be certain that other beings have the same subjective feelings that we have. Given this, how can ubiquitous and familiar algorithms be equated with our mysterious feelings?

Ontological Speculation and Scientific Theory Are No Different from One Another

The algorithmic view represents a brutal denial of common human experience, something which Whitehead says is a sure marker of a defective ontology. For example, Daniel Dennett, a prominent spokesperson for the algorithmic view, repeatedly argues that while qualia “seem to exist,” in fact they don’t! This misplaced ontological confidence is based entirely on empirically unexamined assumptions. In order to reform the mistaken ontology underlying science, we must break the exclusive reliance upon a logical, analytical approach and be willing to speculate about the ultimate nature of reality.

There is nothing wrong with such speculations as long as they are consistent with the current empirical findings of science. Speculating about ontology is essentially no different from speculating about any other scientific theory. No scientific theory is ever tested directly against empirical findings; instead, theories are broken down into single, true-or-false hypotheses, and only these are tested. If the testing indicates that the majority of the important hypotheses are true, the theory is provisionally accepted.

Why, then, should speculation about ontology be any different from speculation about the thousands of existing scientific theories? Granted, ontologies are far more comprehensive and far-reaching than specific scientific theories, such that they might be described as “deeper,” “higher,” or “grander.” But otherwise, why shouldn’t the same scientific methodology be applied to them as to all other theories in science? After all, ontology is necessary for a) guiding long-term research and b) providing an overarching comprehension of science, contributing to consilience—especially in consciousness studies!

“Black and White Mary”

I believe that reforming scientific ontology requires confronting the problem of qualia. A good place to start grappling with this topic is Frank Jackson’s 1982 thought experiment, “Black and White Mary.” It deals with the problem of color qualia and can be summarized as follows: Mary, a brilliant neurophysiologist, is incarcerated from birth in an exclusively black-and-white environment. Nevertheless, through her research she has learned everything there is to know about how we perceive colors. The crucial question: when she is released and finally able to personally perceive color for the first time, will she learn anything she doesn’t already know? Dennett’s answer is no, she won’t learn anything new, because she already knows everything there is to know about color!

Dennett’s answer, of course, ignores an obvious fact about the human condition—the difference between knowing and experiencing. I, and perhaps a majority of other people, would assert that no matter how much abstract, intellectual, and algorithmic information you accumulate about an experience, there’s an insurmountable difference between knowing something in this way and having a direct experience of it.  

Dennett ignores this distinction between knowledge (or information) and experience (or feeling) because he believes that all brain operations can be reduced to information processing—meaning the manipulation of physical symbols by means of algorithmic rules, resulting in logical inferences. That’s why, according to Dennett, Mary learns nothing new about color: she’s already processed all the information and made all the right inferences. The algorithmic view sees brain function as closely analogous to, or amounting to nothing more than, that of a digital computer.

How Is Experiencing Severe Pain Different from Experiencing Color?

Using color as the traditional example of qualia is confusing; the calm, neutral responses we have while perceiving colors can be hard to disentangle from an algorithmic analysis of information about color. What if we reconsider the “Black and White Mary” thought experiment, substituting severe pain for color? My idea is to challenge the behaviorist claim that experiencing severe pain (or, alternatively, orgasm) can be equated with a set of neurophysiological algorithms. As in the original experiment, there’s the further behaviorist claim that acquiring a comprehensive knowledge of the relevant algorithms can provide the equivalent of experiencing these two phenomena.

So let’s first rewrite the Mary thought experiment, substituting severe electric shock for seeing color. We can imagine Mary as a medical student researching every aspect of our reactions to pain from a neurophysiological point of view. She finally understands all the neural processes that would occur in a normal human being experiencing a severe electric shock. But would this knowledge enable her to feel the same pain as someone actually experiencing a shock? Would any amount of intellectual knowledge of these processes make Mary howl in pain? Before the experience, Mary would have mastered all the algorithms describing how a severe electric shock affects the human nervous system. However, until she is wired up and the power is turned on, she won’t know what a shock actually feels like in analog, energetic terms. Consequently, she would learn something new from actually experiencing the shock.

Turning to the phenomenon of orgasm, we can start with a venerable anti-behaviorist joke. A behaviorist has just had sex with his lover. He says to her, “It was great for you! How was it for me?” In other words, for the behaviorist, orgasm consists entirely of its outward, visible, observable manifestations. It doesn’t include a mental state which can be experienced and reported. Of course, orgasm involves a lot of involuntary physiological processes, but would reading and comprehending a description of them, in itself, induce an orgasm? In reality, it is the intensely pleasurable, subjective feeling of orgasm that motivates people to engage in sexual behavior. As Jaak Panksepp argues in The Archaeology of Mind (2012), behaviorism ignores the affects which come between stimuli and responses. Panksepp also emphasizes that these affects (positive or negative) are the real reinforcers of behavior.

Without sensations, qualia, and affects, human beings would be reduced to the status of “philosophical zombies”: creatures who look and behave exactly like people but have no inner life, no feelings, and no consciousness. But this conclusion may not be of much concern to the scientific and philosophical establishment. Dennett once noted, “We’re all zombies!” In memetic fashion, Susan Blackmore also regards herself, personally, as a philosophical zombie.

Energy, Not Information, Is the Foundation of Reality

So what’s the alternative? According to process ontology, the ultimate constituents of reality are “drops of experience,” rather than “bytes” of information or passive particles. Process philosophers argue that this is more consistent with the findings of the “New Science,” especially quantum mechanics. In effect, the process view gives qualia ontological priority over the algorithms of mathematics. Process ontology can also be termed “pan-experientialism”: “experience” was the term used by both the psychologist William James and the philosopher Alfred North Whitehead to describe the ultimate constituents of reality. However, I personally find the term “experience” unsatisfactory in this context, mainly because it carries too many extraneous connotations.

As a speculative alternative, I’d like to suggest simply taking “energy” as the basis of reality, given that contemporary physics seems to have concluded that the universe is ultimately composed of nothing but energy and information. Claiming energy as the basis of feeling may be regarded as implausible; after all, so much of “common sense” is still dominated by Cartesian ontology. However, energy is like the other “brute facts” of physics, such as time and space: we can measure them, but we have no idea what they really are.

Therefore, defining energy as a capacity for primordial feeling certainly counts as a possibility. Since Einstein’s equation, E = mc², we’ve known that matter consists of highly compressed energy which can be reconverted, as in the sun’s nuclear furnace or our own nuclear weapons. These are clearly sudden and enormously powerful processes, but perhaps life evolved much slower and more subtle methods of unfolding the mysterious nature of energy, or feeling.
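
As a worked illustration of the scale involved (this is standard physics, not anything specific to process philosophy), take c ≈ 3 × 10⁸ m/s and convert a single kilogram of matter entirely into energy:

E = mc² = 1 kg × (3 × 10⁸ m/s)² = 9 × 10¹⁶ joules

That is roughly the output of a one-gigawatt power station running continuously for about three years.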

Let Us Not Mistake Maps for Territories

An important part of process philosophy is Whitehead’s “fallacy of misplaced concreteness,” which consists of mistaking abstractions for actualities. Substituting an abstraction for concrete reality may be useful for certain purposes, but the mainstream tendency to lose touch with the abstraction-reality distinction is a major defect in the current scientific paradigm. As the philosopher David Griffin argues, the assumption that “elementary particles” are adequately defined by the abstractions of physics is simply wrong. He also asserts that this mistake is comparable to the externalist concepts of psychological behaviorism, which have abstracted from human beings almost everything that truly characterizes them: namely, the fact that we are conscious, emotion-experiencing individuals.

Given that mathematics is nothing but an extremely abstract description of energetic processes, the essential mistake of the algorithmic view is to confuse the map with the territory.

To use a more contemporary metaphor: those who claim that computers can generate consciousness should acknowledge that such machines merely simulate conscious behavior and fail to generate conscious experience. After all, computers can also simulate hurricanes, but these simulations won’t blow you over or dampen your clothes!
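
The point can be made painfully literal. Below is a toy “hurricane” of the kind a computer actually runs (my own trivial sketch): a few numbers being updated according to a rule. However long it runs, nothing gets wet:

```python
# A "simulated hurricane": just numbers being updated by a rule.
wind_speed_mph = 74.0  # minimal hurricane strength

for hour in range(1, 4):
    wind_speed_mph *= 1.1  # the storm "intensifies" by 10% each hour
    print(f"hour {hour}: wind at {wind_speed_mph:.0f} mph")

# The program manipulates symbols for wind and rain; it produces neither.
```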

We Need Both Qualia and Algorithms

I’ve been trying to contrast algorithmic information processing with qualic, affectual experience, but I need to emphasize that we have both. This is perhaps most clearly reflected in the fact that we have two separate and distinct perceptual channels.

Nicholas Humphrey addresses this issue in his 1992 book, A History of the Mind. He identifies sensation and perception as completely separate and independent channels between organisms and their environments. For Humphrey, sensations: 

  • Always belong to subjects. They’re owned in a direct and personal way.

  • Are always tied to specific body sites. In other words, sensations are anchored in time and space.

  • Are “modality specific.” They are always experienced as either sight, sound, touch, taste, or smell.

  • Exist always and only in the present tense.

  • Mark the boundary between “me” and “not me.”

In contrast, perceptions:

  • Are not tied to the body or located in time and space.

  • Consist of abstract, impersonal knowledge.

  • Comprise a model of the world, constructed mainly from the “distance” senses—vision and hearing—via algorithms.

Humphrey summarizes these distinctions by saying that sensation tells us “What’s happening to me?” whereas perception tells us “What’s happening out there?” So, essentially, the distinction amounts to the difference between our constructed, conceptual model of the world and the experiential feelings we receive from the world. In other words, it is the difference between how the world “looks” to us, in the sense of mental understanding, and how it feels to us and how we react to it.

From the Algorithmic View to Whitehead’s Process Philosophy

Whitehead also describes two separate sensory modes, in a way quite similar to Humphrey. One of Whitehead’s modes consists essentially of the conventional conception of perception: namely, that the external world is represented in the brain through our sense organs. This corresponds to Humphrey’s “perception,” or “What’s happening out there?” Again, as with Humphrey’s perception, it consists of impersonal, algorithmic information used to construct a model of the world.

Whitehead calls his second, more profound sensory mode “prehension,” describing our direct perception of the feeling-energy of other entities. Prehension is based on our immediate experience of the capacity of other entities to exert causal efficacy on us. Thus, prehension comprises our direct, energetic interaction with the world: in other words, sensations, qualia, feelings, and affects. In this sense, it’s equivalent to Humphrey’s “sensation” (“What’s happening to me?”). For example, if you are struck by a rock, you feel the immediacy of the pain constructed by your nerves and brain, but you experience that pain with, and arising out of, the physical causal energy of the rock striking you. That is, the flying rock carries the physical energy which causes the sensory experience of the impact.

In Whitehead’s prehension mode, affects, rather than algorithms, are predominant. On this point, Whitehead and I agree with the neuroscientist Antonio Damasio’s assertion that the generation of qualia requires subjective emotion. Out of the mass of algorithmically processed sensory input from the environment, only a sensation that provokes an affectual response goes on to become conscious qualia.

Given all this, I agree with Harari’s characterization of the function of art as working upon our emotions. Against this background, we can identify two implications of Whitehead’s ontology: first, that the division between algorithmic science and art is actually greater and deeper than the traditional division between science and religion; second, that Whitehead’s ontology provides a scientific basis for non-organized spirituality.

Feeling (or Energy) Is the Ultimate Reality

I can now summarize my anti-algorithmic position in four parts:

  1. The notion that feelings consist of nothing but algorithms is deeply counter-intuitive. How can a string of information-processing steps, mechanically manipulating passive symbols, in and of itself ever feel like anything?

  2. So, if feelings are not algorithms, then how do they arise? The answer, according to process philosophy, is that a capacity for “experience/feeling” is built into ultimate reality.

  3. I’m also taking a further ontological step by suggesting that energy may be the ultimate basis of reality, accounting for life, feelings, and consciousness alike. I regard energy as a more appropriate phenomenon than experience for this role because a) its ubiquity is more familiar, b) since Einstein, we have known that matter is just another form of energy, and c) as a “brute fact” of physics, the true nature of energy is open to speculation.

  4. The ontology of contemporary science needs radical revision because mathematics is simply a description of energetic processes. Mistaking that description for the processes themselves is an example of “misplaced concreteness”: mathematics does not explain energetic processes “in themselves.” Such an understanding, in addition to successful prediction, is necessary for true science.

Steve Minett

Steve was educated at the universities of Sussex, Oxford, and Minnesota, earning a Ph.D. from the University of Stockholm. In addition to teaching for four years in a study-abroad program at the University of Stockholm, he has had a career in international marketing, working for many multinational companies and eventually setting up his own agency. He has devoted himself to the study of theories of consciousness, teaching the subject for several years at East London’s University of the Third Age and the North London Buddhist Centre.
