Will AIs (artificial intelligence) become conscious?

In my previous posts on consciousness, I noted that Annaka Harris, David Chalmers and Anil Seth all had some discussion in their books of the likelihood that machine minds could become conscious. I’ll start with the philosophers.

Daniel Dennett thinks consciousness is an illusion, so has nothing useful to say. David Chalmers, on the other hand, has a lot to say. In his book The Conscious Mind: In Search of a Fundamental Theory, Chalmers argues for a general principle that consciousness is an organizational invariant, i.e., that “functional organization fully determines conscious experience”. In other words, if a silicon brain is organized identically to a human brain, it will also be conscious. He argues the case for this principle using a thought experiment in which physical components of the brain are replaced one at a time by silicon components. He claims that it makes no sense to think that the consciousness of the brain would either fade as the process continues or disappear at some point.

Given that we currently have no idea how consciousness arises in association with brains, or supervenes on their physical structures, he is basically making a big assumption that the purely abstract computational processes replicated in silicon would be conscious. In other words, he is assuming his conclusion. I see no reason to assume that consciousness would be preserved, in the absence of any understanding of how or why it is associated with biological brains. In another review, Eric Dietrich comments that the principle of organizational invariance is unintuitive and not widely believed among philosophers.

Chalmers also argues that philosophical zombies are logically possible and indistinguishable from conscious humans, and he uses them quite a lot in his analysis of consciousness. Once Chalmers has decided that zombies can be programmed to report that they are conscious and to behave as though they are conscious, all without being conscious, it rather blows his thought experiment for machine consciousness out of the water. Once he has replaced all the brain components with silicon analogues and the silicon brain tells him it is conscious, how can he tell whether that is correct? It could have become a philosophical zombie.

It comes back to the question of how we as external observers would determine that an AI system is conscious, which Annaka Harris focuses on in her book Conscious: A Brief Guide to the Fundamental Mystery of the Mind. She reminds us that we know an organism can be conscious without that being detectable from the outside, for example in locked-in syndrome or anaesthesia awareness. Similarly, an advanced computer system could be conscious and have no way of convincingly communicating that. At the same time, our intuitions about which organisms can be conscious depend entirely on behaviours, but such behaviours can occur in organisms we do not think are conscious and can be programmed into machines that are not actually conscious. Complex behaviour does not necessarily shed light on whether a system is conscious or not.

Of course, humans are the one exception to this. While in principle we cannot tell whether another human is conscious, particularly if we accept the possibility of zombies, it makes no sense to believe that I am the single human in existence who is conscious and everyone else is a zombie. Given that I share the genetics and physiology, including nervous system and brain structure, common to humans, Occam’s Razor alone would require us to conclude that most of us are conscious in much the same way. Beyond that, we almost certainly have evolved an instinctive theory of mind which attributes consciousness to others in order to foster group cohesion and survival.

Understanding whether an advanced AI is conscious may not be possible from observation of its behaviour and responses. It could be a philosophical zombie very adept at convincing the humans who interact with it that it is conscious. After all, one Google engineer, Blake Lemoine, was sacked after he became convinced that Google’s large language model LaMDA was sentient. Reading a transcript of their conversation makes it clear how hard it might be to determine whether a computer program is conscious. Note: it seems the transcript was actually edited by Lemoine, so it is hard to know whether that editing has enhanced the impression of sentience.

Annaka Harris makes the point that understanding whether or not advanced AI is conscious is as important as any other moral question. You have an ethical obligation to call an ambulance if you find your neighbour critically injured, and you would suddenly have similar obligations toward artificially intelligent beings if you knew they were conscious.

For a neuroscientist whose book paid far too much attention to pretend-physics explanations of consciousness as phi, a measure of the integrated information in a system, Anil Seth has a surprisingly good discussion of machine consciousness in its final chapter, Chapter 13. He is agnostic about, and suspicious of, functionalism, and argues that functional equivalence is not sufficient to ensure consciousness.

I would go further than Seth in arguing that functional equivalence is not enough to guarantee consciousness. Even in the case where every structure in the brain is replicated in silicon, at least as far as information processing goes, it may be that there are aspects of brain function not directly related to information processing that are actually necessary for consciousness. Or perhaps some version of panpsychism is true, and more elementary atoms and molecules contain traces of consciousness. Consciousness may be emergent in the sense that the brain somehow allows these traces to connect with each other and with brain functions such as memory and the sense of self. There is no reason to assume that silicon-based brain parts would have the same degree of consciousness as carbon-based brain parts, or that silicon-consciousness could be woven together, unless we understood how the “weave” occurs.

Seth goes on to argue that the existential threat of AI arises mainly from its “intelligence”, not from any possibility of sentience. Handing over more and more decision-making to non-sentient AIs is sufficient to cause a Terminator-like scenario. After all, the unintended consequences of very low-level algorithms in social media are already quite extreme and have played a significant role in the weakening of democratic governments and the rise of unfounded conspiracy theories and extremism.

Sam Harris has discussed the possibility of machine intelligence in a number of podcasts and videos available on YouTube. In addition to the points already discussed above, he points out that many humans will attribute consciousness to AIs that can converse and behave like conscious humans, whether they are conscious or not. He is sceptical that AI consciousness will be achieved, but says that is irrelevant, since humans will assume they are conscious and behave accordingly. Given that the primitive chatbots we have now are already fooling some people (see the discussion of LaMDA above), I think he is probably right.

In conclusion, without even a minimal understanding of how consciousness occurs in humans, we have no way to design intelligent machines that are conscious, or, if consciousness occurs spontaneously in them, to determine that they are conscious. Regarding the latter issue, the only real solution is to determine what causes consciousness and then ensure, or establish, that those causes are built into the machine. If that is the case, and the machine also behaves as if conscious and tells us convincingly that it is, then we would be justified in accepting that it is conscious.

I think we can provisionally conclude that machines designed to be intelligent or super-intelligent are very unlikely to become sentient. But if we ever gain a real understanding of the causes of consciousness and explicitly build a machine to be not only intelligent but also conscious, then maybe such a machine could indeed become conscious.

2 thoughts on “Will AIs (artificial intelligence) become conscious?”

  1. It seems to me that Chalmers’ silicon replacement thought experiment relies on the assumption that it is actually possible to replace part(s) of the brain with computer components and have it still function. I’m not convinced this is true. I agree that with our current understanding of neuroscience there is good reason to be skeptical about whether consciousness must be meat-based.
