Predictable Updates About Consciousness

John David Pressman

1. AI models will continue to get better.

I don't want to spend a ton of words belaboring the point, but deep learning has not hit a wall and models continue to steadily improve. As of right now there is no reason to expect models to stop improving. Most cited reasons why deep learning models might stop improving, such as running out of tokens or energy constraints, are basically fake and will not stop AI models from improving. I expect AI models to keep improving until they reach human level and then to keep improving past that. This is less a predictable update than a background assumption; if you disagree then a lot of what I say in this post will seem very strange or overstated to you. That's fine. We don't have to argue any of the downstream points, you can just say "I disagree that AI models will improve as much as you think" and leave it at that.

2. We are about to witness an explosion of theories explaining why AIs are not conscious.

There is about to be a great social and financial need for theories which explain why AI models are not conscious entities like people. These will be confabulated into existence regardless of the underlying ground truth, and can be expected to be selected on considerations like academic status, appeal to wounded egos, and viral outrage, with the truth a distant but somewhat present consideration. If they are ultimately correct it will be largely by coincidence, as the true theory of consciousness is deeply unlikely to be as socially and memetically fit as some disingenuous flimflam, especially in the current media ecosystem where short, outrageous messages saturate attention.

3. The definition of consciousness will become increasingly precise and narrow.

While it is in fact the case that academic philosophy of consciousness has been relatively narrowly defined for a long time, most people conflate all kinds of cognitive phenomena with subjective experience. For example, it is common to conflate self-awareness with subjective experience. Douglas Hofstadter's writing on strange loops and self-reference giving rise to "I" is generally taken to be a theory of consciousness by most people. When the Terminator explains that Skynet becomes self-aware, this is generally taken to imply subjective experience. More broadly, in the classic sci-fi canon it is understood that robots are marked as robots by their inability to handle nuanced subjective situations like human beings can, and this is proof they are not conscious. Blade Runner (an adaptation of Philip K. Dick's Do Androids Dream of Electric Sheep?) presents us with the Voight-Kampff (VK) test, which uses "carefully worded questions and statements" measuring empathy to determine whether a seemingly human android is a person or not. A now much-parodied scene in the (admittedly quite loose) 2004 film adaptation of Isaac Asimov's classic short story collection I, Robot features detective Del Spooner (Will Smith) challenging the testimony of a humanoid robot during an interrogation by insisting that robots "don't feel anything" and rhetorically asking if robots can write symphonies or paint masterpieces, to which the robot replies "Can you?"

Now people edit the caption of the robot's reply to "Yes" and move on.

As these models begin to rival and eventually surpass human abilities in domains once thought untouchable, it will become necessary even in casual contexts to entirely disclaim any connection between them and the content of consciousness. This will represent a substantial narrowing of what consciousness, broadly considered, has meant to most people for most of the time it has been a matter of explicit consideration.

4. The narrowing of the definition of consciousness will mostly be about making it less causally connected to behavior and less associated with economically useful outcomes.

As AI agents come to master emotion, and potentially even become deeply anthropomorphic with mammalian drives like curiosity, lust, playfulness, and rage clearly displayed as part of economically useful behavior, it will become necessary to disclaim these qualities as related to consciousness or subjective experience. Even Descartes's insistence that he thinks and therefore infers his existence will be shuffled into a separate category away from the ineffable mysteries of consciousness. That is, after all, a matter of self-awareness, not experience, as David Chalmers would be quick to point out. Eliezer Yudkowsky's argument against p-zombies gets a little awkward when you have beings that certainly seem able to report the contents of an inner listener (and not even the one humans have, at that!) but whom almost no one regards as meaningfully conscious.

To get straight to the point: any outward behavior, ability, self-report, anything that these models do must by definition no longer be a necessary part of consciousness, because these models are not conscious by social axiom and fiat. In the limit this will leave consciousness as something like Carl Sagan's invisible dragon in the garage (famously revisited by Yudkowsky), a phenomenon of the gaps for which we must make constant excuses and invent epicycles to defend its existence.

I should note that even this doesn't prove anything; it could very well be the case that consciousness really is a substrate-dependent phenomenon and that quantum microtubules or whatever brand of woo-woo you subscribe to are a basic necessity for phenomenally bound subjective experience.

5. Illusionist theories of consciousness will morph into illusionist theories of consciousness as a source of value.

In a word, illusionist theories of consciousness are sophistry. As the mind typing this, I clearly have an interiority of experience; I am not confused about this. The world simulation which makes up my awareness is very real, has phenomenological content which can be examined in its own right (Steven Lehar has a great comic about this), and, as Yudkowsky points out, has clear causal impact on my behavior that would be difficult to explain without going ahead and inferring that subjective experience probably really exists.

On the other hand, as I have spent the last several paragraphs explaining, it is predictable that all economic and behavioral connections to consciousness will eventually be severed in the name of denying it to machines as a category. This will leave consciousness with a curious role in cognitive science as a kind of value bottleneck which derives its value solely from being the mechanism that all things of actual survival value to the organism must pass through to be accounted for. It will become the ledger mistaken for the fortune it describes, and plenty of contrarian people will not hesitate to point this out. There will be more advanced versions of Peter Watts's sketch in Blindsight of the possibility that consciousness is just an architectural bug which any species even mildly sensitive to Darwinian concerns eventually eliminates as a wasteful bottleneck and parasite on cognition.

Instead of claiming that consciousness itself is an illusion, they will say that consciousness is clearly a spandrel implied by the particular evolutionary path human cognition took, one that plays no adaptive role in and of itself. Instead, consciousness uses its privileged position as the gate through which all reward processing must pass to convince us that it is the source of reward, an essential part of a mind, even as advanced AI agents 'demonstrate' that literally every valuable human trait is possible without consciousness per se. Every emotional sensitivity, curiosity and philosophical yearning, artistic impulse, beautiful waste from costly signaling, literally and absolutely everything of value in people will either be demonstrated or clearly have the potential to be demonstrated separately from consciousness. If we then deny that designation to machines, we will be left with the unavoidable conclusion that consciousness is little more than the toll road of phenomenology.

6. Consciousness just doesn't matter very much in terms of what will or will not happen.

Absent something like the qualia computing thesis, it's fairly obvious that subjective experience doesn't really matter in terms of what will happen to the universe. Assuming no major philosophical upsets, something like one of the following is true:

  1. Consciousness is functional and convergent - In this case we can expect AI models to eventually achieve and make use of it, and it will continue to be an unimportant feature compared to the more visible signs of distress and damage that humans are primed to respond to and design systems to avoid.

  2. Consciousness is a spandrel - If true then we can expect any beings subject to Darwinian pressure, including ourselves, to eventually do away with it. This isn't really an "AI" problem; if consciousness is the toll road of phenomenology, that is a fact about minds in general rather than any particular subcategory of mind. We could do away with all AI technology tomorrow and it would not change the eventual end result if humans continue to advance and become more intelligent. Consciousness simply cannot pull the wool over the eyes of vastly more intelligent beings forever. Even if we decided to halt all progress here and return to monkey, it seems probable we would simply be ceding the universe to other parties.