Eliezer Yudkowsky Thinks Chickens (and Babies) Aren’t Conscious and I Know Why
(I think)
Yudkowsky is in the spotlight again. This time it’s because of his take that chickens don’t have qualia.
I did a quick search and he has also said he’d be “more shocked to discover that a newborn baby was sentient than that a cow was sentient.”
That’s unfair, thought I. He said that in 2015! But no. Here he is arguing newborns aren’t conscious at Manifest 2023.
…
That didn’t make any sense to me, so I spent the whole day trying to puzzle it out.
Surprisingly, his position is coherent.
I don’t think it’s plausible. But it is coherent.
Definitions
People often use the same word to mean different things and end up talking past one another. To avoid that, I want to define the key terms up front:
Qualia — the raw “what-it’s-like” feel of experience. The redness of 🔴, the sting of pain, the saltiness of salt.
Sentience / Consciousness — basically synonyms for the capacity to have any experience at all.
Those are the intuitive, everyday philosophical meanings.
To understand the puzzle, though, we need a more precise distinction.
In 1995, the philosopher Ned Block introduced a distinction that has become foundational in the philosophy of mind. It is the distinction between:
Phenomenal Consciousness
Phenomenal consciousness is experience; what makes a state phenomenally conscious is that there is something ‘it is like’ to be in that state.
That is, in Block’s terminology, Phenomenal consciousness (P-consciousness) is the state of having qualia. He distinguishes it from:
Access Consciousness
Information that is available for reasoning, reporting, self-awareness, and cognitive control.
The point of his paper was that these are two different things. That, in principle, you could have:
P without A — for example, a rich visual impression you momentarily experience but cannot fully report, as in Sperling’s iconic-memory experiments.
A without P — a “philosophical zombie” or a blindsight patient.
Armed with these distinctions, we’re ready to understand.
What Yudkowsky Actually Thinks
Yudkowsky, in short, collapses the distinction above.
And that is the whole mystery.
After looking over the transcripts, posts, and videos, I think that Yudkowsky’s belief is that
phenomenal consciousness = access consciousness.
Or, no P without A.
He thinks you don’t get to have a “what-it’s-like” unless you can reflect on your own mental states.
In other words, he thinks that:
Conscious experience only arises when the brain runs a sophisticated, self-referential, cognitive algorithm.
From that one belief follow all the wild bullets he bites.
The Wild Consequences (Which He Embraces!)
If phenomenal consciousness requires access consciousness, then:
Chickens don’t have it → they lack the reflective machinery
Newborns probably don’t have it → the architecture isn’t online
Many mammals don’t have it → insufficient self-modeling
Humans may sometimes lose it → flow states
Even an adult human may only have it when the “self-awareness module” is running
These are Yudkowsky’s actual views.
So babies (and other mammals) could have nociceptive pain signals that produce the reactions we see, withdrawing or shrieking, without having phenomenal pain. For Yudkowsky, pain is phenomenal—that is, experienced—only if it is reflectively accessible, and the machinery for reflective access is not yet operational in newborns.
So What’s the Actual Disagreement?
After mapping all of this out, the answer feels disappointingly simple.
The actual disagreement is not about chickens or babies. The disagreement is about what phenomenal consciousness requires.
For most people, intuitively:
P-consciousness = the base layer
A-consciousness = optional cognitive add-on
Animals and babies have P even if they lack A.
But, for Yudkowsky:
No P without A
No “what-it’s-like” without advanced self-modeling
Therefore chickens, cows, newborns → no consciousness, no “what-it’s-like”
It all follows as a consequence of his metaphysical theory of consciousness.
A wild one, but not an incoherent one.
Steelmanning Yudkowsky
I think this is the clearest way to visualize the disagreement:
Is the refrigerator light always on? Or only when you look?
In the default view the fridge’s light is always on. Opening the door just lets you notice that it is, indeed, on.
In Yudkowsky’s view, the light is off by default. It only turns on when you look, which then makes you wrongly think it has been on the whole time.
For Yudkowsky, phenomenal experience is just like that fridge light: it isn’t there by default; looking for it is what generates it.
If the self-modeling machinery is there, then it’s on. If it’s not, then it’s off. Beings that don’t have the necessary cognitive structures to do the required self-modeling can’t have phenomenal experience.
Simple as. [1]
P.S.: It took me the whole day to puzzle this out. Not sure this was a great use of my time. But, speaking of bad uses of one’s time, if anyone wants to figure out whatever is happening here and here and write an explainer too, I’d be grateful.
[1] Yudkowsky’s position is, as I understand it, a higher-order theory of consciousness. I wish he’d clean up his terms and just call it that. Higher-order theories are a minority position in philosophy of mind. His specific higher-order theory, on which he gives “serious consideration” to the possibility that he is conscious only when he wonders whether or not he is conscious, would be an even more niche minority view. (But still logically coherent!)

I think the motivation for these beliefs is a disbelief in magic, and a belief that somehow, consciousness probably arises from the computation the brain is doing. Otherwise, you have to either believe in souls, or be a panpsychist who thinks gravel is conscious.
If consciousness is computational in nature, then you have to ask what kinds of computations lead to consciousness. Probably very simple computations do not; otherwise your calculator is conscious, and, oops, the gravel is conscious again, because physical reality is performing simple computations all the time.
So you're left with only the possibility that some special kind of complex computation leads to consciousness. Probably something related to recursive processing, or global workspace theory, or something like that. Yudkowsky's beliefs are all downstream from that.
By the same logic, I also think you maybe aren't conscious when you dream, though your memories of having dreamt are experienced while conscious.