The author engages in magical thinking. Our consciousness is a product of the physical processes occurring within our brains; specifically, it's encoded in the patterns of neuron firings. We know this because we can observe a direct impact on conscious experience when the brain is stimulated. For example, a few milligrams of a psychedelic drug can profoundly change the conscious experience.
Given that the conscious process is the result of an underlying physical process, it follows that these firing patterns could be expressed on a different substrate. There is absolutely no basis for the notion that an AI could not be conscious. In fact, there's every reason to believe that it would be if its underlying patterns mirrored those of a biological brain.
The author appears to be focused on the transformer architecture, and in that regard I see nothing wrong with their argument. The way I see it, the important point is not that it's absolutely impossible for an LLM to have anything we'd call consciousness, but that proving it does is not as simple as running some tests we associate, on a surface level, with "intelligence", declaring "it passed", and then arguing that this means it's conscious.
I think it's critically important to remember that LLMs are essentially designed to be good actors, i.e. everything revolves around the idea that they can convincingly carry on a human-like conversation, which is not to be confused with having actual human characteristics. Even if a GPU (and that's what it would be, if we're supposing a physical origin, because that's what LLMs run on) somehow had some form of consciousness arising in it while a model is running on it:
It would have no human-like physical characteristics, and so no way to genuinely relate to our experiences.
It would still have no long-term, evolving memory, just a static model that gets loaded occasionally and has inference run against it (see the sketch below).
If it were capable of experiencing anything physical akin to suffering, that would likely be located in the wear and tear of the GPU, but even this seems like a stretch, because a GPU does not have a sensory brain-body connection the way a human, or many animals, do. So there is no reason to think it would understand or share in human kinds of suffering, because it has no basis for doing so.
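To make the "no evolving memory" point above concrete, here is a minimal Python sketch of how an LLM session works in broad strokes. The model and the loading/inference functions are hypothetical stand-ins, not any real library's API; the point is only that the weights never change during use and the conversation context disappears when the session ends.

    # Hypothetical stand-ins, not a real library's API.
    def load_model():
        # In reality this would read a file of trained parameters that never
        # change during inference.
        return {"greeting": "Hello!"}

    def run_inference(weights, context):
        # Stand-in for a forward pass: reads the weights, never writes them.
        return weights["greeting"]

    weights = load_model()

    def chat_session(user_messages):
        context = []                    # exists only for this session
        for msg in user_messages:
            context.append(("user", msg))
            context.append(("model", run_inference(weights, context)))
        return context                  # discarded when the session ends;
                                        # nothing is ever written back to the weights

    print(chat_session(["Hi", "Do you remember me?"]))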
With all this in mind, it would likely need its own developed language just to begin trying to communicate what its experiences actually are. A language built by and for humans doesn't seem sufficient. And that's not happening if it can't remember anything from session to session.
Even if it could develop its own language, humans trying to translate it would probably be something like trying to observe and understand the behavior of ants. And anything it said confidently in plain English, such as "I am a conscious AI", would be all but useless as information, since it's trained on exactly that kind of material, and being able to regurgitate it is part of what it was built to do.
Now, if we were talking about an android-style AI with an artificial brain and mechanical nerve endings, designed to mimic many aspects of human biology as well as the brain itself, I'd be much more in the camp of "yeah, consciousness is not only possible but increasingly likely the closer we get to a strict imitation in every possible facet." But LLMs have virtually nothing in common with humans. The neural-network part is, I suppose, imitative of our current understanding of human neurons, but only on a vaguely mechanical level. It's not as though LLMs are a recreation of the biology, built with a full understanding of the brain behind them. Computers just aren't built the same way, fundamentally, so even an imitation attempted with full information would not be the same thing.
I think we very much agree here. In the strict context of LLMs, I don't think they're conscious either. At best it's like a Boltzmann brain that briefly springs into existence. I think consciousness requires a sort of recursive quality where the system models itself as part of its own world model, creating a kind of resonance. I'm personally very partial to the argument Hofstadter makes in I Am a Strange Loop regarding the nature of the phenomenon.
That said, we can already see LLMs being combined with things like symbolic logic in neurosymbolic systems, or with reinforcement learning in the case of DeepSeek. It's highly likely that LLMs will end up being just one piece of the puzzle in future AI systems. It's an algorithm that does a particular thing well, but it's not sufficient on its own. We're also seeing these things being applied to robotics, and I expect that's where we may see genuinely conscious systems emerge. Robots create a world model of their environment, and they have to model themselves as an actor within that environment. The internal reasoning model may end up producing a form of conscious experience as a result.
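To make the self-modelling idea slightly more concrete, here is a toy Python sketch, with entirely hypothetical names and no relation to any actual robotics stack, of an agent whose world model contains a representation of the agent itself, which its planning then consults:

    from dataclasses import dataclass, field

    @dataclass
    class WorldModel:
        objects: dict = field(default_factory=dict)      # beliefs about external things
        self_state: dict = field(default_factory=dict)   # beliefs about the agent itself

    class Agent:
        def __init__(self):
            self.world = WorldModel()

        def observe(self, sensor_reading):
            # Update beliefs about the environment and about the agent's own body.
            self.world.objects.update(sensor_reading.get("objects", {}))
            self.world.self_state.update(sensor_reading.get("proprioception", {}))

        def plan(self, goal):
            # Planning reasons over a model that includes the agent itself,
            # which is the recursive, self-referential quality described above.
            if self.world.self_state.get("battery", 1.0) < 0.2:
                return "recharge"
            return "move_toward({})".format(goal)

    agent = Agent()
    agent.observe({"objects": {"door": "closed"}, "proprioception": {"battery": 0.15}})
    print(agent.plan("door"))   # "recharge": a decision driven by the self-model

Whether that kind of loop amounts to anything like experience is exactly the open question, but it at least shows where the self-reference would live architecturally.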
I do think that from an ethics perspective, we should err on the side of caution with these things. If we can't prove that something is conscious one way or the other, but we have a basis to suspect that it may be, then we should probably treat it as such. Sadly, given how we treat other living beings on this planet, I have very little hope that the way we treat AIs will resemble anything remotely ethical.
specifically it’s encoded in the patterns of neuron firings.
Look, if you could prove this, you would solve a lot of problems in neuroscience and philosophy of mind. Unfortunately, that doesn't seem to be the case, or at least we don't know enough about what's going on in our brains to unequivocally state what you're stating.
The fact that our consciousness can be mapped onto physical states doesn't mean it can be reduced to them. You can map the movement of the sun with a sundial and the shadow it casts, but there's no giant ball of ongoing nuclear fusion in any shadow, even though one requires the other.
That's precisely what it means, actually. Consciousness is a direct byproduct of physical activity in the brain; it doesn't come from some magic dimension. Meanwhile, your analogy makes a huge assumption: that high-level patterns are inherently dependent on the underlying complexity of the substrate. There is no evidence to support this notion. For example, while our computers don't work the same way our brains do, silicon chips are physical things made of complex materials, subject to quantum effects, and so on. Yet none of that underlying complexity is relevant to the software running on those chips. How do we know this? Because we can make a virtual machine that implements the patterns expressed on the chip without modelling all the physical workings of the chip. Similarly, there is zero basis to believe that the high-level patterns within the brain that we perceive as consciousness are inherently tied to the physical substrate of neurons and their internal complexity.
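A minimal Python sketch of that virtual-machine point, using an instruction set invented purely for illustration: the outcome of the computation depends only on the program (the pattern), not on whether it runs natively or inside an interpreter that models none of the chip's physics.

    # The instruction set here is invented purely for this example.
    def run_vm(program, x):
        # Interpret a list of (op, arg) instructions acting on one register.
        reg = x
        for op, arg in program:
            if op == "add":
                reg += arg
            elif op == "mul":
                reg *= arg
        return reg

    def native(x):
        # The same pattern expressed directly in the host language.
        return (x + 3) * 2

    program = [("add", 3), ("mul", 2)]

    # Identical results, even though one runs "directly" and the other runs inside
    # an interpreter that knows nothing about transistors or quantum effects.
    assert all(run_vm(program, x) == native(x) for x in range(10))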
Furthermore, from an ethical and moral point of view, we would absolutely have to give an AI that claims to be conscious the benefit of the doubt, unless we could prove that it was not conscious.