theconversation.com

We need to stop pretending AI is intelligent – here’s how

We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word, or fragment of a word, will come next in a sequence – based on the data it’s been trained on.
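The "statistical parrot" idea can be made concrete with a toy model. This is only an illustrative sketch: real LLMs use large neural networks over subword tokens rather than simple bigram counts, but the core operation is the same in spirit – estimate the probability of the next item from training data, then sample. The training text and every name below are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" in place of the oceans of text an LLM sees.
training_text = "the cat sat on the mat and the cat slept on the mat"

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Guess the next word, in proportion to observed frequency."""
    counts = follows[word]
    choices, weights = zip(*counts.items())
    return rng.choices(choices, weights=weights)[0]

# Generate a continuation one guessed word at a time.
word = "the"
sequence = [word]
for _ in range(5):
    word = next_word(word)
    sequence.append(word)
print(" ".join(sequence))
```

The model "writes" only by replaying frequencies it observed; there is no representation of what a cat or a mat is, which is the article's point about understanding versus prediction.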

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

16 comments
  • So why is a real “thinking” AI likely impossible? Because it’s bodiless.

    Why would anything need a body to be intelligent? Just because we have bodies, and whoever said that cannot imagine different forms of life/intelligence? Not that I think current LLMs have the experience and creativity to be called intelligent. I just don't think that everything that's intelligent needs an arse :)

  • What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

    Prove to me that this isn't exactly how the human mind -- i.e., "real intelligence" -- works.

    The challenge with asserting how "real" the intelligence-mimicking behavior of LLMs is lies not in convincing us that it "just" is the result of cold, deterministic statistical algorithms running on silicon. This we know, because we created them that way.

    The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds at surviving, isn't relying on statistical models whose inner principles of working are, essentially, the same.

    All these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate into new contexts knowledge picked up from observation of human knowledge: I just don't see it.

    What human invention, art, or idea was so truly, undeniably, completely new that it cannot have sprung out of something coming before it? Even the bloody theory of general relativity, held as one of the pinnacles of human intelligence, has clear connections to what came before. If you read Einstein's works he is actually very good at explaining how he worked it out in increments from models and ideas ("what happens with a meter stick in space", etc.): i.e., he was very good at using the tools we have to systematically bring our understanding from one domain into another.

    To me, the argument in the linked article reads a bit as "LLM AI cannot be 'intelligence' because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.

    All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.

  • What things are and what the masses choose to call them, and use them for, are usually two different things.

    Asking the masses to understand a complex subject for themselves, and ascribe to it appropriate nomenclature, when all they actually want is something that echoes what they already think with more eloquence is folly.

    Reference: social media - a method to collect personal data from the masses to use against them, which they willingly and greedily supply, without recompense.