We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
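As a rough illustration (a toy sketch, not how any particular model is actually implemented), that guessing amounts to picking the most probable next token from a learned probability distribution. The probabilities below are invented for the example; a real model scores tens of thousands of tokens with a trained neural network.

    # Toy sketch of next-token prediction (invented probabilities, illustrative only).
    next_token_probs = {          # continuations for "The cat sat on the ..."
        "mat": 0.46,
        "sofa": 0.21,
        "roof": 0.12,
        "moon": 0.03,
    }

    # Greedy decoding: emit whichever token the model scores as most likely.
    best_token = max(next_token_probs, key=next_token_probs.get)
    print(best_token)  # -> "mat"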

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (the machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

  • pachrist@lemmy.world · +6/−16 · 9 hours ago

    As someone who’s had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn’t, or particularly can’t be, sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don’t have kids. The other side usually does.

    When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age at which children gain sentience, but my year-and-a-half-old daughter is building something very like an LLM: pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day that his friend’s dad was in jail for threatening her mom. That was true, but it looked to me like another hallucination, or more likely a misunderstanding.

    And as funny as it would be to argue that they’re both sapient, but not sentient, I don’t think that’s the case. I think you can make the case that without true volition, AI is sentient but not sapient. I’d love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

    • fodor@lemmy.zip · +3 · 4 hours ago

      You might consider reading Turing or Searle. They did a great job of addressing the concerns you’re trying to raise here. And rebutting the obvious ones, too.

      Anyway, you’ve just shifted the definitional question from “AI” to “sentience”. Not only might that be unreasonable, because perhaps a thing can be intelligent without being sentient, but it’s also no closer to a solid answer to the original issue.

    • TheodorAlforno@feddit.org · +7/−1 · 5 hours ago

      You’re drawing the wrong conclusions. Intelligent beings have concepts with which to validate knowledge. When converting seconds to days, we have a formula that we apply. An LLM just guesses and has no way to verify the result. And it’s like that for everything.

      An example: Perplexity tells me that 9,876,543,210 seconds are 114,305.12 days. A calculator tells me it’s 114,311.84. Perplexity even tells me how to calculate it, but it has neither the ability to calculate it nor to verify it.
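      For reference, a quick sanity check (plain Python, any calculator would do) reproduces the calculator’s figure using the standard 86,400-seconds-per-day conversion:

        # Seconds to days: 60 s/min * 60 min/h * 24 h/day = 86,400 s/day
        seconds = 9_876_543_210
        days = seconds / 86_400
        print(f"{days:,.2f}")  # 114,311.84 -- matches the calculator, not Perplexity's 114,305.12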

      Same goes for everything. It guesses without being able to grasp the underlying concepts.

    • joel_feila@lemmy.world · +9 · 6 hours ago

      Not to get philosophical, but to answer you we first need to answer what “sentient” means.

      Is it just observable behavior? If so, then wouldn’t Kermit the Frog be sentient?

      Or does sentience require something more, maybe qualia or some other subjective quality?

      If your son says “Dad, I’ve got to go potty”, is that him just using an LLM-like process to learn that those words equal going to the bathroom? Or is he doing something more?

    • terrific@lemmy.ml · +25/−1 · 8 hours ago

      I’m a computer scientist who has a child, and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower, which is something that I don’t see in AI.

      I left a well-paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

      The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.

      • jpeps@lemmy.world · +7 · 7 hours ago

        Couldn’t agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don’t think this is one of them.

        Kids are certainly building a vocabulary and learning about the world, but LLMs don’t learn.

        • stephen01king@lemmy.zip · +1/−2 · 3 hours ago

          LLMs don’t learn because we don’t let them, not because they can’t. It would be too expensive to re-train them on every interaction.

          • terrific@lemmy.ml · +3/−1 · 2 hours ago

            I know it’s part of the AI jargon, but using the word “learning” to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
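            To make the comparison concrete, here is a minimal, illustrative sketch (toy Python, one weight instead of billions) of what that kind of “learning” amounts to: nudging numbers downhill on a loss function by gradient descent.

              # Toy "learning": fit w so that y ≈ w * x, by gradient descent on squared error.
              data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the true relationship is y = 2x
              w, lr = 0.0, 0.05                            # single weight and learning rate

              for step in range(200):
                  # Gradient of the loss L = mean((w*x - y)^2) with respect to w
                  grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
                  w -= lr * grad                           # the entire "learning" step

              print(round(w, 3))  # converges to ≈ 2.0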

            • stephen01king@lemmy.zip · +1/−2 · 2 hours ago

              But that’s exactly how we learn stuff as well. Artificial neural networks are modelled after how our neurons affect each other while we learn and store memories.

              • terrific@lemmy.ml · +2/−1 · 1 hour ago

                Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

                I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.

                AI hype-people will say that the brain is “like a neural network”, but I really doubt that. There is no loss function in reality, and certainly no way for the brain to perform gradient descent.

    • Russ@bitforged.space · +2/−1 · 7 hours ago

      Your son and daughter will continue to learn new things as they grow up; an LLM cannot learn new things on its own. Sure, it can repeat things back to you that are within the context window (and even then, a context window isn’t really inherent to an LLM – it’s just a window of prior information being fed back to it with each request/response, or “turn” as I believe is the term), and what is in the context window can even influence its responses. But in order for an LLM to “learn” something, it needs to be retrained with that information included in the dataset.
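      As a rough sketch of that point (illustrative Python, with call_model as a hypothetical stand-in for any LLM API): the apparent “memory” is just earlier turns being re-sent as part of the prompt, while the model’s weights never change between requests.

        # Sketch: a chat "remembers" only because prior turns are re-sent each time.
        # call_model is a hypothetical placeholder for any LLM API; its weights stay frozen.
        def call_model(prompt: str) -> str:
            return "..."  # stand-in for the model's generated reply

        history: list[str] = []               # the so-called context window

        def chat(user_message: str) -> str:
            history.append(f"User: {user_message}")
            prompt = "\n".join(history)       # every prior turn fed back in, per request
            reply = call_model(prompt)
            history.append(f"Assistant: {reply}")
            return reply                      # nothing was learned; only the prompt grew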

      Whereas if your kids were to, say, touch a sharp object that caused them even slight discomfort, they would eventually learn to stop doing that, because they’ll know what the outcome is after repetition. You could argue that this looks similar to the training process of an LLM, but the difference is that an LLM cannot do this on its own (and I would not consider wiring up an LLM via an MCP to a script that can trigger a re-train + reload to be it acting of its own volition). At least, not in our current day. If anything, I think this is more of a “smoking gun” than the argument that “LLMs are just guessing the next best letter/word in a given sequence”.

      Don’t get me wrong, I’m not someone who completely hates LLMs / “modern day AI” (though I do hate a lot of the ways it is used, and agree with a lot of the moral problems behind it). I find the tech intriguing, but it’s a (“very fancy”) simulation. It is designed to imitate sentience and other human-like behavior. That, along with human nature’s tendency to anthropomorphize the things around us (which is really the biggest part of this, IMO), is why it tends to be very convincing at times.

      That is my take on it, at least. I’m not a psychologist/psychiatrist or philosopher.