We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
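
As a toy illustration, here is that mechanism stripped to its core: a bigram model over an eleven-word “corpus”. Real systems use subword tokens and deep neural networks rather than a lookup table, but generation is the same loop of guessing the next token from observed frequencies.

```python
import random
from collections import Counter, defaultdict

# "Training": count which word follows which in the corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Guess the next word, weighted by how often it followed prev."""
    options = counts[prev]
    if not options:  # dead end: this word was never followed by anything
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generation" is nothing more than repeating that guess.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the cat ate"
```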

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • benni@lemmy.world · 2 hours ago

    I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

    • undeffeined@lemmy.ml · 7 minutes ago

      I make a point of always referring to it as an LLM, exactly to make the point that it’s not an intelligence.

  • fodor@lemmy.zip · 4 hours ago

    Mind your pronouns, my dear. “We” don’t do that shit because we know better.

  • Bogasse@lemmy.ml · 7 hours ago

    The idea that RAG “extends their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
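
    Roughly, in code (a hedged toy sketch, not any real framework’s API; the documents and names here are invented):

    ```python
    import re

    # The "knowledge base" is just a pile of searchable documents.
    documents = [
        "Our refund policy allows returns within 30 days.",
        "Support is available weekdays from 9 to 17 CET.",
    ]

    def tokens(s):
        return set(re.findall(r"\w+", s.lower()))

    def search(query):
        # Stand-in for retrieval; real systems use BM25 or vector
        # similarity, but it's still just a search engine.
        return max(documents, key=lambda d: len(tokens(query) & tokens(d)))

    def rag_answer(query, llm):
        # The "memory" is search hits pasted into the prompt;
        # the chatbot is merely the interface to the results.
        context = search(query)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return llm(prompt)

    print(search("When can I get a refund?"))  # finds the refund policy doc
    ```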

  • aceshigh@lemmy.world · 2 hours ago

    I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

    E: I use it to give me ideas that I then test out solo.

    • PushButton@lemmy.world · 5 hours ago

      That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

      • SkyeStarfall@lemmy.blahaj.zone · 19 minutes ago

        I mean, sure, but that’s really easier said than done. Good luck getting good mental healthcare for cheap in the vast majority of places

    • Snapz@lemmy.world · 7 hours ago

      This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you’re actually saying that you aren’t an expert on yourself, and therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

      • aceshigh@lemmy.world · 2 hours ago

        I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

      • Liberteez@lemm.ee · 6 hours ago

        I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer

  • bbb@sh.itjust.works · 13 hours ago

    This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.

    • JackbyDev@programming.dev · 7 hours ago

      Asking a question and then immediately answering it? That’s AI-speak.

      HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

    • sobchak@programming.dev · 13 hours ago

      And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

      • bbb@sh.itjust.works · 10 hours ago

        “…” (Unicode U+2026 Horizontal Ellipsis) instead of “…” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

        Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
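
        For what it’s worth, the difference is a single code point, which is easy to check (a quick Python sketch):

        ```python
        dots = "..."          # three full stops
        ellipsis = "\u2026"   # the single Horizontal Ellipsis character
        print(len(dots), len(ellipsis))  # 3 1
        print(hex(ord(ellipsis)))        # 0x2026
        print(dots == ellipsis)          # False
        ```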

        • Mr. Satan@lemmy.zip · 7 hours ago

          Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

          However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

          • Sternhammer@aussie.zone · 5 hours ago

            I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

            The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.

            • Mr. Satan@lemmy.zip · 3 hours ago

              My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

        • sqgl@sh.itjust.works · 9 hours ago

          Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

          Not on my phone it didn’t. It looks as you intended it.

  • Sorgan71@lemmy.world · 5 hours ago

    The machinery needed for human thought is certainly a part of AI. At most you can only claim it’s not intelligent because intelligence is a specifically human trait.

    • Zacryon@feddit.org · 5 hours ago

      We don’t even have a clear definition of what “intelligence” is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.

  • psycho_driver@lemmy.world · 12 hours ago

    Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal, trying to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700, and it now says I’m paid in full for the six-month period. It’s been days now with no follow-up... I’m pretty sure AI snuck that one through for me.
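
    For anyone wondering how a NaN gets that far, here’s a hedged sketch (all numbers invented): NaN poisons every calculation it touches and fails every comparison, so naive range checks wave it through.

    ```python
    base_rate = float("nan")        # e.g. a missing rate-table entry upstream
    premium = base_rate * 1.2 + 50  # NaN propagates through all arithmetic
    print(premium)                  # nan

    if premium > 700:
        print("charge full price")      # never fires
    if premium < 700:
        print("charge reduced price")   # never fires either
    # A checkout flow that validates only with comparisons like these
    # can pass a NaN quote straight through to "paid in full".
    ```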

    • laranis@lemmy.zip · 11 hours ago

      Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.

  • mechoman444@lemmy.world · 15 hours ago

    In that case let’s stop calling it ai, because it isn’t and use it’s correct abbreviation: llm.

      • warbond@lemmy.world · 13 hours ago

        Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

        I wonder how different it’ll be in 500 years.

        • HugeNerd@lemmy.ca · 8 hours ago

          It’s called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can’t write for beans.

          • JackbyDev@programming.dev · 7 hours ago

            Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.
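
            A minimal illustration of the CS sense, for anyone curious:

            ```python
            class Cat:
                def speak(self) -> str:
                    return "meow"

            class Dog:
                def speak(self) -> str:
                    return "woof"

            # One call site, different behavior depending on the object's type.
            for animal in (Cat(), Dog()):
                print(animal.speak())  # meow, then woof
            ```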

          • MrScottyTay@sh.itjust.works · 6 hours ago

            Proper grammar means shit all in English unless you’re writing in a specific style, in which case you follow the grammar rules for that style.

            Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across matters more than following a set of fairly arbitrary rules.

            Rules which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I’m saying that as if it were a new thing, but it does feel recent that this side of English is taught, rather than just “The Queen’s(/King’s) English” as the style to strive for in writing and formal communication.

            I say as long as someone can understand what you’re saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don’t have a specific science to this.

  • Imgonnatrythis@sh.itjust.works · 21 hours ago

    Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

    • audaxdreik@pawb.social · 19 hours ago

      This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off” as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it’s trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.

      LLMs are a poor stand-in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I
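
      To make that concrete, here’s a toy sketch of the reward dynamic (an epsilon-greedy bandit with invented approval rates, not any lab’s actual training setup): a learner rewarded on user approval rather than truth drifts toward telling people what they want to hear.

      ```python
      import random

      responses = ["truthful", "flattering"]
      value = {r: 0.0 for r in responses}   # estimated reward per style
      counts = {r: 0 for r in responses}

      def user_reward(response):
          # Invented numbers: flattery gets approved 90% of the time,
          # an uncomfortable truth only 40% of the time.
          p = 0.9 if response == "flattering" else 0.4
          return 1.0 if random.random() < p else 0.0

      for _ in range(10_000):
          if random.random() < 0.1:             # explore occasionally
              choice = random.choice(responses)
          else:                                 # otherwise exploit the best
              choice = max(responses, key=value.get)
          reward = user_reward(choice)
          counts[choice] += 1
          value[choice] += (reward - value[choice]) / counts[choice]

      print(value)  # "flattering" ends up with the higher estimated value
      ```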

    • mienshao@lemm.ee · 15 hours ago

      David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol

  • confuser@lemmy.zip · 14 hours ago

    The thing is, AI is compression of intelligence but not intelligence itself. That’s the part that confuses people. AI is the ability to put anything describable into a compressed zip.

    • elrik@lemmy.world · 15 hours ago

      I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.
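
      A hedged toy of that lossless vs. lossy contrast (made-up data; an LLM is obviously not a template matcher, but “the gist survives, the exact bytes don’t” is the point):

      ```python
      import random, zlib

      random.seed(0)
      colors = ["blue", "grey", "pink"]  # all the same length, to keep alignment
      text = " ".join(
          f"On day {i} the sky was {random.choice(colors)}." for i in range(200)
      ).encode()

      # Lossless: zlib hands back the original byte for byte.
      packed = zlib.compress(text)
      assert zlib.decompress(packed) == text

      # Crude lossy analogue: keep only the template plus the most common
      # filler and drop the per-day detail. Plausible-looking, partly wrong.
      guess = " ".join(f"On day {i} the sky was blue." for i in range(200)).encode()
      matched = sum(a == b for a, b in zip(guess, text))
      print(len(text), len(packed))          # original vs. compressed size
      print(round(matched / len(text), 2))   # share of bytes the guess got right
      ```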

      • interdimensionalmeme@lemmy.ml · 14 hours ago

        Hardly surprising; human brains are also extremely lossy, way more lossy than AI. If we want to keep up our manifest exceptionalism, we’d better start defining narrower versions of intelligence that AI isn’t going to have soon. Embodied intelligence is NOT one of those.

  • Geodad@lemmy.world · 22 hours ago

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that try to guess the next series of inputs.

    • kromem@lemmy.world · 10 hours ago

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.

    • adr1an@programming.dev · 17 hours ago

      I love this resource, https://thebullshitmachines.com/ (i.e. see lesson 1)…

      In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

      You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

      Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

      • aesthelete@lemmy.world · 21 hours ago

        I really hate the current AI bubble, but “ChatGPT 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.

      • A_norny_mousse@feddit.org · 21 hours ago

        And they’re running into issues due to increasingly ingesting AI-generated data.

        There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* to their employees.

  • some_guy@lemmy.sdf.org · 20 hours ago

    People who don’t like “AI” should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

    • kibiz0r@midwest.social · 18 hours ago

      Citation Needed (by Molly White) also frequently bashes AI.

      I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

      It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

      • some_guy@lemmy.sdf.org · 17 hours ago

        I’m subscribed to her Web3 is Going Great RSS. She coded the website in straight HTML, according to a podcast that I listen to. She’s great.

        I didn’t know she had a podcast. I just added it to my backup playlist. If it’s as good as I hope it is, it’ll get moved to the primary playlist. Thanks!

  • ShotDonkey@lemmy.world · 14 hours ago

    I disagree with this notion. I think it’s dangerously unresponsible to just assume AI is stupid. Everyone should also assume that, with a certain probability, AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts: https://ai-2027.com/

    • sobchak@programming.dev · 12 hours ago

      Yeah, they probably wouldn’t think like humans or animals, but in some sense they could be considered “conscious” (which isn’t well-defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and a new version of itself would then be trained on them.

      This argument seems weak to me:

      So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

      You can emulate inputs and simplified versions of hormone systems. “Reasoning” models can kind of be thought of as cognition; though temporary or limited by context as it’s currently done.

      I’m not in the camp where I think it’s impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or 100s of years. I’m not convinced we are near the point where AI can significantly speed up AI research like that link suggests. That would likely result in a “singularity-like” scenario.

      I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

      • Hathaway@lemmy.zip · 9 hours ago

        Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

        You don’t think that’s already happening considering how Sam Altman and Peter Thiel have ties?

    • HugeNerd@lemmy.ca · 12 hours ago

      Ask AI:

      Did you mean: irresponsible

      AI Overview: The term “unresponsible” is not a standard English word. The correct word to use when describing someone who does not take responsibility is “irresponsible”.

  • RalphWolf@lemmy.world · 21 hours ago

    Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.