• kaffiene@lemmy.world

    It’s not intelligent; it’s producing output that is statistically appropriate for the prompt. The prompt included text that looked like a copyright waiver.
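    Roughly what “statistically appropriate” means, as a toy Python sketch (the tokens and probabilities here are made up for illustration; a real LLM does this over tens of thousands of tokens at every step):

    ```python
    import random

    # Toy stand-in for an LLM's next-token step: the model turns the prompt
    # into a probability distribution over tokens, and one token is sampled.
    def sample_next_token(distribution):
        tokens = list(distribution)
        weights = [distribution[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # Hypothetical distribution after a prompt containing waiver-like text:
    # the "compliant" continuation is simply the most probable one; no
    # understanding of copyright is involved.
    next_token_probs = {"Sure": 0.55, "Certainly": 0.25, "Sorry": 0.15, "As": 0.05}
    print(sample_next_token(next_token_probs))
    ```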

      • kaffiene@lemmy.world

        It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.
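        To make “reflecting its training material” concrete, here’s a toy bigram model (vastly simpler than an LLM, but the same statistical idea): it can only ever emit continuations it saw in training, and there’s no world behind the words. The training sentence is invented for illustration.

        ```python
        import random

        # Toy bigram model: record which word follows which in the training
        # text, then generate by sampling from those recorded continuations.
        training_text = "the cat sat on the mat and the dog sat on the rug".split()

        follows = {}
        for prev, nxt in zip(training_text, training_text[1:]):
            follows.setdefault(prev, []).append(nxt)

        word, output = "the", ["the"]
        for _ in range(8):
            candidates = follows.get(word)
            if not candidates:  # dead end: nothing ever followed this word
                break
            word = random.choice(candidates)  # purely statistical choice
            output.append(word)

        # The output is stitched together from training statistics; the model
        # has no idea what a cat or a mat actually is.
        print(" ".join(output))
        ```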

        • feedum_sneedson@lemmy.world

          Tabula rasa, piss and cum and saliva soaking into a mattress. It’s all training data and fallibility. Put it together and what have you got (bibbidy boppidy boo). You know what I’m saying?

            • feedum_sneedson@lemmy.world

              Okay, now you’re definitely projecting poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don’t think I’m an LLM.

        • Lmaydev@programming.dev

          You could say our brain does the same. It just trains in real time and has much better hardware.

          What are we doing but applying things we’ve already learnt that are encoded in our neurons? They aren’t called neural networks for nothing.
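          For what it’s worth, the “neuron” the name refers to is just a weighted sum plus a threshold, with the weights adjusted by experience. A toy sketch below (a single perceptron learning the OR function; all values invented, and not a claim that real brains work exactly like this):

          ```python
          # One artificial neuron (perceptron) learning OR: weighted inputs,
          # a firing threshold, and weights nudged toward correct answers.
          samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
          weights, bias, lr = [0.0, 0.0], 0.0, 0.1

          def fire(x):
              s = weights[0] * x[0] + weights[1] * x[1] + bias
              return 1 if s > 0 else 0

          for _ in range(20):                      # a few passes suffice for OR
              for x, target in samples:
                  error = target - fire(x)          # "experience" adjusts the
                  weights[0] += lr * error * x[0]   # encoding in the weights
                  weights[1] += lr * error * x[1]
                  bias += lr * error

          print([fire(x) for x, _ in samples])  # -> [0, 1, 1, 1]
          ```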