• MagicShel@lemmy.zip · 2 points · 7 days ago

    I think researchers are trying to make AI models more aware, but they're trained on a whole lot of human history, and that history is predominantly told from white male perspectives, which means AI is going to act like that.

    Women and people of color, you should probably treat AI like it’s that white guy who means well and thinks he’s woke but lacks the self-awareness to see he is 100% part of the problem. (I say this as a white guy who is 100% part of the problem, just hopefully with more self-awareness.)

    • Kichae@lemmy.ca · 4 points · 7 days ago

      There is no reason to even suggest that AI ‘means well’. It doesn’t mean anything, let alone well.

      • MagicShel@lemmy.zip · 1 point · 7 days ago

        Of course. It’s an analogy. It is like someone who means well. It generates text from the default perspective, which is “white guy,” with a bunch of effort layered on to make it more diverse, and a similar end result. The responses might sound woke, but take a closer look and you’ll find the underlying bias.

    • nesc@lemmy.cafe · 3 points · 7 days ago

      Everyone should treat ‘ai’ like the program it is. Your guilt complex is irrelevant here.

      • MagicShel@lemmy.zip · 1 point · 7 days ago

        This has nothing to do with a guilt complex. Why would I feel guilty for being privileged? I feel fortunate, and obliged to remain aware of that.

        Treating AI like a “program,” however, is a pretty useless lead-in to what you really posted to say.

        • nesc@lemmy.cafe · 1 point · 7 days ago

          Right, only you can dictate how people should treat chat bots; I will siphon your knowledge into my brain.

      • Gamma@beehaw.org · 1 point · 7 days ago

        The program is statistically an average white guy that knows about a lot of things but doesn’t understand any of it, soooooo I’m not even sure what point you thought you had.

        • nesc@lemmy.cafe · 0 points · 7 days ago

          A chat bot will impersonate whoever you tell it to impersonate (as stated in the article). My point is pretty simple: people don’t need a guide that tells them how they should treat and interact with a chat bot.

          I get it, that was just perfunctory self-deprecation with the intended audience being other first-worlders.

          • SaltSong@startrek.website · 2 points · 7 days ago

            people don’t need a guide when using a chat bot that tells them how they should treat and interact with it.

            Then why are people always surprised to find out that chat bots will make shit up to answer their questions?

            People absolutely need a guide for using a chat bot, because people are idiots.

            • chicken@lemmy.dbzer0.com · 1 point · 7 days ago

              Not even just because people are idiots, but also because an LLM is going to have quirks you will need to work around or exploit to get the best results out of it. Like how it’s better to edit your question to clarify a misunderstanding and regenerate the response than it is to respond again with the correction, because there is more of a risk it gets stuck on its mistake that way. Or how it can be useful in some situations to (if the interface allows this) manually edit part of the LLM output to be more in line with what you want it to be saying before generating the rest.
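              The edit-and-regenerate trick described above boils down to how you manipulate the message list before sending it back to the model. A minimal sketch (plain data manipulation only; the message format mirrors the common role/content convention, and no specific chat API is assumed):

```python
# Two ways to handle a model that misunderstood your question.
# Messages use the common {"role": ..., "content": ...} convention.

def correct_by_appending(messages, correction):
    """Append the correction as a new turn. The model's flawed answer
    stays in context, so it risks getting stuck on its mistake."""
    return messages + [{"role": "user", "content": correction}]

def correct_by_editing(messages, clarified_question):
    """Rewind to the last user turn, replace it with a clarified version,
    and drop the flawed response so it never re-enters the context."""
    last_user = max(i for i, m in enumerate(messages) if m["role"] == "user")
    edited = messages[:last_user]
    edited.append({"role": "user", "content": clarified_question})
    return edited

history = [
    {"role": "user", "content": "How do I sort a list in place?"},
    {"role": "assistant", "content": "Use sorted(), which returns a new list."},
]

# Appending keeps the wrong answer in the context window:
appended = correct_by_appending(history, "No, I said *in place*.")

# Editing removes the mistake entirely before regenerating:
edited = correct_by_editing(history, "How do I sort a list *in place*, not with sorted()?")
```

              The same list surgery covers the second trick too: appending a partial assistant turn you wrote yourself, then asking the model to continue from it, steers the rest of the generation.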

  • Lucy :3@feddit.org · 2 points · 7 days ago

    Thing echoing the internet’s average opinion echoes the internet’s average opinion, completely redundant study finds