What do you think, ChatGPT? If it can create almost perfect summaries from a prompt, why wouldn’t it work in reverse? AI built into Windows could flag potentially subversive thoughts typed into Notepad or Word, as well as flag “problematic” clicks and compare them to previously profiled behavior. AI built into your GPU could build a behavioral profile based on your interactions with your hentai Sonic the Hedgehog game.

  • Jo Miran@lemmy.ml · 5 months ago

    Have a few friends over and have them all sit around a table. Have everyone place their smartphones on the table (turned on, of course), and proceed to discuss something like the merits of drills from Harbor Freight versus Ryobi, Milwaukee, and DeWalt, ideally with one person speaking at a time. Wait about a week and ask your friends if any of them noticed an uptick in ads for drills or power tools in general.

    • BearOfaTime@lemm.ee · 5 months ago

      I saw this within minutes of a conversation in a car with 2 people and 2 phones.

      And it was about a subject that was waaaaay out in left field for us both, something neither of us had ever even thought about before.

    • usualsuspect191@lemmy.ca · 5 months ago

      Hasn’t this been proven false? People have monitored the network traffic, and phones don’t listen like this; it’s just not practical.

      Instead, they keep track of your browsing, location, contacts, etc., and build a profile accurate enough that they don’t need to listen to you.

        • Sludgehammer@lemmy.world · edited · 5 months ago

          It’d be very easy to take some LLM-generated text about a product, run it through a text-to-speech converter, and then quietly expose the phone to it (e.g. by holding an earbud up to the mic). That way you could easily create a blind or even double-blind test: you don’t know which product the setup has been rambling about into the phone for the past twelve hours, and you have to pick it out from the ads you’re served.
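          The blinding in that experiment could be sketched in code: commit to a random product choice up front (a hash acts as a sealed envelope), feed the phone TTS audio about it, and only reveal the choice after you’ve guessed from the ads. This is a rough sketch, not a real rig; the product names are placeholders, and the hardware-dependent TTS playback step is deliberately omitted.

```python
import hashlib
import random

# Hypothetical candidate products; in a real trial you'd want many
# more, all unrelated to your existing interests.
PRODUCTS = ["cordless drill", "espresso machine", "hiking boots"]


def start_trial(seed=None):
    """Pick a product at random and seal the choice.

    Returns (commitment, secret). The experimenter records only the
    commitment; the secret stays with the helper running the TTS rig.
    """
    rng = random.Random(seed)
    product = rng.choice(PRODUCTS)
    nonce = "%016x" % rng.getrandbits(64)        # prevents brute-forcing the hash
    secret = f"{product}:{nonce}"
    commitment = hashlib.sha256(secret.encode()).hexdigest()
    return commitment, secret


def reveal(commitment, secret):
    """After guessing the product from the ads, verify the sealed choice."""
    return hashlib.sha256(secret.encode()).hexdigest() == commitment


# The omitted middle step: a helper (or second script) plays
# text-to-speech audio about the secret product into the phone's mic
# for hours, using any TTS engine available.
commitment, secret = start_trial()
assert reveal(commitment, secret)
```

          The hash commitment is what makes the test honest: neither the guesser nor the helper can quietly change the answer after the ads come in.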

    • TranquilTurbulence@lemmy.zip · 5 months ago

      Ads? You mean those stickers on a bus?

      Seriously though: use DNS filtering, a VPN, and other means to block ads and telemetry, so thoughts like that don’t even occur to you.

      • Fonzie!@ttrpg.network · 4 months ago

        A VPN doesn’t necessarily block telemetry, and some providers, like NordVPN, have tons of telemetry in their clients alone. Even if their VPN advertises telemetry blocking, I guess they want to be the only ones hoarding your data.

        Use tracker blockers/firewalls. TrackerControl is a good open-source app for this on Android, and a Pi-hole can block a lot of tracking traffic as well.
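        The core idea behind a Pi-hole is simple: it sits in front of your DNS and answers queries for blocklisted domains with a sinkhole address, so the tracker is never even contacted. A minimal sketch of that matching logic, with made-up domain names and a stubbed-out upstream resolver:

```python
# Pi-hole-style DNS filtering sketch. Domains are invented examples,
# not a real blocklist; the upstream answer is stubbed.
BLOCKLIST = {"telemetry.example.com", "ads.example.net"}

SINKHOLE = "0.0.0.0"  # a non-routable answer: connections go nowhere


def resolve(domain: str) -> str:
    """Return the sinkhole for blocked domains, otherwise a (stubbed)
    upstream answer. A listed domain also blocks all its subdomains."""
    parts = domain.lower().rstrip(".").split(".")
    # check the domain itself and every parent: a.b.c, b.c, c
    for i in range(len(parts)):
        if ".".join(parts[i:]) in BLOCKLIST:
            return SINKHOLE
    return "198.51.100.7"  # stand-in for forwarding to a real resolver
```

        The subdomain walk is why one blocklist entry catches `foo.ads.example.net` too; real blockers add regex rules and allowlists on top of this.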

  • AlecSadler@sh.itjust.works · 5 months ago

    I think it was during the Cambridge Analytica days, but I read an article saying that the average person is tracked via over 5,000 data points. So we’re already kinda f’d.

    • Tetsuo@jlai.lu · 5 months ago

      Hello, I’m NVIDIA. I send a list of every app you use as telemetry. But you know, it’s only so we know which apps your driver crashes in, of course. I would never send that data when nothing crashes. Right?

    • sp3ctr4l@lemmy.zip · edited · 5 months ago

      True, you don’t need AI for security problems…

      …but it is introducing tons of them, for little to no benefit.

      About a month ago I saw a post for an MSFT-led AI security conference.

      None of it, absolutely none of it, was about how to, say, leverage LLMs to aid in heuristic scanning for malware, or anything like that.

      Literally every talk and booth at the conference was all about all the security flaws with LLMs and how to mitigate them.

      I’ll come back and edit my post with the link to what I’m talking about.

      EDIT: Found it.

      https://www.microsoft.com/en-us/security/blog/2024/09/19/join-us-at-microsoft-ignite-2024-and-learn-to-build-a-security-first-culture-with-ai/

      Unless I am missing something, literally every talk/panel there is about how to mitigate the security risks to your system/db that are introduced by LLM AI.