Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
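
If one wanted to prototype the idea, a minimal sketch might just plan randomized decoy sessions. Everything here is hypothetical (the topic pool, the persona model, the dwell times); it's a toy illustration of "flooding the profile with noise", not a real tool:

```python
import random

# Hypothetical topic pool for illustration; a real decoy generator would
# need a far larger, more realistic set of sites and behaviors.
TOPICS = ["gardening", "crypto", "knitting", "astronomy", "diy", "cooking", "f1"]

def decoy_sessions(n_personas, visits_per_persona, seed=None):
    """Plan fake browsing sessions: each AI 'persona' picks a few fake
    interests and visits topic pages with human-ish dwell times, so the
    real profile gets diluted in random noise."""
    rng = random.Random(seed)
    plans = []
    for persona in range(n_personas):
        interests = rng.sample(TOPICS, k=3)  # each persona fakes a few interests
        visits = [
            {
                "persona": persona,
                "topic": rng.choice(interests),
                "dwell_seconds": round(rng.uniform(5, 120), 1),  # human-ish dwell
            }
            for _ in range(visits_per_persona)
        ]
        plans.append(visits)
    return plans
```

The hard part, of course, is making the decoys indistinguishable from real behavior; this sketch only shows the shape of the idea.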

  • Ulrich@feddit.org · 24 hours ago

    [other data] + clicks ALL the ads like all the other adnauseum people

    AdNauseam does not click "ALL the ads"; it just clicks some of them, like normal people do. Only those ads are not relevant to your interests, they're just random, so it obscures your online profile by filling it with a bunch of random information.
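
    A toy sketch of that clicking strategy (the function name and click rate are made up for illustration; this is not AdNauseam's actual code):

```python
import random

def pick_ads_to_click(ads, click_rate=0.1, rng=None):
    """Click a random subset of the ads on a page, regardless of their
    content. The clicks mimic a plausible human rate, but because the
    selection ignores the ad's topic, they carry no signal about the
    user's real interests."""
    rng = rng or random.Random()
    return [ad for ad in ads if rng.random() < click_rate]
```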

    Judging by incidents like the cambridge analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests

    Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

    • HelloRoot@lemy.lol · edited · 23 hours ago

      adnauseum (firefox add-on that will click all ads in the background to obscure preference data)

      is what the top level comment said, so I went off this info. Thanks for explaining.

      Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

      I didn’t mean it like that.

      I meant it in an illustrative manner - the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor's or master's thesis would be enough to come up with a sufficiently precise method.

      In comparison to that it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell to every single individual to manipulate them into behaving a certain way. That probably needed a larger team of smart people working together for many years.

      But ofc I may be wrong. Cheers

      • Ulrich@feddit.org · 21 hours ago

        filtering out random false data seems trivial

        As far as I know, none of them had random false data so I’m not sure why you would think that?

        In comparison to that it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell to every single individual to manipulate them into behaving a certain way. That probably needed a larger team of smart people working together for many years.

        I feel like you’re greatly exaggerating the level of intelligence at work here. It’s not hard to figure out people’s political affiliations with something as simple as their browsing history, and it’s not hard to manipulate them with propaganda accordingly. They did not have an “exact customized lie” for every individual, they just grouped individuals into categories (AKA profiling) and showed them a select few forms of disinformation accordingly.
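
        That grouping step can be illustrated with a crude sketch (the buckets and keywords are invented; real profiling uses far richer features than substring matches):

```python
from collections import Counter

# Hypothetical keyword buckets for illustration only.
BUCKETS = {
    "outdoorsy": ["hiking", "camping", "trail"],
    "gamer": ["steam", "twitch", "esports"],
}

def profile_bucket(history, buckets=BUCKETS):
    """Assign a user to whichever coarse bucket dominates their browsing
    history: grouping people into categories, not building a
    per-individual model."""
    hits = Counter()
    for url in history:
        for bucket, keywords in buckets.items():
            if any(k in url for k in keywords):
                hits[bucket] += 1
    return hits.most_common(1)[0][0] if hits else "unknown"
```

The point is that once everyone lands in a handful of buckets, you only need a handful of tailored messages, not one per person.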

        • HelloRoot@lemy.lol · edited · 21 hours ago

          Good input, thank you.


          As far as I know, none of them had random false data so I’m not sure why you would think that?

          You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example: (during a chess game analysis) "Moving the knight in front of the bishop is like a punch in the face from Mike Tyson."


          There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up is filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?
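
          For illustration, here is one hypothetical detection heuristic (not anything ad networks are known to run): uniformly random clicks spread across categories with near-maximal entropy, while genuine interest clusters around a few topics:

```python
from collections import Counter
from math import log2

def click_entropy(categories):
    """Shannon entropy of a user's ad-click categories. Genuine users
    cluster around a few interests (low entropy); uniform random
    clicking approaches the maximum, log2(number of categories)."""
    counts = Counter(categories)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def looks_like_noise(categories, threshold=0.9):
    """Flag a click history whose normalized entropy is near-uniform.
    A crude, hypothetical detector; a countermeasure would be to skew
    the fake clicks toward a few decoy interests."""
    n = len(set(categories))
    if n < 2:
        return False
    return click_entropy(categories) / log2(n) > threshold
```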

          • Ulrich@feddit.org · 20 hours ago

            You can use topic B as an illustration for topic A

            Sometimes yes. In this case, no.

            Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?

            I think the number of users of such products is so low (especially since it's been kicked from the Google store) that it wouldn't be worth their time.

            But no, I don’t think they could either. It’s just an automation script that runs actions the same way you would.