Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
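For illustration only, here is roughly what the first step of such a scheme might look like: generating fake but internally consistent browsing personas before any traffic is sent. Everything here is a made-up sketch — the interest pool, the `.example` domains, and the visit counts are placeholders, not a real obfuscation tool.

```python
import random

# Hypothetical interest pool; a real obfuscator would draw from ad-category taxonomies.
INTERESTS = ["fishing", "crypto", "gardening", "opera", "motorsport",
             "knitting", "astronomy", "veganism", "skiing", "poker"]

# Toy mapping from interest to example sites (placeholders, not real domains).
SITES = {
    "fishing": ["anglers-forum.example", "baitshop.example"],
    "crypto": ["coin-news.example", "chain-stats.example"],
}

def make_persona(rng: random.Random, n_interests: int = 3) -> dict:
    """Build one fake browsing persona: a few interests plus a visit schedule."""
    interests = rng.sample(INTERESTS, n_interests)
    schedule = []
    for interest in interests:
        sites = SITES.get(interest, [f"{interest}.example"])
        # Visit each interest's sites a random number of times ...
        for _ in range(rng.randint(2, 6)):
            schedule.append({
                "site": rng.choice(sites),
                # ... with varied dwell times, because humans linger unevenly.
                "dwell_seconds": rng.randint(10, 300),
            })
    rng.shuffle(schedule)
    return {"interests": interests, "visits": schedule}

rng = random.Random(42)
personas = [make_persona(rng) for _ in range(5)]
```

A real noise generator would then have to replay these schedules through an actual browser with human-like timing, which is the hard part: anything too uniform is easy to filter out.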

  • HelloRoot@lemy.lol
    20 hours ago

    Good input, thank you.


    As far as I know, none of them had random false data, so I’m not sure why you would think that.

    You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example (during a chess-game analysis): “Moving the knight in front of the bishop is like a punch in the face from Mike Tyson.”


    There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up is filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?
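    To make the question concrete, here is a toy sketch of the kind of garbage filter an ad network might try. The thresholds are illustrative guesses, not taken from any real ad-fraud system; the premise is just that a click-everything tool like AdNauseam produces an implausibly high click-through rate with almost no dwell time on the landing pages.

```python
def looks_like_click_bot(ad_impressions: int, ad_clicks: int,
                         post_click_dwell_seconds: list[float]) -> bool:
    """Toy heuristic: flag users whose ad click-through rate is implausibly
    high and who spend almost no time on the pages they supposedly clicked
    through to. Thresholds below are made up for illustration."""
    if ad_impressions == 0:
        return False
    ctr = ad_clicks / ad_impressions
    avg_dwell = (sum(post_click_dwell_seconds) / len(post_click_dwell_seconds)
                 if post_click_dwell_seconds else 0.0)
    # Human CTRs are typically a tiny fraction of impressions; clicking most
    # ads while never dwelling on the result is the signature targeted here.
    return ctr > 0.5 and avg_dwell < 2.0
```

    Whether this works in practice is exactly the open question: a bot that randomizes its click rate and simulates dwell time would slip past a heuristic this crude.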

    • Ulrich@feddit.org
      19 hours ago

      You can use topic B as an illustration for topic A

      Sometimes yes. In this case, no.

      Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?

      I think the user base of such products is so small (especially since they’ve been kicked from the Google store) that it wouldn’t be worth their time.

      But no, I don’t think they could either. It’s just an automation script that performs the same actions you would.