Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.
Her take is an interesting one: what if we could actually use AI against the surveillance itself?
Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.
Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?
How could this be achieved?
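There's already some prior art in the browser-extension world: TrackMeNot fires off randomized search queries, and AdNauseam quietly "clicks" the ads it blocks, both with the same goal of burying the real signal in noise. At its simplest the core loop is just automated decoy traffic. Here's a toy sketch in Python to show the shape of it (the interest buckets, seed URLs, and user agents are all made up for illustration, not from any real tool):

```python
import random
import time

import requests

# Hypothetical interest buckets and seed URLs -- purely illustrative.
# A real tool would pull these from something like a crawled site directory.
DECOY_INTERESTS = {
    "gardening": ["https://example.com/gardening", "https://example.org/compost-tips"],
    "crypto": ["https://example.com/bitcoin-news", "https://example.org/defi-guide"],
    "fishing": ["https://example.com/bass-lures", "https://example.org/fly-tying"],
}

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]


def browse_as_decoy(persona: str, pages: int = 5) -> None:
    """Visit a handful of pages tied to one fake interest, with human-ish pauses."""
    urls = DECOY_INTERESTS[persona]
    session = requests.Session()
    session.headers["User-Agent"] = random.choice(USER_AGENTS)
    for _ in range(pages):
        url = random.choice(urls)
        try:
            session.get(url, timeout=10)
        except requests.RequestException:
            pass  # dead decoy links are fine; making the request is the point
        # Dwell time: real people don't load a new page every 200 ms.
        time.sleep(random.uniform(5, 60))


if __name__ == "__main__":
    # Pick a random fake persona each run so the noise doesn't become its own pattern.
    browse_as_decoy(random.choice(list(DECOY_INTERESTS)))
```

The obvious catch is that traffic this simple is easy for trackers to filter out: no JavaScript execution, no ad-network cookies, no realistic click paths. That's probably why the browser-extension approach, which generates the noise inside your real browsing session, tends to be more convincing than a standalone script.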
I don’t know if there’s a clean way to do this right now, but I’d love to see a software project dedicated to it. Once a data set is poisoned, it becomes very difficult to un-poison. The companies would probably roll out some semi-effective but heavy-handed defenses if it actually affected them, but I’m all for making them pay for that arms race.