I understand your original comment a lot better now. My understanding of what you said is that the open source intelligence anyone provides through their public persona already reveals more than enough information to be damaging. The little that is sent over encrypted channels is just icing on the cake. So the only real way to avoid manipulation is to hope you haven't been a very engaged member of the internet for the last decade, and to primarily communicate over private channels going forward.
I suppose I just underestimated how much people actually post stuff online publicly.
One first-instinct response I had was identity isolation. That was something I was going to suggest while writing the original post as well. Practicing identity isolation would mean that even if you post something publicly, the data stays isolated to that identity: every website and every app is either compartmentalized or on a completely different identity. Honestly, though, that requires near-perfect OPSEC to avoid being fingerprintable. Beyond that, it's also way too inconvenient; people will just use the same email and phone number, or log in with Google everywhere. So even though I would like to suggest it, no one would actually do it. And as you already mentioned, most normal people have been providing boatloads of free OSINT for the last few decades anyway...
Thinking about it more, a training dataset spanning the entire public internet is basically the centralized database of your data that I was worried about in the first place. As you mentioned, AIs can find feature representations that we as humans can't. So even if you have been practicing identity isolation, LLMs (or whatever future model) would still be able to fingerprint you as long as you have posted enough online. And not posting online is not something most people are willing to do; even if they are, they have already given away the game with what they have posted before. So in the majority of cases, identity isolation doesn't help with this particular problem of AI manipulation either...
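To make the fingerprinting worry concrete: even classical stylometry, with no LLM involved, can link "separate" identities by comparing writing-style statistics such as character trigram frequencies. Here is a toy sketch of that idea; the author names and text samples are entirely made up for illustration, and a real attack would use far richer features.

```python
# Toy stylometry sketch: match an "anonymous" post to known authors by
# comparing character-trigram frequency profiles with cosine similarity.
# All names and text samples below are invented for illustration.
from collections import Counter
import math

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams of the lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical writing samples from two "known" public identities.
known = {
    "alice": "honestly, though, i reckon the whole thing is overblown...",
    "bob": "The results indicate a statistically significant correlation.",
}
# A post made under a supposedly isolated identity.
anonymous = "honestly though i reckon nobody reads the terms anyway..."

scores = {name: cosine(trigram_profile(anonymous), trigram_profile(text))
          for name, text in known.items()}
best = max(scores, key=scores.get)
print(scores, "-> best match:", best)
```

With enough text, these style profiles become surprisingly distinctive, which is exactly why compartmentalizing accounts doesn't stop a model that has seen all your public writing.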
I have always tried to hold the position that even if it is possible for other people (or AIs) to do something you don't like (violate your privacy, manipulate you), that doesn't mean you should give up or make it easy for them. But off the top of my head, without thinking about this some more, I can't come up with any good solution for people who have been publicly publishing info on the internet for a while. Thank you for giving me food for thought.
Tor is way too slow, and Google hates serving content to Tor users. I2P might be faster than Tor, but its current adoption is way too low. It also doesn't help that identity persistence is a regulatory requirement in most jurisdictions, because it aids traceability against identity theft, financial theft, fraud, etc. Cookie cleaning means having to log in every time, which for most people is too annoying.
I acknowledge that there are ways to technically poison existing data. The core problem, though, is finding measures that both normal people and the technically adept (alignment researchers, engineers, ...) would actually be willing to adopt.
The general vibe I see right now is: *shrugs shoulders* "they already know, so I might as well make my life convenient and keep giving them everything..."
Honestly, I don't think it should be the responsibility of the average consumer to think about this at all. Should it be your responsibility to check every part of your car's engine before driving to make sure it won't blow up and kill you? Of course not; that responsibility falls on the manufacturer. Similarly, the responsibility for mitigating the adverse effects of data gathering should fall on the companies doing the gathering, not on the consumers.