This post would benefit from being clearer about its threat model, and the recommendations seem hard to square with what seems to me to be the most likely threat. Who are you worried about using your information to trick you? The government, Google/Amazon/Facebook/Apple/etc, independent scammers? Most of your examples seem like the third category, but then your recommendations are mostly about avoiding information being available to the second category.
One of your recommendations in particular, though, seems especially wrong given a wide range of potential privacy threats: “Preferably use Mastodon”. The Fediverse has almost no ability to protect against bulk collection and retention of data, and while people who say “I’m going to be scraping things and archiving them” will get blocked, someone who does it quietly won’t. (I’m strongly in favor of people switching to Mastodon, but privacy is not its strong point.)
Separately, I don’t see a consideration of costs and benefits here. You’ve described some ways in which having more information about you on the public internet could be used to attack you, and advocated some large changes to what technologies and approaches people use, but without acknowledging that those changes have costs or attempting to argue that the costs are worth it. I’d especially be interested in arguments around how much your proposed changes would reduce someone’s exposure by, since the benefits of decreasing information available to scammers aren’t linear (ex: a complete decrease is probably worth much much more than 4x as much as a 25% decrease).
I use a pretty different approach here, and one that I think is a lot more robust: I expect privacy to continue to decay, and that measures we thought were sufficient to keep things private will regularly turn out not to have been enough. This will happen retroactively, where actions you took revealed a lot more about yourself than you expected at the time (ex: revealing HN alt accounts). So I make "public" my default and operate on the assumption that people can already learn a lot about me if they want to. This means I can use whatever tool is best for the job (which may still be Linux or Mastodon!) and get the benefit of sharing information publicly, and I'm in a much better position for when it turns out that some bit of privacy protection had actually stopped working years ago.