1: There are two differences I see; I’d categorize it more as ‘collecting’ than ‘monitoring,’ and despite the many arms of the NSA, I’d bet the CCP is far worse. One way to measure this is network latency: traffic leaving China is noticeably slower, due to the Great Firewall and the amount of filtering CCP agencies apply to all data. Traffic leaving the US encounters zero or minimal added latency, so if it’s being monitored, it’s not in real time. I actually worked with someone who had access to the NSA database in its pre-Snowden days. According to him, far more data was being collected than was being used, for both legal and practical reasons. Legally, it was not considered monitoring of US persons until the traffic was decrypted; so while they might have a phone call recorded, it’s not illegal until they decrypt it. (Yes, I know, this makes enforcement entirely an internal measure.)
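To make the latency point concrete, here’s a minimal sketch of the kind of measurement I mean, using TCP handshake time as a crude proxy for path latency. The host names in the comment are placeholders, and a real comparison would need many samples plus controls for geography and routing; this is just an illustration, not a rigorous methodology.

```python
import socket
import time

def tcp_connect_latency(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Return the seconds taken to complete a TCP handshake with host:port."""
    start = time.perf_counter()
    # create_connection performs DNS resolution plus the TCP handshake;
    # for a purer handshake number, resolve the address once beforehand.
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start

# Hypothetical usage: compare a host behind the Great Firewall with one
# outside it, taking the minimum of several samples to reduce noise.
# for host in ["example.com", "example.cn"]:   # placeholder hosts
#     samples = [tcp_connect_latency(host) for _ in range(10)]
#     print(host, min(samples))
```

The minimum over repeated samples is the usual statistic here, since queuing noise only ever adds delay on top of the base path latency.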
2: The most convenient, quiet, and effective way of getting access is legitimate credentials. If you can steal them, great, but if you can send a police officer to tell the company to create credentials for you, that’s way easier. I agree with you as far as high-value targets go; you do lose some secrecy if you have to bring the server owners on board. But for the average user, I’d guess it’s more efficient to save your ‘hackers’ for more useful work and use bureaucrats in their place as much as possible.
3: VPN usage is growing, but as you pointed out, data collection is growing too, at what I see as a far faster rate. I know a few optimistic people, but I’m pessimistic: I think these measures will just delay the complete loss of privacy (and therefore the ‘Hari Seldon-ing’ by big businesses).
Is this topic (learning Econ creates AI-forecasting blind spots) perhaps a narrow view of a larger problem, some sort of Dunning-Kruger ‘peak of Mount Stupid,’ where econ classes touch on AI forecasting just enough that students gain overconfidence?
I’d predict that any curriculum that encompasses some AI forecasting, but does not have it as a primary focus, ends up with the same forecasting blind spots. As an anecdote, I’ve seen plenty of YouTube coders use their authority as excellent human programmers to overconfidently state that their jobs are totally secure, and that their knowledge of deep programming practice leads them to treat AI-powered coding as a paradigm shift only on the scale of a new IDE or code library.
It’s interesting to see the specific breakdown of how this happens in Econ. If anyone has relevant examples from other fields (Law, maybe?), I’d be curious to see whether they fall prey to the same problems.