Most ordinary people don’t know that no one understands how neural networks work (or even that modern “Generative A.I.” is based on neural networks). This might be an underrated message since the inferential distance here is surprisingly high.
The more sophisticated models we often use to argue that human disempowerment is the default outcome are hard to explain; that effort is perhaps much better leveraged on explaining these three points:
1) No one knows how A.I. models / LLMs / neural nets work (with some explanation of how this is conceptually possible).
2) We don’t know how smart they will get, or how soon.
3) We can’t control what they’ll do once they’re smarter than us.
At least under my state of knowledge, this is also a particularly honest messaging strategy, because it emphasizes the fundamental ignorance of A.I. researchers.