You reminded me of that famous tweet:
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus
But more seriously, I think this is a real point that has not been explored enough in alignment circles.
I have encountered a large number of people—in fact, probably almost all the people I discuss AI with—who I would call “normal people”: regular, moderately intelligent people going about their lives, for whom “don’t invent a God-like AI” is so obvious it is almost a truism.
Based on their mental model of Skynet, the Matrix, etc., it is just patently obvious to them that we should not build this thing.
Why are we not capitalizing on that?
This deserves its own post, which I might try to write, but I think it boils down to condescension:
LWers know Skynet / Matrix is not really how it works under the hood.
How it really works under the hood is really, really complicated.
Skynet / Matrix is a poor mental model.
Using poor mental models is bad; we should not do it ourselves, and we should not encourage other people to do it either.
In order to communicate AI risk, we need to simplify it enough to make it accessible to people.
<produces 5000 word blog post that requires years of implicit domain knowledge to parse>
Ironically, most people would be closer to the truth with a Skynet / Matrix model, which is the one they already have installed.
We could win by saying: “Yes, Skynet is actually happening. Please help us stop it.”
I completely agree that it made no sense to divert qualified researchers away from actually doing the work. I hope my post did not come across as suggesting that.