As a meta-level version of this, I have to admit I find it a little concerning that this site was created in the first place partly because Eliezer Yudkowsky wanted to convince people that funding safe AI research was the best possible use of resources, and that much of the reasoning on this site seems to arrive at that conclusion regardless of where the argument starts.
I don't necessarily disagree with the conclusion, but it is a surprising and suspicious convergence nonetheless.
My thoughts exactly.
When I first heard it, it sounded to me like a BuzzFeed headline: This one weird trick will literally solve all your problems!
Turns out the trick is to create an IQ-20,000 AI and get it to help you.
(Obviously, suspicious ≠ wrong.)