Yes, I was going to say… How can one possibly argue that certain speculative causes are too popular and this is because they play into common cognitive biases when the examples are the fringest of the fringe and funded approximately not at all?
Let’s try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At that point, AI will build a smarter AI, which will build an even smarter AI, and, FOOM!, we have a superintelligence. It’s important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems there will be challenges in the far future that are very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they’d be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to common sense. But common sense hasn’t worked very well in the domain of finding optimal causes.
I wish I lived on a planet where these were ‘very common appeals to commonsense’. I wonder how much a ticket there would cost?