I don’t believe he’d be satisfied with any conclusion resting purely on thinking (“un-Friendly AI is an imminent existential risk, therefore FAI research is an overriding priority”); I think he needs something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them).
He wishes to raise the quality of life on Earth; what should he study to get a good idea of which charities are best to donate to?
He could start with "shut up and multiply." (Or, perhaps he could just change 'best' to 'most appealing'.)
Rereading what I wrote, I don’t quite agree with it myself… I retract that part (will edit).
What I wanted to say (and did not in fact say) was this. Take the example of FAI research: it's hard to measure or predict the value of giving money to such a cause. It produces nothing of external value for most of its existence (until, if it succeeds, it suddenly produces a great deal of value very rapidly). Its progress is hard to measure for anyone who isn't at least an AI expert. The research team's probability of success is very hard to predict (as with any complex research). And finally, it's hard to weigh the probability of uFAI scenarios against the probability of other extinction risks.
If some of these could be solved, I think it would be a lot easier to convince people to fund FAI research.