Rereading what I wrote, I don’t quite agree with it myself… I retract that part (will edit).
What I wanted to say (and did not in fact say) was this. Take the example of FAI research: it's hard to measure or predict the value of giving money to such a cause. It produces nothing of external value for most of its existence, until (if it succeeds) it suddenly produces a great deal of value very rapidly. Its progress is hard to gauge for anyone who isn't at least an AI expert. The research team's probability of success is very hard to predict, as with any complex research. And finally, it's hard to evaluate the probability of uFAI scenarios against the probability of other extinction risks.
If some of these difficulties could be solved, I think it would be a lot easier to convince people to fund FAI research.