Well, it’s up to you to decide how much the uncertainty of outcome should influence your willingness to do something. It’s OK to think it’s worthwhile to follow a certain path even if you don’t know where it would ultimately lead.
“Uncertainty” is different than “no clue.” Or maybe I’m assuming too much about what you mean by “no clue”—to my ear it sounds like saying we have no basis for action.
You don’t have more information about the hundred-year effects of your third-world poverty options than you do about the hundred-year effects of your AI options.
Really? Is this something you’ve said before and I’ve missed it? If true, it has huge implications.
I don’t think I’ve said it before in these words but I may have expressed the same idea.
Why do you think there are huge implications?
If I believed that, I would forget about AI and x-risk and just focus on third-world poverty.
Well, it’s up to you to decide how much the uncertainty of outcome should influence your willingness to do something. It’s OK to think it’s worthwhile to follow a certain path even if you don’t know where it would ultimately lead.
“Uncertainty” is different than “no clue.” Or maybe I’m assuming too much about what you mean by “no clue”—to my ear it sounds like saying we have no basis for action.
Large amounts of uncertainty, including the paradoxical possibility of black swans, == no clue.
You have no basis for action if you are going to evaluate your actions by their consequences a hundred years from now.
You don’t have more information about the hundred-year effects of your third-world poverty options than you do about the hundred-year effects of your AI options.
Effects of work on AI are all about the long run. Working on third-world poverty, on the other hand, has important and measurable short-run benefits.
Good point!