How can it be that the ultimate fate of humanity, to either thrive beyond imagination or perish utterly, may rest on our actions in this century, and yet people who recognize this possibility don’t do everything they can to make it go the way we need it to?
Well, ChaosMote already gave part of the answer, but another reason is the idea of comparative advantage. Normally I’d bring up someone like Scott Alexander/Yvain as an example (since he’s repeatedly claimed he’s not good at math and blogs more about politics/general rationality than about AI), but this time, you can just look at yourself. If, as you claim,
I’m a village idiot by LW standards, and especially bad at math, so I don’t think I’d be very useful on the “front lines” so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here, perhaps I could try to raise awareness or lobby for legislation?
then your comparative advantage lies less in theory and more in popularization. Theory might technically be more important, but if you can net bigger gains elsewhere, then by all means you should do so. To use a (somewhat strained) analogy, think about expected value. Which would you prefer: a guaranteed US $50, or a 10% chance at US $300? The raw value of the $300 prize is greater, but you have to multiply by the probability before comparing: 10% of $300 is an expected $30, which is less than the sure $50. It’s the same here. For some LWers, working on AI directly is the way to go, but for others who aren’t as good at math, raising money may be the best contribution. And then there’s the even bigger picture: AI might be the most important risk in the end, but what if (say) nuclear war occurs first? A politically oriented person might do better to go into government or something of the sort, even if that person thinks AI is more important in the long run.
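The expected-value comparison above can be sketched in a few lines of Python (a minimal illustration; the function name and structure are my own, not from the original):

```python
def expected_value(outcomes):
    """Sum of payoff * probability over all (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

# A guaranteed $50 versus a 10% shot at $300 (and 90% chance of nothing).
sure_thing = expected_value([(50, 1.0)])
gamble = expected_value([(300, 0.1), (0, 0.9)])

print(sure_thing)  # 50.0
print(gamble)      # 30.0
```

The same weighting applies to career choice: the raw impact of theory work may be higher, but it has to be multiplied by your probability of actually contributing there.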
So while it might look somewhat strange at first that not every LWer is working frantically on AI, if you look a little deeper, there’s actually a good reason. (And then there’s also scope insensitivity, hyperbolic discounting, and all that good stuff ChaosMote brought up.) In a sense, you answered your own question when you asked your second.