I see how it “feels” worth doing, but I don’t think that intuition survives analysis.
Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation’s capabilities are irrelevant, and if we get it right, they’re still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.
So please help with alignment instead? This doesn’t just need technical work; it needs broad thinkers and those good at starting orgs and spreading ideas, too.
I think we’ve been in a mindset in which we can’t contribute to alignment unless we’re geniuses or technically skilled. I think it’s become clear that organization, outreach, and communication also improve our odds nontrivially.
I mean I agree, but I just think this is insufficient worldview hedging. What if AGI takes another sixty years?
That’s like a 1% chance. It seems far more likely that insufficient effort on alignment will have us all dead long before then.
There’s vastly too little effort on alignment and too much on diversified good works in the world at this point. That may be another neglected area that rationalists would be particularly likely to address, but it seems like however you do the math, the EV is going to be way higher on alignment and related AGI navigation issues.
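To make that math explicit (a toy sketch, where $p$, $V_{\text{short}}$, $V_{\text{long}}$, and $V_{\text{next-gen}}$ are placeholder symbols I’m introducing, not anyone’s actual estimates): if $p$ is the probability of long timelines, then roughly

\[
\mathrm{EV}(\text{marginal alignment work}) \approx (1-p)\,V_{\text{short}} + p\,V_{\text{long}},
\qquad
\mathrm{EV}(\text{next-generation projects}) \approx p\,V_{\text{next-gen}},
\]

since next-generation capabilities only matter in the long-timelines branch. At the ~1% figure above, the first expression dominates unless $V_{\text{next-gen}}$ is enormous.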