Something that “feels” worth doing to me, even if timelines eventually make it irrelevant, would be starting an independent org to research/verify the claims of embryo selection companies. I think by default there’s going to be a lot of bullshit, and people are going to use that bullshit as an excuse to call for regulation or a shutdown. An independent org might also encourage people who were otherwise skeptical of this technology to use it.
I see how it “feels” worth doing, but I don’t think that intuition survives analysis.
Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation’s capabilities are irrelevant, and if we get it right, they’re still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.
So please help with alignment instead? This doesn’t just need technical work; it needs broad thinkers and those good at starting orgs and spreading ideas, too.
I think we’ve been in a mindset where you can’t contribute to alignment unless you’re a genius or highly technically skilled. I think it’s become clear that organization, outreach, and communication also improve our odds nontrivially.
I mean I agree, but I just think this is insufficient worldview hedging. What if AGI takes another sixty years?
That’s like a 1% chance. It seems far more likely that insufficient effort on alignment will have us all dead long before then.
There’s vastly too little effort on alignment and too much on diversified good works in the world at this point. Verifying embryo selection claims may be another neglected area that rationalists would be particularly well placed to address, but however you run the numbers, the EV looks far higher for alignment and related AGI navigation issues.
I will be calling for regulation and shutdown, and I do not need to resort to pointing to bullshit to do it.