But we’ve gotten equally unlucky in the circumstances surrounding the push for AGI. Even if alignment is dead easy, we’re prone to get it wrong at this breakneck pace, with the focus on capabilities over alignment. Daniel K summed up the practical difficulties really well in a comment yesterday.
Is this true? I wasn’t on LessWrong back in the day, but I imagine that if you had told a random user the two major AI labs would both be well aware of the problem and trying to mitigate it, that would have been a positive update. And yes, profit incentives are stronger than perhaps would have been imagined, but that’s because AI progress is slow enough for these systems to become monetizable products, which is beneficial for our chances.
Yes, agreed. I’d say the three ways we’ve gotten unlucky are the intractability of NNs, the relative ease of training ASI leading to shorter timelines, and, biggest of all, that so many people find AI risk inherently implausible, even people who are fixated on building AGI.
I largely agree.