Yes, agreed. I’d say the three ways we’ve gotten unlucky are the intractability of NNs, the relative ease of training ASI leading to shorter timelines, and, biggest of all, that so many people find AI risk inherently implausible, even people who are fixated on building AGI.