My hope is that the minimum viable pivotal act requires only near-human-level AGI. For example: hacking competitors' training/inference clusters to fake an AI winter.
Aligning an AGI equivalent to a +2SD human seems more tractable than FOOMing straight to ASI safely.
One lab does it to buy time for actual safety work.
Unless things slow down massively, we probably die. An international agreement would be better, but seems unlikely.