I am sitting here crying as the last remaining bits of diplomatic goodwill, and with them any hope of internationally coordinated treaties to manage the AI takeoff, evaporate.
We can still hope that we won’t get AGI in the next couple of years. Society’s attitude towards AI is already negative, and we’re even seeing some congressmen openly discuss the existential risks. This growing awareness might yet lead to meaningful policy changes.
Is this the outcome you expect by default? Or is it just one of many unlikely scenarios (like Hanson’s ‘The Age of Em’) that are nonetheless worth considering?