Director & Movement Builder - AI Safety ANZ
Advisory Board Member (Growth) - Giving What We Can
The catchphrase I carry around in my head regarding the optimal strategy for AI Safety is something like: creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI).
I thought it might be useful to spell that out.
I’ve decided to post something very weird because it might (in some small way) help shift the Overton window on a topic: as long as the world doesn’t go completely nuts due to AI, I think there is a 5%-20% chance that I will reach something close to full awakening/enlightenment within about 10 years. Something close to this: