AGI Should Have Been a Dirty Word
Epistemic status: passing thought.
It is absolutely crazy that Mark Zuckerberg can say that smart glasses will unlock personal superintelligence or whatever incoherent nonsense and be taken seriously. That reflects poorly on AI safety’s comms capacities.
Bostrom’s book should have laid claim to superintelligence! It came out early enough that it should have been able to plant its flag and set the connotations of the term. It should have made it so Zuckerberg could not throw around the word so casually.
I would go further, and say that the early safety writing on AGI should have been enough that the labs were too scared to say publicly, in 2026, that they are trying to build AGI. Instead, it is commonly accepted in Silicon Valley that developing AGI is a shared goal.
I would still bet that the early safety writing has gone a long way toward establishing the credibility and influence of AI safety in the present. But there must have been something we could have done better.