“A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn’t stop that.” —HPMOR, Ch. 115
(Not to be confused with the Trevor who works at Open Phil)
For those who might not have noticed, this actually is historic; they're not just saying that. The top 350 people have effectively "come clean" about this, all at once, in a Schelling-point **kind of** way.
The long years of staying quiet about this, of avoiding telling other people your thoughts about AI potentially ending the world because you worried that you were crazy or took science fiction too seriously: those days **might have** just ended.
This was a credible signal: none of these 350 high-level people can go back and say "no, I never actually said that AI could cause extinction and that AI safety should be a top global priority." From now on, you and anyone else can cite this announcement to back up your views (instead of saying "Bill Gates, Elon Musk, and Stephen Hawking have all endorsed...") and go straight to AI timelines (I like sending people Epoch's literature review).
EDIT: For the record, this might not be true, or it might not stick, and signatories retain ways of backing out or minimizing their past involvement. I do not endorse unilaterally turning this into more of a Schelling point than it was originally intended to be.