I do think that convincing the government to pause AI in a way that sacrifices $3,000 billion of economic value is easier than convincing it to directly spend $3,000 billion on AI safety.
Maybe spending $1 directly is about as hard as sacrificing $10-$100 of future economic value via preemptive regulation.[1]
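To make the back-of-envelope explicit (assuming that rough exchange rate holds): a pause that sacrifices $3,000 billion of economic value would be about as hard to win politically as $30-$300 billion of direct safety spending.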
But $0.1 billion of AI safety spending is so ridiculously little (1,000 times less than capabilities spending) that increasing it may still be the “easiest” thing to do. Of course, we should still push for regulation at the same time (it doesn’t hurt).
PS: What do you think of my open-letter idea for convincing the government to increase funding?
Maybe “future economic value” is too complicated. A simpler guesstimate would be “spending $1 is about as hard as sacrificing $10 of company valuations via regulation.”