Perhaps a more useful prompt for you: suppose something indeed convinces the bulk of the population that AI existential risk is real, in a way that's as convincing as the use of nuclear weapons at the end of World War II. Presumably the government steps in with measures sufficient to constitute a pivotal act. What are those measures? What happens, physically, when some rogue actor tries to build an AGI? What happens, physically, when some rogue actor tries to build an AGI 20 or 40 years in the future, when algorithmic efficiency and Moore's law have lowered the requisite resources dramatically? How do those physical things happen? Who's involved, what specifically does each of the people involved do, and what ensures that they continue to actually do their job across several decades? What physical infrastructure do they need, where does that infrastructure come from, how much would it cost, and what maintenance would it need? What's the annual budget and headcount for this project?
And then, once you’ve thought through that, ask: what’s the minimum intervention required to make those same things physically happen when a rogue actor tries to build an AGI?
To be clear, I think we at Redwood (and people at spiritually similar places like the AI Futures Project) do think about this kind of question (though I’d quibble about the importance of some of the specific questions you mention here).