> The government hasn’t a clue.

Well, clearly it doesn’t, if the reaction is “uhhh what do we do? do we do anything? hurr durr” instead of “holy fuck, we must nationalize/ban this before the AGI labs overthrow/kill us all”. Even if you completely dismiss the idea of AI taking over, the AGI companies are already literally saying they’re going to build God and upend the existing balance of power. The only model under which no decisive, immediate action is warranted is one where you don’t, in fact, appreciate the gravity of the situation.[1]
[1] At least, if we assume that the AGI labs’ statements are accurate and truthful, and that we are about to get AGI and then ASI. On that, I’m personally very skeptical. But I don’t think a reasonable person can be skeptical enough to conclude that said decisive action isn’t warranted: not placing at least 10% on ASI by 2028 seems very poorly calibrated, and that’s risk enough.
This is probably correct, but note that this is a report about the previous administration.
Normally there is a lot of continuity in institutional knowledge between administrations, but the current transition is an exception: the new admin has deliberately broken that continuity as much as it can, which is very unusual.
And with the new admin, it’s really difficult to say what they think. Vance publicly expresses an opinion worthy of Zuck, only more radical (gas pedal to the floor, forget about brakes). He simultaneously believes that 1) AI will be extremely powerful, so all this emphasis on it is justified, and 2) no safety measures at all are required, and we should accelerate as fast as possible (https://www.lesswrong.com/posts/qYPHryHTNiJ2y6Fhi/the-paris-ai-anti-safety-summit).
Perhaps he does not care about having a consistent world model, or perhaps he thinks something different from what he expresses publicly. But he does sound like the CEO of a particularly reckless AI lab.