Even if AI capabilities stalled, I would still be at the very least uncertain about whether there will still be free and fair elections in 2028.
In any case, I expect substantial organizational continuity to persist in the military-industrial complex in particular, comparable to that in major AI companies.
I expect AI CEOs to be somewhat less likely to be malevolent, and much less likely to be ideological fanatics, than politicians and military officials.
Uncensored models being available only to a self-selected elite, with the rest of us getting whatever those elites decide to release after censorship, is more dangerous than giving uncensored models to everyone. AI gatekeeping in the guise of "safety" is going to lead to tangible, immediate harms.
Not only do I have shorter ASI timelines, I think the AI capabilities required for authoritarianism are lower than ASI, already exist to some extent, and will be far more advanced by 2028.