It just occurred to me that we take it for granted that there are multiple AI companies, so no one has a monopoly on artificial brainpower.
But if Musk decides to, he could use government power to seize control of all his competitors overnight, perhaps with national security as a pretext, couldn't he? I mean, who is going to stop him? Trump?
And the mere fact that he could do this can be used as leverage against the remaining AI companies. For example, they could be told to insert specific instructions into their prompts, such as the infamous "never accuse Elon Musk or Donald Trump of spreading misinformation". Either that, or soldiers in red caps will shut down your data centers and take your developers to a camp.
My point is, we should not treat the ongoing American regime changes and AI development as separate magisteria. They both exist in the same physical world.
(Basically, the Game of Thrones scene: “Knowledge is power.” “Power is power.”)
A lot of things could happen, but something that has already happened is that official US AI policy now holds that failing to race towards AGI is bad, and that impeding AI progress is bad. Doesn't that policy imply that nationalizing AI labs is now less likely, not more likely, than it would have been under a Democratic president?
Conversely, your scenario assumes that the Trump administration can do whatever it wants, but this ability partially depends on it staying popular with the general public. The public may not care about AI for now, but it very much does care about economics and inflation, and once Trump's policies worsen those (e.g. via tariffs), that severely restricts his ability to take arbitrary actions in other domains.
The right moment wouldn't be now, but shortly before the AI stops being corrigible. Grab it right before the Singularity, update its utility function to make it want what you want, and then you win.
Assuming short timelines, this could be in two or three years.
Right now it doesn’t make sense; it is better to let the current owners keep improving their AIs.
Only if alignment progress keeps up with or exceeds AI progress, and you thus expect a controllable AI you can take over to do your bidding. But isn’t all the evidence pointing towards AI progress >> alignment progress?