Several aspects of the AI 2027 scenario make fire alarms less likely there than in the future I consider probable. For example, it has only 2 projects that matter, whereas I expect more like 3 to 6 such projects.
I agree about the plurality of projects. AI 2027 has an American national project and a Chinese national project, whereas at present both countries have multiple companies competing with each other.
AI 2027 also has the two national AIs do a secret deal with each other. My own thought about superintelligence does treat it as a winner-take-all race, so “deals” don’t have the same meaning as in situations where the parties actually have something to offer each other. There’s really only room for pledges or promises, of the form “If I achieve superintelligence first, I promise that I will use my power in the service of these goals or values”.
So my own model of the future has been that there will be more than two contenders, and that one of them will cross the threshold of superintelligence first and become the ultimate power on Earth. At that point it won’t need anyone or anything else; all other ex-contenders will simply be at the winner’s mercy.
A full-fledged arms race would involve the racers acting as if the second-place finisher suffers total defeat.
I don't get the impression that AI companies currently act like that. They seem to act as if first place is worth trillions of dollars, but also as if employees at the second- and third-place "finishers" will each likely get something like a billion dollars.
Even before AI, my impression of big tech companies like Microsoft, Amazon, and Google was that they are quite willing to form cooperative relationships, but have no compunction about attempting to become dominant in every area they can. If any of them could become an ultimate all-pervasive monopoly, it would. Is there anything in the behavior of the AI companies that looks different?
I expect deals between AIs to make sense at the stage that AI 2027 describes because the AIs will be uncertain what will happen if they fight.
If AI developers expected winner-take-all results, I’d expect them to be publishing less about their newest techniques, and complaining more about their competitors’ inadequate safety practices.
Beyond that, I get a fairly clear vibe that’s closer to “this is a fascinating engineering challenge” than to “this is a military conflict”.