Existing AI systems are very good at winning war-like strategy games such as chess and Go, and have already reached superhuman performance in them. Military strategic planning and geopolitics could be seen as such a game, and an AI able to win it seems imaginable even with current capabilities.
I also agree that a self-improving AI may choose not to create its next version because of the difficulty of solving the alignment problem at the new level. In that case it would choose an evolutionary development path, which means slower capability gain. I wrote a draft of a paper about levels of self-improvement, where I look at such obstacles in detail. If you are interested, I could share it with you.
AI is good at well-defined strategy games, but (so far) bad at understanding and integrating real-world constraints. I suspect that there are already significant efforts to use narrow AI to help humans with strategic planning, but that these remain secret. For an AGI to defeat that sort of human-computer combination would require considerably superhuman capabilities, which means without an intelligence explosion it would take a great deal of time and resources.
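The dependence on a fully specified rule set is visible even in the simplest game-search code. A toy minimax sketch (not any particular system's algorithm) only works because every legal move can be enumerated in advance, which real-world strategic planning does not allow:

```python
def minimax(node, maximizing=True):
    """Exhaustive search over a fully specified game tree.
    A node is either a number (a terminal payoff) or a list of
    child nodes. The search is only possible because every move
    is enumerable in advance, exactly what real-world planning lacks."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Tiny two-ply game: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # branch values are min(3,5)=3 and min(2,9)=2, so 3
```

Real-world constraints break both assumptions at once: there is no terminal payoff function and no enumerable move list to recurse over.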
If an AI is able to use humans as an outsourced form of intuition, as in Mechanical Turk, it may be able to play such games with much less intelligence of its own.
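A minimal sketch of what such outsourcing could look like. Everything here is hypothetical: `human_evaluate` stands in for some crowdsourcing call (imagine a Mechanical Turk task asking workers to rate a position), stubbed with a random number so the sketch runs:

```python
import random

def human_evaluate(position):
    """Stand-in for an outsourced human judgment, e.g. a crowdsourcing
    task asking workers to rate a position from -1 to 1. Stubbed with
    a random number here so the example is runnable."""
    return random.uniform(-1, 1)

def choose_move(position, legal_moves, apply_move):
    """The machine only enumerates the legal options; judging how good
    each resulting position is gets outsourced to humans."""
    scored = [(human_evaluate(apply_move(position, m)), m) for m in legal_moves]
    return max(scored)[1]

# Usage with a dummy game where a move is just appended to the state.
best = choose_move((), ["advance", "retreat"], lambda p, m: p + (m,))
print(best)
```

The division of labor is the point: the enumeration loop needs almost no intelligence, while all the evaluation quality comes from the human oracle.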
Such a game may resemble Trump's election campaign, in which cyberweapons, fake news, and internet memes were allegedly coordinated by some algorithm. There has been some speculation about this: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine
We already see superhuman performance in war-simulating games, but nothing like it in AI self-improvement.
Mildly superhuman capabilities may be reached without an intelligence explosion, through the gradual accumulation of hardware, training, and knowledge.
When I read “Cambridge Analytica isn’t the only company that could pull this off—but it is the most powerful right now.” I immediately think “citation needed”.
Eric Schmidt funded multiple companies to provide technology to get Hillary elected.
There are many programs which play Go, but only one currently with superhuman performance.
On the Go side, the program with the superhuman performance is run by Eric Schmidt’s company.
What makes you think that Eric Schmidt’s people aren’t the best in the other domain as well?
The fact that H lost?
But I don't want to derail the discussion of AI's possible future decisive advantage into a conspiracy-flavored debate about past elections, which I mentioned only as a possible example of a strategic game, not as a fact proving that such an AI actually exists.
That argument feels circular. You believe that Trump won because of a powerful computer model simply because Trump won and he was supported by a computer model.
On the one hand, you have a tech billionaire who's gathering top programmers to fight. On the other hand, you have a company that has to be told by the daughter of that tech billionaire what software it should use.
Whose press person said they worked for the Leave campaign, and whose CEO is currently on record as never having worked for the Leave campaign, neither paid nor unpaid.
From a NYTimes article:

But Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates. The campaign stopped using Cambridge’s data entirely after the South Carolina primary.
There's a lot of irony in the fact that Cambridge Analytica seems better at spinning untargeted stories about its amazing powers of political manipulation than it is at actually helping political campaigns.
I just saw on scout.ai’s about page that they see themselves as being in the science fiction business. Maybe I should be less hard on them.
I want to underline again that the fact that I discuss a possibility doesn't mean that I believe in it. Winning is evidence of intelligent power, but given the prior set by its previous failures, it may not be strong evidence.
Geopolitical forecasting requires you to build a good model of the conflict you care about. Once you have a model, you can feed it into a computer, as Bruce Bueno de Mesquita does, and the computer might do better at calculating the optimal move. I don't think that currently existing AI systems are up to the task of modeling a complicated geopolitical event.
I also don't think it is currently possible to model geopolitics in full, but if some smaller yet effective model of it were created by humans, it could be used by an AI.
Bruce Bueno de Mesquita seems to be of the opinion that even 20 years ago computer models outperformed humans once the modeling was finished, but the modeling itself seems crucial.
In his 2008 book, he argues that the best move for Israel/Palestine would be a treaty requiring the two countries to share tourism revenue with each other. That's not the kind of move that an AI like DeepMind's would produce without a human coming up with it beforehand.
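His published approach reduces a conflict to actors with positions, capabilities, and salience; once a human has done that reduction, the computation itself is mechanical. The capability- and salience-weighted mean below is a textbook caricature of that family of models, not his actual algorithm, and the numbers are made up:

```python
def forecast_outcome(actors):
    """Predict a policy outcome as the capability- and salience-weighted
    mean of actor positions on a 0-100 issue continuum.
    actors: list of (position, capability, salience) triples."""
    total_weight = sum(c * s for _, c, s in actors)
    return sum(p * c * s for p, c, s in actors) / total_weight

# Hypothetical stakeholders on a single issue (illustrative numbers only).
actors = [(20, 0.9, 1.0), (80, 0.6, 0.8), (50, 0.3, 0.5)]
print(round(forecast_outcome(actors), 1))  # 41.8
```

A real model of this family would iterate rounds of pairwise bargaining between actors; the point is only that the hard part is the human-supplied reduction of the conflict to numbers, not the arithmetic that follows.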
So it looks like if the job of model creation could be at least partly automated, it would give a strategic advantage in business, politics, and military planning.
Not with current capabilities. For one thing, the set of possible moves is undefined, or very, very large.