I think that there is tremendous risk from an AI that can beat the world in narrow fields, like finance or war. We might hope to outwit the narrow capability set of a war-planner or equities trader, but if such a super-optimizer works within accepted frameworks like a national military or a hedge fund, it may be impossible to stop it before it's too late; world civilization could then be disrupted enough that the AI or its master can gain control beyond these narrow spheres.
Please spell out why the financial AGI is so threatening?
If an equities-trading AI were to gather a significant proportion of the world's wealth—not just the billions gained by hedge funds today, but trillions and more—that would give significant power to its masters. Not total paperclip-the-world power, not even World Emperor power, but enough power to potentially leverage in dangerous new directions, not least of which is research into even more general and powerful AI.
Errhh… no, it would discredit either electronic trading, or finance full stop. The world would shut down the stock markets permanently rather than tolerate that level of dominance from a single actor. No kidding: the most likely result of this is a world where, if you want to buy stock, you bloody well show up in person on the trading floor and sign physical paperwork. In blood, so they can check you are not a robot.
So, the logic here is not really different from that of an AI that ran a shipping conglomerate of self-driving cars, trains, boats, and planes? Just a business that makes money which the AI can use.
The trader AI would concern me, though I guess I would be even more concerned about the shipping conglomerate, because it knows how to interact with the physical world effectively.
To make it a bit clearer: a financial AI that somehow never developed the ability to do anything beyond buy and sell orders could still have catastrophic effects, if it hyperoptimized its trading to the point that it gained some very large percentage of the world's assets. This would have disruptive effects on the economy, and depending on the AI's goals, that disruption alone would not stop it from hoovering up every asset.
Note that this relies on this one AI being much better than the competition, so similar considerations apply to the usual case of a more general AI suddenly becoming very powerful. One difference is that an intelligence explosion in this case would proceed via investing money to hire more labor, rather than via the AI itself laboring.