If an AI is developed, and run in such a way that it serves the interests of a select group of rich folk and no one else, then:
a) the friendliness problem has essentially been solved. That’s great! I don’t think it’s likely that this will happen, though.
b) the power of the AI will likely come primarily from research and inventions that will be sold to the general public, resulting in a general increase in welfare. If this is not the case, then either we have different definitions of AI, or the people using it are not very creative—in which case someone more creative will approach them with proposals worth so much money that they should take them. This is largely speculative, but I don’t think it’s particularly controversial.
c) the source code will get leaked, governments will require the group to turn over its results, or a significant conflict will erupt between this group and other global powers.
d) all of this would happen easily within 5 years, if not 1. Talking about a single AI existing for 20 to 30 years (if “single AI” even has meaning—our intelligence is highly modular) is nonsense, or reflects deep confusion about the definition of AI.
There is a huge amount of thinking on this topic by highly intelligent people. If you’re interested in updating on their beliefs, here is a link to the Hanson-Yudkowsky AI-Foom debate, which contains extensive discussion of possible futures that seem to me much more likely and sophisticated than yours, even if I don’t entirely agree with them.