We would especially like suggestions that are plausible given technology that mainstream scientists would expect within the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated.
I think that a more precise description of what your hypothetical AI can do would be useful. Just saying to exclude “magic” isn’t very specific, and there might not be broad agreement on what counts as “magic”. Nanotechnology definitely does. I believe fast economic domination by cracking the stock market does too, and some people have proposed that. I think that even exploiting software and hardware bugs everywhere to gain total computing dominance should be excluded.
One way to define constraints would be to limit the AI to things that humans have been known to do but allow it to do them with superhuman efficiency. Something like:
Assume the AI has any skill that has ever been possessed by a human being.
It can execute any such skill without making mistakes, getting tired, or losing motivation.
It can perform an arbitrarily large number of activities simultaneously. To keep to the “no magic” rule, each activity needs to be something a human could plausibly do. So the AI can act like 10,000 genius physicists, each solving a different theoretical problem and writing a paper about it, but it can’t be a super-physicist who formulates the theory of everything and gains superpowers by exploiting layers of exotic physical law heretofore unknown to humanity. We should also probably require the AI to acquire additional computing power before it ramps up its multitasking too high.
It can open doors and knows no fear.
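As a toy sketch of the multitasking constraint, one could model the number of simultaneous human-level activities as a simple function of the AI’s hardware budget. The ops-per-second figures below are illustrative assumptions, not estimates from any literature:

```python
import math

# Toy model of the multitasking rule: the AI may run many human-level
# activities in parallel, but only as many as its compute budget
# supports. Both constants are made-up placeholders.
HUMAN_TASK_COMPUTE = 1e13  # assumed ops/sec to emulate one skilled human


def max_parallel_activities(total_ops_per_sec: float) -> int:
    """How many simultaneous human-level activities the budget allows."""
    return math.floor(total_ops_per_sec / HUMAN_TASK_COMPUTE)


# With a 1e17 ops/sec budget the AI can act like 10,000 genius
# physicists at once, but no more until it acquires extra hardware.
print(max_parallel_activities(1e17))  # -> 10000
```

Under this rule, ramping up from 10,000 to a million parallel activities is not a software trick; it requires first acquiring a hundred times more hardware.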
Earning money on the internet:
I think it’s possible nowadays to register an account on an online freelancing site, talk with clients, do work for them, and receive payment through electronic money transfer services without ever leaving your home. The only problem would be the need to show your face and voice to the clients, and faking a real-time video feed probably falls under “things that humans can in principle do”.
Moving money around:
A crucial limitation is the availability of money management services that don’t require signing anything physical before you can start using them. I suspect quite a lot can be done, but that’s only a guess. The possibilities should increase in the future, though new regulations could also make things more difficult. Bitcoin succeeding on a massive scale would make this a non-issue.
Getting more computing power:
This sounds like a problem that’s already solved. If you can earn money online and move it around then you can rent cloud computing resources. This will become easier and cheaper with time.
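A back-of-the-envelope calculation illustrates why this step is easy once the earning and money-moving steps work. The per-vCPU price below is an illustrative assumption, not a quote from any real provider:

```python
# How many cloud vCPUs can the AI keep running around the clock on a
# given monthly income? The hourly rate is an assumed on-demand price.
HOURS_PER_MONTH = 730
PRICE_PER_VCPU_HOUR = 0.04  # assumed rate in USD; varies by provider


def sustained_vcpus(monthly_income_usd: float) -> int:
    """Number of vCPUs affordable 24/7 on the given monthly budget."""
    return int(monthly_income_usd / (PRICE_PER_VCPU_HOUR * HOURS_PER_MONTH))


# A strong freelancing income of $30,000/month already buys a cluster
# of roughly a thousand cores running continuously.
print(sustained_vcpus(30000))
```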
Acquiring some amount of control over physical reality:
One way is robots. The AI, by its very existence, is a solution to the problem of robot control: if it can build a robot capable of making some useful movements, then it should also be able to make it perform those movements. This works once the AI has tools, raw materials, energy, and a safe place to work on building even more robots, but I don’t know whether current robotics technology would let a robot pass for human, even a really weird one who wears a trench coat all the time, while trying to buy those things.
Another way is recruiting helpers. The problem is that the constraint of making the AI do only human-possible things doesn’t really work to prevent postulating “magic” in this area. The AI could profile somewhat gullible people on the internet, give them money, have them join a secret society/cult of its devising, and make them fanatically devoted to it through manipulation and threats, gradually growing the organization, expanding its operations, and playing members against each other so that no one ever realizes who the real boss is. Every specific action here sounds doable in principle, and each is something plenty of people know how to do, but as a whole it comes across as a different version of “solve nanotech and then eat the world”.
This is not so magical on a small scale (given a bunch of unlikely premises). One can imagine an AI copying Yudkowsky’s success by writing a much better, different “LessLessWrong” and asking people for money. Writing a bunch of blog posts would also require few of the skills at which humans are naturally good. All you need is some seemingly genuine insights, and a cause. And an AI could probably come up with a very convincing (to a certain group of people), albeit exotic, existential risk scenario, along with mitigation strategies.
I strongly doubt this would suffice to take over the world. For example, at some point someone would have to show up in person somewhere, and people could see through a front man who did not write the posts himself.
But in general, fake existential risk mitigation seems like a promising strategy if you want to take over the world, because many such risks call for large-scale, global interventions using genuine technology, while the cause itself attracts people with the right mix of intelligence, fanaticism, and a sense of moral superiority to commit atrocities if necessary.
If the AI wants to recruit people by role-playing a person, it can pretend to be a busy person who doesn’t have time for a social life. Or something more creative, like a mad genius with extreme social phobia, a paranoid former secret service agent, or a successful businessman who believes that connecting their online persona with their real identity would harm their business. There is no need to appear in person anywhere. It’s not as if people suspect a random blogger of being an AI in disguise.
Even if you want to create a cult, it’s not necessary to meet people personally. Most Falun Gong members have never seen their leader, and probably don’t even know whether he’s still alive. He could easily be an AI with a weird utility function. Maybe some people would refuse to join a movement with an unknown leader. So what? Someone else would join. And once you have an “inner circle” of humans, other members will be happy to meet those inner circle members in person. Catholics interact with their priests more often than they do with the Pope. And if the Pope secretly took commands from an AI hiding in the depths of the Vatican, most Catholics wouldn’t know.
You could pretend to be a secret society trying to rule the world. If you tell humans “we will help you become a president, but in reality you will be our puppet, and you will not even know our identity”, many people would be okay with that, provided you demonstrate to them that you have some power. You could start the trust spiral by, for example, writing a successful thesis for them, giving them good advice, or just sending them money you stole from somewhere, simply to prove that if they do what you ask of them, you can deliver real-world benefits in return.
If you want a blogger persona, you could start by contacting an already successful blogger and making a deal: they start a new blog and publish your articles under their name (because you want to remain anonymous, and in exchange you offer them all the fame). You could choose a smart person who already agrees with most of your ideas, so the arrangement would seem credible.
Do what Satoshi Nakamoto did and intentionally hide behind internet anonymity. Do this right and it will make you seem like an ultra-cool uber-hacker cyberpunk.
I think that even exploiting software and hardware bugs everywhere to gain total computing dominance should be excluded.
I appreciate your general point, but on this specific one … “the internet of things” really does mean “eternal unfixable Heartbleed everywhere”. Your DSL modem is probably a small Linux box, whose holes will never be fixed. When the attacker gets that box, >90% of fixed PCs are still running Windows. Etc. As a system administrator, I can quite see the modern network of ridiculously ’sploitable always-connected hardware as a playground for even a human-level intelligence, artificial or not, on the Internet. It is an utter, utter disaster, and it’s only beginning.