Question: When is an AI considered to have taken over the world?
Because there is a hypothetical I am pondering, but I don’t know if it would be considered a world takeover or not, and I’m not even sure if it would be considered an AI or not.
Assume only 25% of humans want more spending on proposal A, and 75% of humans want more spending on proposal B.
The AI wants more spending on proposal A. As a result, more spending is put into proposal A.
For decisions like that in general, what the majority of people want doesn’t actually matter; the AI’s wants dictate the outcome. The AI also makes sure that there is always a substantial vocal minority of humans endorsing each decision.
However, the vast majority of people are not actually explicitly aware of the AI’s presence, because the AI works better when people aren’t aware of it. Anyone suggesting there is an AI controlling humans is dismissed by almost everyone as a crackpot, since the AI operates in such a distributed manner that there isn’t any one system or piece of software that can be pointed to as a controller. It therefore seems like there isn’t an AI in place at all, just a series of dumb components.
In a case like that, is an AI considered to have taken over the world, and is the system described above actually an AI?
“Control” in general is not particularly well defined as a yes/no proposition. You can likely rigorously define an agent’s control of a resource by finding the expected states of that resource, given various decisions made by the agent.
That kind of definition works for measuring how much control you have over your own body: given that you decide to raise your hand, how likely is your hand to actually go up, compared with when you decide not to raise it? People who are incapacitated or incarcerated have much less control over their bodies, which is pretty much what you’d expect out of a reasonable definition of control over resources.
This is still a very hand-wavy definition, but I hope it helps.
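The hand-raising example can be made slightly more concrete. A minimal sketch, where the function names and all the probabilities are made-up illustrations: estimate control as the difference between the outcome frequency when the agent decides to act and when it decides not to.

```python
import random

def estimate_control(act, n_trials=10_000):
    """Estimate an agent's control over an outcome as the difference
    in outcome frequency between deciding to act and deciding not to."""
    succeeded = sum(act(decide=True) for _ in range(n_trials)) / n_trials
    happened_anyway = sum(act(decide=False) for _ in range(n_trials)) / n_trials
    return succeeded - happened_anyway

# Hypothetical models of "raise your hand": a healthy person almost
# always succeeds when they try; a restrained person rarely does.
def healthy_arm(decide):
    return random.random() < (0.99 if decide else 0.01)

def restrained_arm(decide):
    return random.random() < (0.20 if decide else 0.01)

print(estimate_control(healthy_arm))     # close to 1: near-total control
print(estimate_control(restrained_arm))  # much lower: little control
```

Under this framing, "control" comes out as a number between −1 and 1 rather than a yes/no answer, which matches the point above that takeover is a matter of degree.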
An AI is considered to have taken over the world when it has total control. If it can divert the entire world’s production capabilities to making paperclips (even if it doesn’t), then it has taken over the world. If it can merely get a paperclip subsidy passed, that’s not taking over the world.