I will list—just for my own understanding—the non-goal-oriented types of agents.
1. Universal library. This is an agent that creates all significant solutions to all possible significant problems and then stops. An example is past biological evolution, which invented an enormous number of adaptations (flying solutions, proteins, etc.) and could be used as inspiration for technological progress. Past human history, or some unconscious processes in the brain such as dreaming, may be other possible examples.
2. Human-mimicking neural net—an example of an agent that mimics another agent.
3. Obviously, AI Oracles and AI Tools.
4. “Homeostatic” superintelligence. An example of such a system is an OS like Windows, which doesn’t do anything in a goal-directed sense but just supports processes. Most nation states also work this way (except ideologically driven ones like the USSR or Iran).
5. Drexler’s superintelligence as a sum of narrow services, e.g. Google’s web services.
6. Swarm intelligences that compete to solve a task. If one creates a prize for X, many people will compete to get it. The whole swarm is not a goal-oriented agent, while its elements are such agents. Scott’s Moloch is an example of such swarm behaviour going badly.
Thanks for doing this—it’s helpful for me as well. I have some questions/quibbles:
Isn’t #2 as goal-directed as the human it mimics, in all the relevant ways? If I learn that a certain machine runs a neural net that mimics Hitler, shouldn’t I worry that it will try to take over the world? Maybe I don’t get what you mean by “mimics.”
What exactly is the difference between an Oracle and a Tool? I thought an Oracle was a kind of Tool; I thought Tool was a catch-all category for everything that’s not a Sovereign or a Genie.
I’m skeptical of this notion of “homeostatic” superintelligence. It seems to me that nations like the USA are fully goal-directed in the relevant senses; they exhibit the basic AI drives, they are capable of things like the treacherous turn, etc. As for Windows, how is it an agent at all? What does it do? Allocate memory resources across currently-being-run programs? How does it do that—is there an explicit function that it follows to do the allocation (e.g. give all programs equal resources), or does it do something like consequentialist reasoning?
On #6, it seems to me that it might actually be correct to say that the swarm is an agent—it’s just that the swarm has different goals than each of its individual members. Maybe Moloch is an agent after all! On the other hand, something seems not quite right about this—what is Moloch’s utility function? Whatever it is, Moloch seems particularly uninterested in self-preservation, which makes it hard to think of it as an agent with normal-ish goals. (Argument: Suppose someone were to initiate a project that would, with high probability, kill Moloch forever in 100 years time. Suppose the project has no other effects, such that almost all humans think it’s a good idea. And everyone knows about it. All it would take to stop the project is a million people voting against it. Now, is there a sense in which Moloch would resist it or seek to undermine the project? It would maaaybe incentivize most people not to contribute to the project (tragedy of the commons!) but that’s it. So either Moloch isn’t an agent, or it’s an agent that doesn’t care about dying, or it’s an agent that doesn’t know it’s going to die, or it’s a very weak agent—can’t even stop one project!)
Something could exhibit goal-like behaviour to outside viewers without having the internal structure of an agent. For example, a brick falling to the ground—we could say that it is aimed at a specific point on the ground, but it is not an agent. In the same way, an infectious disease can take over the world without being an agent. Moreover, even humans are sometimes not agents.
In my opinion, an Oracle AI outputs only answers to questions, while a Tool AI can do other stuff, like continuous data-stream transformation or controlling mechanisms.
Nation states, the human body, and OSs are all good, even clever, at preserving a homeostatic state (except during a government shutdown), but they typically achieve it without high-level agential reasoning.
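To make the contrast concrete, here is a minimal sketch (my own illustration, not from the discussion) of a homeostatic controller in the sense meant above: it keeps a variable near a set point via a fixed feedback rule, with no goal representation, no search over outcomes, and no consequentialist reasoning anywhere in the loop.

```python
def homeostat(value: float, set_point: float, gain: float = 0.5) -> float:
    """One feedback step: nudge the value toward the set point.

    This is a fixed rule, not a plan; the system has no model of the
    world and no preferences over future states.
    """
    return value + gain * (set_point - value)


def run(value: float, set_point: float, steps: int) -> float:
    """Iterate the feedback rule; the value converges to the set point."""
    for _ in range(steps):
        value = homeostat(value, set_point)
    return value
```

An OS scheduler or a thermostat behaves like `run`: it reliably restores equilibrium, which can look goal-directed from outside, yet nothing in it represents or pursues a goal.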
A swarm of agents can exhibit behaviour different from the behaviour or goals of any individual agent.
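The prize dynamic from #6 can be sketched in a few lines (again my own toy illustration, with invented parameters): each competitor greedily improves only its own candidate solution, and no one optimizes on behalf of the swarm, yet the swarm as a whole reliably surfaces a best entry.

```python
import random


def compete(n_agents: int = 10, rounds: int = 50, seed: int = 0) -> float:
    """Simulate self-interested competitors chasing a prize.

    Each agent holds a solution quality in [0, 1] and keeps tinkering,
    adopting only its own improvements. The 'swarm-level' output is just
    the best entry, which no individual agent was aiming to produce for
    the group.
    """
    rng = random.Random(seed)
    candidates = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            attempt = candidates[i] + rng.uniform(-0.05, 0.1)
            # Self-interested rule: keep a change only if it helps *me*.
            candidates[i] = min(1.0, max(candidates[i], attempt))
    return max(candidates)  # the prize-giver simply takes the best entry
```

The point of the toy model is that the swarm's "behaviour" (producing the best solution) is not the goal of any member; each member's goal is winning, which is a different thing.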