Agent-level parallelism

Let’s suppose that an Em researcher runs 1000 times faster than the equivalent human brain. To this Em researcher, an experiment that takes a fixed amount of wall-clock time will seem to take 1000 times longer, so there is more waiting around. I still expect that a collection of Ems would be able to make progress much, much faster than the same number of human researchers: waiting for experiments to finish will not be nearly as much of a slowdown as one might naively expect. There are multiple reasons to believe this.

  1. You can still do useful work while waiting for experimental results to come in (e.g. refine your theory, plan the next experiment).

  2. Many interesting experiments probably don’t take very long to run.

  3. If the experiment is computational, you might be able to pause the Em and reassign the compute used for running the Em to running the experiment itself.

  4. You can switch between executing different agents, always trying to execute the one that can currently “have the most useful thoughts”.

This argument applies not just to Em researchers, but to a wide range of agents.

Doing (4) might be faster than having only one agent that context switches, at least if the agents are as bad at context switching as humans are. You can also see (4) as a form of context switching that could work well. Then you can naturally think of the whole system as a single agent.

Each of the agents could research a specific thing, and context switching between them might be a good way to make optimal use of the available computational resources. For humans, it seems to work relatively well to have specialists who focus on a narrow task. In principle, though, there is no reason why any subagent could not have access to all of the knowledge of all the other subagents.
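To make points (3) and (4) a bit more concrete, here is a minimal sketch in Python. The `Agent` class, the greedy usefulness estimate, and the shared knowledge store are all illustrative assumptions on my part, not a claim about how such a system would actually be built; the sketch only shows the shape of the scheduling idea.

```python
from dataclasses import dataclass

# Hypothetical sketch: specialist agents share one knowledge store, and a
# greedy scheduler always runs the agent whose next "thought" is estimated
# to be most useful (point 4). Agents blocked on experiments receive no
# compute; their compute could instead run the experiment itself (point 3).

@dataclass
class Agent:
    name: str
    knowledge: dict                       # shared by all subagents
    blocked_on_experiment: bool = False   # waiting for an external result?

    def estimated_usefulness(self) -> float:
        # Placeholder heuristic; a real system would need a far better estimate.
        return 0.0 if self.blocked_on_experiment else len(self.knowledge) + 1.0

    def think(self) -> None:
        # One unit of "thinking": here it just writes a note into shared knowledge.
        self.knowledge[f"{self.name}-note-{len(self.knowledge)}"] = "..."


def run_schedule(agents: list[Agent], steps: int) -> None:
    """Greedy scheduler: each step, give the compute to the agent that
    currently claims the most useful next thought."""
    for _ in range(steps):
        runnable = [a for a in agents if not a.blocked_on_experiment]
        if not runnable:
            # Everyone is waiting on experiments; the freed-up compute could
            # be reassigned to running the experiments themselves.
            break
        best = max(runnable, key=lambda a: a.estimated_usefulness())
        best.think()


shared_knowledge: dict = {}
agents = [Agent("theory", shared_knowledge), Agent("experiment-design", shared_knowledge)]
run_schedule(agents, steps=10)
```

Whether a greedy usefulness estimate like this would actually beat fixed specialization is exactly the kind of open question here; the sketch just makes the scheduling idea explicit.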

I was inspired by this exchange:

Francois Chollet: “Our brains themselves were never a significant bottleneck in the AI-design process.”

Eliezer Yudkowsky: “A startling assertion. …”