Here I propose the idea that creating AIs that perform human mimicry would result in capabilities and outcomes similar to those of an AI built with iterated amplification. However, it may provide a greater degree of flexibility than any hard-coded iterated amplification scheme, which might make it preferable. I don’t know if this has been brought up before, but I would be interested in what people think, or in links to previous discussion.
The basic idea of iterated amplification is to create a slow, powerful reasoning system, then train a faster system that approximates this slow, powerful one, and repeat this process indefinitely. The main way of producing the slow procedure that I’ve seen proposed is HCH, which involves emulating a large number of humans interacting with each other to come up with an ideal output.
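To make the loop concrete, here is a toy sketch of the amplify-then-distill cycle described above. The names `amplify`, `distill`, and `iterated_amplification` are my own placeholders, not a real API: "amplification" is stubbed as composing repeated calls to the fast model, and "distillation" is stubbed as returning the amplified policy directly rather than actually training an approximation.

```python
# Toy sketch of the iterated-amplification loop, with hypothetical stubs.

def amplify(model, depth=3):
    """Build a slow, stronger reasoner by composing calls to the fast model."""
    def slow_model(x):
        for _ in range(depth):
            x = model(x)  # stand-in for decomposing and delegating subquestions
        return x
    return slow_model

def distill(slow_model):
    """Train a fast approximation of the slow model (stubbed here)."""
    return slow_model  # in practice: fit a new fast model to slow_model's outputs

def iterated_amplification(model, rounds=2):
    """Repeatedly amplify the current model, then distill the result."""
    for _ in range(rounds):
        model = distill(amplify(model))
    return model

# Toy "model": nudges a number halfway toward a target of 10.
fast = lambda x: x + (10 - x) * 0.5
stronger = iterated_amplification(fast, rounds=2)
print(round(stronger(0.0), 3))  # closer to 10 than any single call to `fast`
```

The point of the sketch is only the control flow: each round wraps the previous model in a slower, more capable procedure, then replaces it with a (here fake) fast approximation.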
But suppose you instead had a bunch of human imitators without any fixed use of iterated amplification.
AI is valuable, so human researchers do AI research. Since human imitators do the same thing as humans, they could also create more powerful AI systems.
One way for them to do this is with explicit iterated amplification. If the human mimics see this as the most effective way to increase their capabilities, they could simply perform iterated amplification on their own by reading about it, rather than needing it to be hard-coded.
However, iterated amplification is not necessarily the most efficient way to create a more powerful AI, and human mimics would have the flexibility to choose other techniques. Current AI researchers don’t usually try to increase AI capabilities through iterated amplification, but instead by coming up with new algorithms. So perhaps for the human mimics, increasing capabilities by some method other than iterated amplification, as actual AI researchers do, would be more effective.
For example, suppose the human-mimicking AIs see that they could use a more powerful pathfinding algorithm than the messy one implicitly implemented in the learned model of a human mimic. They could then use their human-level intelligence to program a custom, efficient pathfinding algorithm and modify themselves to make use of it.
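As an illustration of the kind of explicit routine the text imagines a mimic writing for itself, here is a minimal grid pathfinder (breadth-first search); the grid layout and function name are invented for the example, and a real system would of course use something more sophisticated.

```python
# Minimal explicit pathfinder: BFS on a grid, where '#' marks a wall.
# This is just an illustration of a cheap, exact algorithm replacing an
# implicit, approximate one inside a learned model.
from collections import deque

def shortest_path_length(grid, start, goal):
    """Return the number of steps from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None

grid = ["....",
        ".##.",
        "...."]
print(shortest_path_length(grid, (0, 0), (2, 3)))
```

Unlike a learned model's implicit pathfinding, this routine is exact, auditable, and cheap to run, which is why delegating such subtasks to hand-written code can be attractive.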
There are of course safety concerns about making incorrect modifications. However, there are similar safety concerns when making incorrect fast approximations to a slow, powerful reasoning process, as in iterated amplification. I don’t see why using pure human mimicry would be more dangerous.
And it could potentially be less dangerous. Iterated amplification may cause problems if the fast approximations are not sufficiently faithful to the slow processes. If iterated amplification is repeated, the approximation error has the potential to grow exponentially. A human mimic, however, could use its best judgment to decide whether it would be safer to increase its capabilities through iterated amplification or through some other method.
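The exponential-growth worry can be made concrete with a back-of-the-envelope model: if each distillation round multiplies the accumulated approximation error by (1 + ε), then after n rounds the error is scaled by (1 + ε)^n. The per-round error value below is an arbitrary assumption for illustration, not a claim about real systems.

```python
# Illustrative compounding of per-round distillation error, assuming a
# hypothetical 5% multiplicative error per round.
eps = 0.05   # assumed per-round relative error
error = 1.0  # normalized error before any rounds
rounds = 20
for _ in range(rounds):
    error *= (1 + eps)
print(round(error, 2))  # error after 20 rounds, relative to the starting error
```

Even a modest per-round error more than doubles after twenty rounds under this model, which is the sense in which repeated distillation can amplify unfaithfulness.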