Could enough human-imitating artificial agents (running much faster than people) prevent unfriendly AGI from being made?
I think the problem of scale doesn't necessarily get solved through quantity, because there are qualitative issues (e.g. loss of human life) that no amount of infrastructure scaling can compensate for.