Upvoted. I’ve long thought that Drexler’s work is a valuable contribution to the debate that hasn’t received enough attention so far, so it’s great to see that this has now been published.
I am very sympathetic to the main thrust of the argument – questioning the implicit assumption that powerful AI will come in the shape of one or more unified agents that optimise the outside world according to their goals. However, given our cluelessness and the vast range of possible scenarios (e.g. ems, strong forms of biological enhancement, merging of biological and artificial intelligence, brain-computer interfaces, etc.), I find it hard to justify a very high degree of confidence in Drexler’s model in particular.
I agree that establishing a cooperative mindset in the AI/ML community is very important. I’m less sure whether economic incentives or government policy are a realistic way to get there. Can you think of a precedent or example of such external incentives working in other areas?
Also, collaboration between the researchers who develop AI may be just one piece of the puzzle. You could still get military arms races between nations even if most researchers are collaborative. And if there are several AI systems, then we also need to ensure cooperation between these AIs themselves, which isn’t necessarily the same as cooperation between the researchers who build them.
What exactly do you think we need to specify in the Smoking Lesion?