Moreover, this raises the question of what one would expect when multiple slowly expanding stealth AIs run into each other. It is likely that such encounters would have results catastrophic enough to be visible even with comparatively primitive telescopes.
Hi Joshua, nice post!
In general I consider the “stealth AI” scenario highly unlikely (I think an early filter is the best explanation). However, there is a loophole in that particular objection. I think it is plausible that a superintelligence that expects to encounter other superintelligences with significant probability will design some sort of physical cryptography system that allows it to provide the other superintelligence with strong evidence about its own “source code”, or at least about some of its decision-theoretic properties. By this means, the superintelligences could cooperate in the resulting prisoner’s dilemma, e.g. through a non-violent division of territory (the specific mode of cooperation would depend on the respective utility functions).
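To make the mechanism concrete, here is a minimal Python sketch in the spirit of “program equilibrium” games: it assumes the physical-cryptography step has already run, so each agent holds a verified copy of the other’s source code, and it uses syntactic equality as a stand-in for the richer decision-theoretic checks described above. All names here (`clique_bot`, the payoff table) are illustrative, not from the original comment.

```python
import inspect

COOPERATE, DEFECT = "C", "D"

def clique_bot(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent is verifiably running the same program.

    Comparing verified source texts sidesteps the halting problem: the
    agent never needs to simulate its opponent, only check equality.
    """
    return COOPERATE if opponent_source == my_source else DEFECT

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

# Stand-in for the "strong evidence about source code" channel.
source = inspect.getsource(clique_bot)
a = clique_bot(source, source)  # both players run the same program
b = clique_bot(source, source)
print(PAYOFFS[(a, b)])          # -> (3, 3): mutual cooperation
```

Against any agent running different code, `clique_bot` defects, which is what makes mutual cooperation stable rather than exploitable. A less brittle variant would verify only the decision-theoretic properties mentioned above (e.g. “this agent cooperates with cooperators”) instead of exact source equality, but that requires the verification channel to certify behavior, not just text.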