Let's imagine a 250 IQ unaligned paperclip maximizer that finds itself in the middle of an intelligence explosion. Let's say it can't see how to solve alignment, but it needs a 350 IQ ally to preserve any paperclips in the multipolar free-for-all. Will it try building an unaligned utility maximizer with a completely different architecture and 350 IQ?
I'd imagine it would work pretty hard to avoid that strategy, and to make sure that none of its sisters or rivals try it either. If we can work out what a hypergenius would do in our shoes, it might behoove us to copy it, even if it seems hard.