That would have an effect on me if I thought you were a superintelligence… but I doubt that you are (no offense intended), or that you could significantly influence one in a way that brings it much closer to your worldview. If enough AI researchers said the same, and I thought they were likely to succeed with alignment, I might be more inclined to be influenced. Do you concern yourself with the possibility that there might be an infinite hierarchy of enforcers, each precommitted to punish those below it, with a ‘basilisk’ sitting at every level simultaneously, or at least at the even-numbered ones?
No, because I expect the most powerful cooperator networks to be more powerful than the largest defector networks for structural reasons.
Thanks for saying that; it makes me feel slightly better. Can you explain what those structural reasons are?
“Cooperate to generally prevent utility-inversion” is simpler and more of a Schelling point than all the oddly specific reasons one might want to utility-invert.
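Here's a toy sketch of what I mean, purely my own illustration rather than a model anyone in this thread proposed (the agent counts, norm counts, and salience probability are all made-up assumptions): if agents coordinate on whatever norm is most salient, a single simple anti-utility-inversion norm gathers one big coalition, while would-be punishers are split across many oddly specific punishment schemes.

```python
import random

# Toy illustration (assumed numbers, not from the discussion): one simple,
# salient cooperative norm vs. many idiosyncratic punishment norms.
random.seed(0)

N_AGENTS = 10_000
N_IDIOSYNCRATIC_NORMS = 500   # assumed: many oddly specific reasons to punish
P_PICK_SIMPLE_NORM = 0.5      # assumed: the simple norm is the most salient option

coalitions = {}
for _ in range(N_AGENTS):
    if random.random() < P_PICK_SIMPLE_NORM:
        norm = "prevent-utility-inversion"
    else:
        # punishers scatter across many mutually incompatible schemes
        norm = f"idiosyncratic-{random.randrange(N_IDIOSYNCRATIC_NORMS)}"
    coalitions[norm] = coalitions.get(norm, 0) + 1

largest_defector = max(v for k, v in coalitions.items()
                       if k != "prevent-utility-inversion")
print("cooperator coalition size:", coalitions["prevent-utility-inversion"])
print("largest defector coalition size:", largest_defector)
```

The point isn't the specific numbers; it's that the simple norm only has to be somewhat more salient than any one idiosyncratic scheme for the cooperator network to dwarf the largest defector network.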
I agree, but I worry that, in the slice of possible worlds which contains our physical universe, there won’t be many agents that weren’t created by a process which makes basiliskoid minds disproportionately probable. In other words, I mostly agree with the Acausal normalcy idea, but certain idiosyncratic features of our situation, namely that humans are producing potentially the only ASI in this physical universe, seem to mean that things like the basilisk are still a concern.
Maybe there will be an acausal ‘bubble’ within which blackmail can take place, kind of like the way humans tend to find it morally acceptable for some animals to prey on others because we treat the ‘ecosystem’ as a moral bubble.