Your arguments for non-intervention are more interesting. I had not seen the idea of non-intervention being a Schelling point, which seems novel.
It also applies to the AI risk debate. I’ve made the argument in that context before here on LW. I believe User:Dmytry started to champion it at some point.
Yes, I’ve seen it in that sort of context. It seems much less plausible that an AI would try to reach a Schelling point of that sort. It requires the AI to have a very human notion of intervention. While it is plausible that other evolved entities would have such a notion, figuring out how to get an AI to understand it could be extremely difficult.