In this case, and tragically in most cases, I don’t think doing the real thing in a video game (one that people would actually play) is possible.
Common obstacles to making philosophical games stem from the fact that one can’t put a whole person inside a game. We don’t have human-level AI that we could put inside a video game (and even if we did, doing so would be to some extent immoral, though you could make sure the experiential component corresponding to the NPC is very small, e.g., by making the bulk of the experience that of an actor, performing). And you also can’t rely on human players to roleplay correctly: we can’t temporarily override their beliefs and desires with those of their character, even when they wish we could.
So if we want to make games about people, we have to cheat.
Zachtronics games do part of that. In those games the player doesn’t do the tasks directly; instead they program a bot (or other systems) to do them. While mirroring the player is impossible, it should be possible to mirror the bots the player programs.
Interesting! Is there a way to limit the player’s agency such that, within the rules of the game, the mirroring mechanic would be effectively true?
Sure. I’m not sure how we want to represent the prisoner’s dilemma; there might be ways of making it more immersive/natural than this (natural instances of prisoner’s dilemmas might look more like the shout-“friendly”-or-attack-from-behind choice that every player faces when first meeting another player in ARC Raiders). But the basic way you can do it is: you make your decision in a private UI, you can’t reveal it until both players have committed, and then both decisions are revealed simultaneously. For agents who are acausally entrained, we fake it by just changing the alien’s decision to equal the player’s decision before the reveal.
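For concreteness, here’s a minimal sketch of that commit-and-reveal logic. Every name in it (PDRound, commit_player, the acausal-twin flag) is invented for illustration; the point is just that the twin’s decision is assigned, never simulated:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

COOPERATE, DEFECT = "cooperate", "defect"

@dataclass
class PDRound:
    """One round of the in-game dilemma with a sealed, simultaneous reveal.

    If the opponent is flagged as an acausal twin of the player, its
    decision is overwritten with the player's just before the reveal:
    the "fake" described above.
    """
    opponent_is_acausal_twin: bool
    _player: Optional[str] = None
    _opponent: Optional[str] = None

    def commit_player(self, decision: str) -> None:
        # Made in the private UI; stored but never shown before the reveal.
        self._player = decision

    def commit_opponent(self, decision: str) -> None:
        self._opponent = decision

    def reveal(self) -> Tuple[str, str]:
        assert self._player is not None, "player has not committed yet"
        if self.opponent_is_acausal_twin:
            # The cheat: perfect entrainment, by construction.
            self._opponent = self._player
        assert self._opponent is not None, "opponent has not committed yet"
        return self._player, self._opponent

round_ = PDRound(opponent_is_acausal_twin=True)
round_.commit_player(DEFECT)
print(round_.reveal())  # ('defect', 'defect'): you can't defect *on* your twin
```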
It’s devious, isn’t it? But it’s not as if we can be expected to read the player’s decision theory through the webcam and build a circuit into the game code that cooperates only when this specific human player would cooperate, while lacking (or ignoring) any knowledge of their actual decision output. In another way, it’s fine to cheat here, because the player isn’t even supposed to be roleplaying as themselves within the game. This has to be a world where agents can send verifiable signals about which decision-theory contract they implement, so it perhaps couldn’t take place in our world (I’d like to believe that it could: humans do have spooky tacit communication channels, and they certainly want to be trustworthy in this way, but humans also seem to be pretty good at lying, afaict).
Though hopefully our world will become more like that soon.
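As an aside, that “verifiable decision-theory contract” idea has one toy formalization: agents publish their source (or a hash of it) and cooperate exactly when the opponent’s published program matches their own. A hypothetical sketch, all names invented:

```python
def clone_cooperator(my_source: str, their_source: str) -> str:
    # Cooperate exactly when the opponent verifiably runs the same
    # program; "I implement this contract" becomes a checkable claim
    # rather than a promise.
    return "cooperate" if their_source == my_source else "defect"

SRC = "clone_cooperator-v1"  # stand-in for a hash of the published source
print(clone_cooperator(SRC, SRC))           # cooperate
print(clone_cooperator(SRC, "defect_bot"))  # defect
```

Humans can’t publish their source, which is the sense in which the game couldn’t be set in our world.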
Ooh, one way you could have it is that the human solves problems by programming a bot to solve them, a bit like in Shenzhen I/O, and the bot starts to meet lookalike bots that act according to the code you’ve written but pursue an opposing goal? And they’re on the other side of the mirror, so you can only change your own bot’s behavior.
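That would make the mirroring honest within the game’s rules. A sketch of what the engine’s side could look like, with all names (mirror_step, Observation, my_bot) hypothetical:

```python
from typing import Callable, Dict, Tuple

# The player writes a policy: observation in, action out.
# The game owns everything else, including the mirror.
Observation = Dict[str, object]
Policy = Callable[[Observation], str]

def mirror_step(player_code: Policy,
                player_obs: Observation,
                mirror_obs: Observation) -> Tuple[str, str]:
    """One tick of a mirror match.

    Both bots run the same player-written function; the lookalike just
    sees the level from the opposing side, with the opposing goal in its
    observation. The mirroring is true by construction: the only way to
    change the lookalike's behavior is to change your own bot's code.
    """
    return player_code(player_obs), player_code(mirror_obs)

# A toy player policy: rush the goal unless the other bot is adjacent,
# in which case yield.
def my_bot(obs: Observation) -> str:
    return "yield" if obs["opponent_adjacent"] else "advance"

print(mirror_step(
    my_bot,
    {"goal": "north_exit", "opponent_adjacent": True},
    {"goal": "south_exit", "opponent_adjacent": True},
))  # ('yield', 'yield'): whatever you teach your bot, you teach its twin
```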