I have a concept for a simple thematic/experiential game demonstrating acausalism, which someone might want to make (I’m more of a gameplay systems designer than an artist, so I don’t make these kinds of games myself, but I think they’re still valuable):
The player faces a series of prisoner’s dilemmas against a number of aliens. One day, the alien in question is a perfect mirror image of the player: they look exactly like you, and they move however you move. It’s not clear that the alien is even sapient; it may just be some kind of body-mirroring device. Regardless, the decision you should make is still clear, and the player won’t be allowed to progress to the rest of the game until they realise they should cooperate when faced with a mirror (at which point the player basically understands acausalism). They then face various aliens who are imperfect mirrors of the player to varying degrees: they don’t move around like a mirror, but they look a lot like you; some of them reliably cooperate if and only if the player cooperates, and some of them only reciprocate with a high, or high enough, probability.
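To make the “imperfect mirror” idea concrete, here’s a minimal sketch of how those aliens could be parameterised. Everything here (the `Alien` class, the `reciprocation_prob` knob) is just illustrative, not part of any existing design:

```python
import random

class Alien:
    """An opponent whose cooperation is (imperfectly) entrained with the player's choice."""

    def __init__(self, name, reciprocation_prob):
        self.name = name
        # Probability that this alien's decision matches the player's.
        # 1.0 is a perfect mirror; 0.5 is effectively uncorrelated.
        self.reciprocation_prob = reciprocation_prob

    def decide(self, player_cooperates):
        # With probability reciprocation_prob, mirror the player; otherwise do the opposite.
        if random.random() < self.reciprocation_prob:
            return player_cooperates
        return not player_cooperates

perfect_mirror = Alien("mirror", 1.0)
distant_cousin = Alien("distant cousin", 0.8)
```

Whether cooperating pays off would then depend on whether the alien’s reciprocation probability is high enough relative to the payoff matrix.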
The player is then given access to lore about the brain ontogeny that determines whether an alien’s decisions will be entrained with yours, and they start to cooperate with increasingly alien-looking aliens: creatures with very different appearances and values from the player’s, but who are nonetheless linked souls, able to cooperate wherever there’s enough light for it.
The player also meets imposters, who pretend to be mirrors, or who pretend to be decent folk, but there are signs. There will be signatures missing from their identification signals. Records missing from church visitor books. A wrong smell.
One of the reasons I’m currently not expecting to make this myself is that (though this is probably neurotic) the mirroring mechanic is, in a way, fraudulent? We aren’t really modelling brain ontogeny or verifiable signals of FDT contractualism within the game, so common objections to acausal trade, like “you can’t actually tell the difference between an FDT agent and a CDT agent pretending to be an FDT agent”, feel very much unaddressed by the underlying systems. We haven’t actually implemented FDT agents, and even if we had, the player wouldn’t be able to tell whether they really were or not! A cynic would wonder about the underlying implementation, be disappointed with it, and say “but this is a lie, the AI is cheating, this strict entrainment between my choice and their choice couldn’t happen in the real world in this way”. The brain ontogeny lore and the discussion of imposters might address those concerns, but we can only get into that stuff later in the game :/ and by then they may have lost patience.
I dunno. Maybe there’s some way of letting the false-rationality cynics skip most of the tutorials, since they probably wouldn’t need the prisoner’s dilemma explained to them. Maybe even skip the mirror guys. And maybe there’d need to be a sequence about the profound condition of isolation that comes to those who think themselves bad, where they meet people who are bad in just the same way as them, and they realise that there are mirrors in the world even for them, that there are people who will move as they move, and then they discover ways of changing their nature (through the use of advanced brain-modification technology that we don’t have in our world; I’m not personally aware of a presently existing treatment for bad faith in humans), and they realise they have every incentive to do so.
Given that, I think this would work.
You could make a statistical model. Gather data on various past decisions of various players, and try to predict the player’s actions. In an iterated context, your decisions the previous times aren’t a perfect mirror of your decisions this time, but they are pretty close.
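A minimal sketch of what that predictor could look like, assuming we just track each player’s past cooperate/defect history (the function names and smoothing here are hypothetical, for illustration only):

```python
from collections import defaultdict

# Hypothetical per-player history of past prisoner's dilemma choices
# (True = cooperated, False = defected).
history = defaultdict(list)

def record_round(player_id, cooperated):
    history[player_id].append(cooperated)

def predict_cooperation(player_id, prior=0.5, prior_weight=2):
    """Estimate the probability that this player cooperates next round.

    Uses a smoothed cooperation rate over their past rounds; the prior keeps
    the estimate sensible when we have little data on them.
    """
    past = history[player_id]
    return (sum(past) + prior * prior_weight) / (len(past) + prior_weight)

# The "mirror" alien then cooperates iff the model expects the player to cooperate.
def mirror_decision(player_id, threshold=0.5):
    return predict_cooperation(player_id) >= threshold
```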
Interesting! Is there a way to limit the player’s agency such that, within the rules of the game, the mirroring mechanic would be effectively true?
Sure. I’m not sure how we want to represent the prisoner’s dilemma; there might be ways of making it more immersive/natural than this (natural instances of prisoner’s dilemmas might look more like the choice between shouting “friendly” or attacking from behind that every player faces when first meeting another player in ARC Raiders). But the basic way you can do it is: you make your decision in a private UI, you don’t/can’t reveal it until both players have made their decision, and then the decisions are revealed simultaneously. For agents who are acausally entrained, we fake it by just changing the alien’s decision to equal the player’s decision before the reveal.
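In pseudocode, assuming a round only resolves once both commitments are in (all names here are made up for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Round:
    player_choice: Optional[bool] = None  # True = cooperate, set from the private UI
    alien_choice: Optional[bool] = None
    alien_is_entrained: bool = False      # flagged for "acausally entrained" opponents

def commit_player(round, cooperates):
    round.player_choice = cooperates

def commit_alien(round, cooperates):
    round.alien_choice = cooperates

def reveal(round):
    # Nothing is shown until both sides have committed.
    assert round.player_choice is not None and round.alien_choice is not None
    if round.alien_is_entrained:
        # The trick: overwrite the alien's choice with the player's just before the reveal.
        round.alien_choice = round.player_choice
    return round.player_choice, round.alien_choice
```

From the player’s side this is indistinguishable from a genuinely entrained opponent, which is exactly the worry raised above.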
It’s devious, isn’t it? But it’s not as if we can be expected to read the player’s decision theory through the webcam and create a circuit within the game code that only cooperates when this specific human player would cooperate, despite lacking or ignoring any knowledge of their decision output. In another way, it’s fine to cheat here, because the player isn’t even supposed to be roleplaying as themselves within the game. This has to be a world where agents can send verifiable signals about which decision theory contract they implement, so it perhaps couldn’t take place in our world (I’d like to believe that it could; humans do have spooky tacit communication channels, and they certainly want to be trustworthy in this way, but humans also seem to be pretty good at lying, afaict).
Though hopefully our world will become more like that soon.
Ooh, one way you could have it is that the human is actually solving problems by programming a bot to solve them, a bit like in Shenzhen I/O, and the bot starts to meet lookalike bots that act according to the code you’ve written, but with an opposing goal? And they’re on the other side of the mirror, so you can only change your own bot’s behavior.
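A sketch of that mirroring, assuming the player’s program boils down to a decision function we can instantiate on both sides (the function names are illustrative):

```python
def run_encounter(player_policy, player_history, opponent_history):
    """Run one prisoner's dilemma between the player's bot and its mirror.

    player_policy is the function the player wrote: it sees its own history
    and the opponent's history and returns True to cooperate. The lookalike
    bot runs the *same* code with the perspectives swapped, so it genuinely
    does whatever the player's code would do in its position.
    """
    my_move = player_policy(player_history, opponent_history)
    mirror_move = player_policy(opponent_history, player_history)
    return my_move, mirror_move

# Example of a player-written policy: tit-for-tat.
def tit_for_tat(my_history, their_history):
    return True if not their_history else their_history[-1]

print(run_encounter(tit_for_tat, [], []))  # (True, True) on the first round
```

Here nothing is faked: the correlation between the two bots is real, because it’s literally the same decision procedure running twice.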
In this case and, tragically, in most cases, I don’t think doing the real thing in a video game (that people would play) is possible.
Common obstacles to making philosophical games stem from the fact that one can’t put a whole person inside a game. We don’t have human-level AI that we could put inside a video game (and even if we did, we’d be constrained by the fact that doing so is to some extent immoral, although you can make sure the experience component that corresponds to the NPC is very small, e.g., by making the bulk of the experience that of an actor, performing), and you also can’t rely on human players to roleplay correctly; we can’t temporarily override their beliefs and desires with those of their character, even when they wish we could.
So if we want to make games about people, we have to cheat.
Zachtronics games do part of that. In those games the player doesn’t do the tasks directly; instead, they need to program a bot (or other systems) to do the task. While mirroring the player is impossible, it should be possible to mirror the bots programmed by the player.