Two Issues with Playing Chicken with the Universe

If you are unfamiliar with the 5-and-10 problem, please refer to the Action Counterfactuals section of this post on Embedded Agency. Unfortunately, I am unable to recommend a resource for learning about the concept of "Playing Chicken with the Universe"; if you know of one, please suggest it in the comments below.

Consider a variant of the 5-and-10 problem where the agent visualises the number it is going to select immediately before making its decision, and never visualises any number at any other time. Further, imagine that we have access to the agent's brain scans and can use them to demonstrate that the agent will select the number 5. It is highly likely that we could similarly prove that the agent will imagine 5, without first proving that it will choose 5. We should also have enough information about the agent to show that, conditional on the agent choosing 10, it would first have imagined 10, and so would imagine both 5 and 10. This contradiction is a spurious counterfactual: it would let us prove that, conditional on the agent choosing 10, it receives whatever utility we want to prove. Playing chicken with the universe doesn't prevent this, as the agent never proves that it will or will not take a particular option, but instead proves facts about correlates.
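To make the shape of this argument explicit, here is a minimal formal sketch. The notation is mine, not standard: T is the theory available to the reasoner (the brain scans plus background facts), Im(n) says that the agent visualises the number n immediately before acting, A() is its choice, and U() its utility.

```latex
\begin{align*}
&(1)\quad T \vdash \mathrm{Im}(5)
  && \text{from the brain scans, without first proving } A() = 5 \\
&(2)\quad T \vdash A() = 10 \rightarrow \mathrm{Im}(10)
  && \text{the agent visualises whichever number it picks} \\
&(3)\quad T \vdash \lnot\bigl(\mathrm{Im}(5) \land \mathrm{Im}(10)\bigr)
  && \text{it only visualises one number, immediately before acting} \\
&(4)\quad T \vdash A() = 10 \rightarrow \bigl(\mathrm{Im}(5) \land \mathrm{Im}(10)\bigr)
  && \text{from (1) and (2)} \\
&(5)\quad T \vdash A() = 10 \rightarrow U() = u
  && \text{for any } u \text{, from (3) and (4) by explosion}
\end{align*}
```

Note that the derivation never has to write down A() ≠ 10 as a standalone theorem; the contradiction is routed entirely through the correlate Im, which is why the chicken rule has nothing to fire on.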

It is possible to demonstrate a similar issue by using perfect predictors instead of having the agent imagine its choice. Imagine we have an agent that chooses 5, and that we have access to the agent's brain scans plus the technical details of how the predictor works. In at least some scenarios, we should be able to use the brain scans and our knowledge of the predictor to prove that the predictor will predict the agent choosing 5, without first proving anything about what the agent will choose. Conditional on the agent choosing 10, we could show that it would be predicted to take 10, which again gives us a contradiction, since we would then expect the predictor to predict both 5 and 10. Once more, playing chicken with the universe doesn't seem to offer a resolution.
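The same sketch goes through with the predictor standing in for the imagined number; P() below is my (hypothetical) name for the predictor's output.

```latex
\begin{align*}
&(1)\quad T \vdash P() = 5
  && \text{from the brain scans plus the predictor's internals} \\
&(2)\quad T \vdash A() = 10 \rightarrow P() = 10
  && \text{the predictor is perfect} \\
&(3)\quad T \vdash \lnot\bigl(P() = 5 \land P() = 10\bigr)
  && \text{the predictor outputs a single prediction} \\
&(4)\quad T \vdash A() = 10 \rightarrow U() = u
  && \text{for any } u \text{, by the same explosion step as before}
\end{align*}
```

As before, nothing of the form A() ≠ 10 is ever asserted on its own; every proved fact is about the correlate P().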

Playing chicken with the universe is a hack. It attempts to resolve the paradox of imagining an agent that takes 5 and then conditioning on it taking 10 by sweeping the issue under the rug. Even though this patch works in some cases, it doesn't solve the underlying issue: we haven't really defined how counterfactuals should be constructed, only identified one contradiction to avoid. I recommend trying to solve the hard core of the problem instead (and that is what much of my research focuses on).