I do not understand—and I mean this respectfully—why anyone would care about Newcomblike problems or UDT or TDT, beyond mathematical interest. An Omega is physically impossible—and if I were ever to find myself in an apparently Newcomblike problem in real life, I’d obviously choose to take both boxes.
I don’t think it’s physically impossible for someone to predict my behavior in some situation with a high degree of accuracy.
If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This brings me out of reach of any real-world attempt at setting up the Newcomblike problem. It’s not the same as guaranteeing a win, but it undermines the premise.
Certainly, anybody trying to play pseudo-Omega against a random decider would start losing lots of money until they settled on always keeping box B empty.
And if it’s a repeated game where Omega explicitly guarantees it will try to keep its accuracy high, choosing only box B emerges as the right choice even under non-TDT theories.
It’s not a zero-sum game. Using randomness means pseudo-Omega will sometimes guess wrong, so he loses, but it doesn’t mean he’ll guess that you’ll one-box, so you don’t win either. There is no mixed Nash equilibrium; the only Nash equilibrium is to always one-box.
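To put numbers on this, here’s a minimal expected-value sketch (my own toy model, not anything from the thread: it assumes the predictor is right with some fixed probability p about deterministic choosers, and always leaves box B empty for anyone it detects randomizing):

```python
# Toy expected-value model for Newcomb's problem against an imperfect
# predictor. Assumptions (mine, for illustration): the predictor is
# correct with fixed probability p about deterministic choosers, and
# leaves box B empty whenever it detects randomization.

BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, filled only if one-boxing is predicted

def ev_one_box(p):
    # Box B is filled exactly when the predictor correctly foresees one-boxing.
    return p * BOX_B

def ev_two_box(p):
    # You always get box A; box B is filled only on a misprediction.
    return BOX_A + (1 - p) * BOX_B

def ev_coin_flip(q):
    # One-box with probability q; a detected randomizer faces an empty
    # box B, so randomizing only risks losing box A.
    return (1 - q) * BOX_A

for p in (0.5, 0.8, 0.99):
    print(p, round(ev_one_box(p)), round(ev_two_box(p)))
# 0.5  500000  501000
# 0.8  800000  201000
# 0.99 990000   11000

print("coin flip, q=0.5:", ev_coin_flip(0.5))  # 500.0, capped below box A
```

Under these assumptions one-boxing pulls ahead as soon as p > 0.5005, and the randomizer is capped at $1,000 no matter what, which matches the point above that pseudo-Omega soon learns to keep box B empty against random deciders.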
The idea that we live in a simulation is not a physical impossibility.
At the moment, some choices can be predicted up to 7 seconds in advance by reading brain signals.
Source?
How accurate is this prediction?
A quick Google search turns up http://exploringthemind.com/the-mind/brain-scans-can-reveal-your-decisions-7-seconds-before-you-decide as a source.
Even if we live in a simulation, I’ve never heard of anybody being presented with a Newcomblike problem.
Flip a coin less than 7 seconds before deciding.
Most people don’t flip coins. You can set the rule that flipping a coin counts as picking both boxes.
Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at “convince the brain scanner that you will pick one box”. Newcomblike problems reduce to this multi-stage game too.
Brain scanners are a technology that’s very straightforward to think about. Humans reading other humans is a lot more complicated. People have a hard time accepting that Eliezer won the AI box challenge. “Mind reading” and predicting other people’s choices is a task of similar difficulty to the AI box challenge.
Let’s take contact improvisation as an illustrative example. It’s a dance form without hard rules. If I’m dancing contact improvisation with a woman, she expects me to be in a state where I follow the situation and express my intuition. If I’m in that state and my arm happens to brush her breast, that’s no real problem. If, on the other hand, I make a conscious decision to touch her breast and act accordingly, I’m likely to creep her out.
There are plenty of people in the contact improvisation field whose awareness of other people is good enough to tell the difference.
Another case where decision frameworks matter is diplomacy. A diplomat gets told beforehand how he’s supposed to negotiate, and there might be instances where that information leaks.
I don’t think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks were impossible. Yet because the probability of a leak is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn’t.
Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.
It’s possible that the channel through which the diplomatic group internally communicates is completely compromised.
I believe the application was how a duplicable intelligence like an AI could reason effectively. (Hence TDT thinking in terms of all instances of you.)
Communication and pre-planning would be a superior coordination method.
This is assuming you know that you might be just one copy of many, at varying points in a timeline.
Do you think that someone could predict your behavior with maybe 80% accuracy? For example, whether you would one-box or two-box, based on what you wrote? And then confidently leave the $1M box empty because they know you’d two-box, and use that fact to win a bet? Seems very practical.
If I bet $1001 that I’d one-box, I’d have a natural incentive to do so.
However, if the boxes were already stocked and I gain nothing for proving pseudo-Omega wrong, then two-boxing is clearly superior. Otherwise I open one empty box, have nothing, yell at pseudo-Omega for being wrong, get a shrug in response, and go to bed regretting that I’d ever heard of TDT.
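Spelling out the arithmetic of the two cases above (the $1001 stake comes from the comment; the rest is my own illustration): once the boxes are stocked, box B holds the same amount whichever way you choose, so the comparison reduces to box A versus any forfeited bet.

```python
BOX_A = 1_000  # the transparent box you pick up by two-boxing

def two_box_advantage(forfeited_stake):
    # Net gain of two-boxing over one-boxing once box contents are fixed:
    # box B cancels out, leaving box A minus whatever the bet costs you.
    return BOX_A - forfeited_stake

print(two_box_advantage(1_001))  # -1: with the bet, one-boxing wins
print(two_box_advantage(0))      # +1000: with no bet, two-boxing wins
```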
So as several people said, Omega is probably more within the realm of possibility than you give it credit for, but MORE IMPORTANTLY, Omega is definitely possible for non-humans. As David_Gerard said, the point of this thought exercise is for AI, not for humans. For an AI written by humans, we can know all of its code and predict the answers it will give to certain questions. This means that the AI needs to deal with us as if we are an Omega that can predict the future. For the purposes of AI, you need decision theories that can deal with entities having arbitrarily strong models of each other, recursively. And TDT is one way of trying to do that.
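For deterministic programs this is almost trivially true, and a toy sketch may make it vivid (the agent functions here are hypothetical illustrations, and real agents would of course be harder to simulate): a predictor with access to the agent’s code can simply run it ahead of time and stock the boxes accordingly.

```python
# Toy "Omega" for deterministic programs: with access to the agent's
# code, prediction is just running the agent in advance.
# Both agents below are hypothetical illustrations.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play_newcomb(agent):
    prediction = agent()                  # Omega simulates the agent
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = agent()                      # the real decision, same code
    return box_b if choice == "one-box" else 1_000 + box_b

print(play_newcomb(one_boxer))   # 1000000
print(play_newcomb(two_boxer))   # 1000
```

Against such a predictor a deterministic two-boxer can never collect more than $1,000, which is why the problem stops looking exotic once the agent is a program rather than a person.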
In general, predicting what code does can be as hard as executing the code. But I know that’s been considered and I guess that gets into other areas.
Even if that’s the case, when dealing with AI we more easily have the option of simulation. You can run a program over and over again, and see how it reacts to different inputs.
I understood that people here mostly do care about them because of mathematical interest. It’s a part of the “how can we design an AGI” math problem.