I am again speaking from intuition only and don’t want to put more time into thinking about this for now. I may not even endorse what I say after five more minutes of thought.
when we assume non-telepaths we get FDT losing by amounts dependent on the degree of information asymmetry
This seems like a good thing
For CDT, lacking retro-causality, the agent will only be willing to pay up to the sum of his honesty value and signaling value (i.e. less than the $200 for Will). The FDT agent will be willing to pay up to however much he values the difference between the outcomes (live and pay vs. die and don’t pay).
This means CDT-Will will die if Derek has a different utility function and is only willing to drive them home for $201+? These are the “other” universes I’m talking about.
In an even more realistic scenario, Will should have a prior for the minimum amount Derek is willing to accept to drive them home. I expect this would let FDT-Will make somewhat better calculations.
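The payment logic in the scenario above can be sketched numerically. Only the $200 price comes from the discussion; the value-of-life and honesty/signaling figures below are illustrative assumptions:

```python
# Hedged sketch: Will must decide, once safely in town, whether to pay
# Derek's asking price. Derek predicts this decision and only drives
# Will home if he predicts payment.

VALUE_OF_LIFE = 1_000_000   # assumed utility Will assigns to surviving
HONESTY_VALUE = 150         # assumed value Will places on honesty/signaling

def cdt_pays(price: int) -> bool:
    # CDT, lacking retro-causality: once in town, survival is already
    # secured, so paying is justified only by honesty/signaling value.
    return HONESTY_VALUE >= price

def fdt_pays(price: int) -> bool:
    # FDT: the decision is evaluated over the whole scenario,
    # comparing (live and pay) against (die and don't pay).
    return VALUE_OF_LIFE - price >= 0

def outcome(pays_fn, price: int) -> str:
    # Derek drives Will home iff he predicts Will would pay.
    return "lives" if pays_fn(price) else "dies"

print(outcome(cdt_pays, 200))  # "dies": $200 exceeds his $150 honesty value
print(outcome(fdt_pays, 200))  # "lives": he'd pay anything under $1,000,000
```

Note that under these assumed numbers CDT-Will lives at any price up to $150, which is why the outcome hinges on Derek’s utility function.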
Take my example with the contracts: I don’t think that is actually a good outcome to be able to impose any contract on a disadvantaged party. Having the world of deals you can impose on someone you find at your mercy, so to speak, restricted by what is socially permissible and enforceable seems like a preferable state of affairs. Absent legal/social frameworks, having enforceability limited by agents’ values and their willingness to be beholden to deals seems preferable to having no such limits in place.
This means CDT-Will will die if Derek has a different utility function and is only willing to drive them home for $201+? These are the “other” universes I’m talking about.
Yes, if we assume Derek is a misanthrope, he will kill Will if Will is not willing to pay him some amount greater than his misanthropy. But I do not think that is a realistic state of affairs, and I think that on the flip side asymmetric information can cause FDT agents to behave suboptimally when presented with misanthropic actors.[1]
In an even more realistic scenario, Will should have a prior for the minimum amount Derek is willing to accept to drive them home.
In the real world, we are often price takers or price setters and rarely negotiate as equal parties. Will may, in my scenario, have a prior for what he thinks Derek would be willing to accept. What his prior is, however, is irrelevant: he is not offered that price and doesn’t get to make Derek a counteroffer. His only choices are “do I accept Derek’s offer?” and, once they get to town having accepted the offer, “do I honor the offer?” If he wouldn’t honor the offer, Derek wouldn’t pick him up, so he dies.
E.g., as the first example that comes to mind, let’s say your child has been kidnapped. The kidnapper just happened to capture your child by pure chance, not intentionally, but you have no way to know that. You think that paying off blackmailers makes it more likely you will be blackmailed. The blackmailer demands a payment (let’s say there is an escrow and they cannot cheat), but you, as an FDT agent, decline to negotiate. So the blackmailer kills your kid and disappears. A CDT agent pays the blackmailer, not considering the effect their decision may have on the odds of their being blackmailed again. Unlike the decent-driver case, which assumes a lack of information, this one requires a genuine mistake on the FDT agent’s part for them to end up truly worse off. Edit: you can get individual agents to be worse off under FDT in the standard blackmail dilemma, but for this case I am pre-assuming true randomness. FDT would pay if it recognized the blackmail as truly random, but it would still refuse to pay if it were acting under the (in this case mistaken) assumption that agents who don’t pay are extremely unlikely to be blackmailed.
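The blackmail example can be sketched as a policy-level expected-value comparison. All probabilities and dollar figures below are illustrative assumptions, not from the original example:

```python
# Hedged sketch: compare the *policy* "refuse blackmail" vs "pay blackmail"
# from the FDT agent's perspective, evaluated before knowing whether
# you will be targeted.

RANSOM = 10_000
CHILD_VALUE = 10_000_000  # assumed utility loss if the child is killed

def policy_ev(p_blackmailed: float, loss_when_blackmailed: float) -> float:
    # Expected value of committing to a policy.
    return -p_blackmailed * loss_when_blackmailed

# FDT agent's (here, mistaken) model: committed refusers are almost
# never targeted, while known payers are targeted often.
ev_refuse_believed = policy_ev(0.0001, CHILD_VALUE)  # -1,000
ev_pay_believed    = policy_ev(0.9, RANSOM)          # -9,000
assert ev_refuse_believed > ev_pay_believed  # so the FDT agent refuses

# Actual world: this kidnapping was pure chance, so refusing buys no
# deterrence -- the agent is targeted either way and loses the child.
ev_refuse_actual = policy_ev(0.001, CHILD_VALUE)  # -10,000
ev_pay_actual    = policy_ev(0.001, RANSOM)       # -10
assert ev_pay_actual > ev_refuse_actual  # paying was the better policy
```

The gap between the believed and actual probabilities is exactly the “true mistake” the footnote describes.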
I think you are intuiting the question of “which DT is better” too heavily from the real world, in a sort of “I think a world where people all do this is better” → “this DT is better” way. You can’t just hope things work out this way.
This seems like a good thing
I don’t think that is actually a good outcome to be able to impose any contract on a disadvantaged party
Yes, that’s why you use laws / precommitments to prevent it. I guess my use of “good” misled you a bit; I think it is game-theoretically good, not morally ideal.
But I do not think that is a realistic state of affairs, and I think that on the flip side asymmetric information can cause FDT agents to behave suboptimally when presented with misanthropic actors.
As I said, this is very close to the no-free-lunch theorem, where any DT benefits you in some universes and hurts you in others. I fully expect that for any pair A/B you can construct a situation, including ones with a hostile telepath, where DT A outperforms DT B.
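A toy construction of that claim, with placeholder decision procedures (everything here is an illustrative assumption): an environment that can read which procedure an agent runs (a “hostile telepath”) can always punish one procedure and reward the other, regardless of the action actually chosen.

```python
# Hedged sketch: for any two decision procedures A and B, a telepathic
# environment can pay out based on *which procedure the agent runs*,
# so A loses and B wins by construction (or vice versa).

def make_hostile_env(disfavored):
    # Returns an environment that punishes exactly the disfavored
    # decision procedure, ignoring the action itself.
    def env(agent_procedure, action):
        return -100 if agent_procedure is disfavored else 100
    return env

# Stand-in "decision theories"; their internals don't matter to a
# telepathic environment, only their identity does.
def cdt(options):
    return options[0]

def fdt(options):
    return options[-1]

env = make_hostile_env(fdt)  # this universe is hostile to FDT agents
print(env(cdt, cdt(["pay", "refuse"])))  # 100
print(env(fdt, fdt(["pay", "refuse"])))  # -100
```

Swapping the argument to `make_hostile_env` constructs the mirror-image universe where CDT loses instead, which is the symmetry the no-free-lunch comparison relies on.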
What his prior is, however, is irrelevant: he is not offered that price and doesn’t get to make Derek a counteroffer.
We are assuming Derek knows everything about Will, right? So if Will changes his strategy based on his prior, then Derek knows that too.
I think you are intuiting the question of “which DT is better” too heavily from the real world, in a sort of “I think a world where people all do this is better” → “this DT is better” way. You can’t just hope things work out this way.
Mostly fair; as I think you said elsewhere, I misunderstood you as making a value claim when you meant better in some other terms.
But one of the main reasons Yud and Soares give for preferring FDT over CDT is a belief that FDT leads to better outcomes. That is what I find unconvincing. It seems to me that under more realistic assumptions CDT better models observations (e.g. Braess’s paradox, to use an example I gave elsewhere) and can lead to better outcomes. That was my central thesis. I do agree that it is usually trivial to conceive of scenarios where any given theory loses to another in some sense.
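For readers unfamiliar with Braess’s paradox, here is the standard textbook network (the numbers are the usual textbook ones, not anything from this discussion), showing how adding a free shortcut worsens the selfish-routing equilibrium:

```python
# Hedged sketch of Braess's paradox: 4000 drivers commute from S to T.
# Each route has one congestion-sensitive road (time = drivers/100)
# and one wide fixed-time road (45 minutes).

N = 4000  # total drivers

def congested(x: int) -> float:
    # Travel time of a congestion-sensitive road with x drivers on it.
    return x / 100

FIXED = 45.0  # travel time of the congestion-insensitive road

# Without the shortcut, the equilibrium splits drivers evenly across
# the two routes (top: congested then fixed; bottom: fixed then congested).
time_without = congested(N // 2) + FIXED  # 20 + 45 = 65 minutes

# Add a zero-cost shortcut joining the two congested roads. Taking
# congested -> shortcut -> congested now dominates for each individual
# driver, so in equilibrium everyone does it -- and everyone is slower.
time_with = congested(N) + 0 + congested(N)  # 40 + 40 = 80 minutes

print(time_without, time_with)  # 65.0 80.0
```

Each driver is individually rational at every step, yet the new equilibrium is worse for all of them, which is why the paradox is a useful test case for decision theories that reason only causally about one’s own move.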
Yes, that’s why you use laws / precommitments to prevent it
Yes, but I would argue it is good to have mediating forces outside of laws. Derek can get either of them to sign a contract beforehand for $1,000,199, but only FDT would say that they should honor that contract absent any mechanism to enforce it. While I don’t think it can be proven, it seems sensible that before considering enforcement mechanisms we should decide whether to honor contracts based on how much we value honesty, the associated signals, and other such considerations. It seems less sensible to say we should honor them based solely on value estimates of the entire scenario they fall under. It also seems sensible, if we include enforcement mechanisms, that such mechanisms be set up to hold people to contracts that are generally deemed not unreasonable, and to prevent unconscionable conditions from being imposed even on agents that rationally consented to them (as would be the case with agents consenting to a $1,000,200 contract).
We are assuming Derek knows everything about Will right? So if Will changes his strategy based on his prior then Will knows that too.
You mean Derek knows it, right? But it doesn’t change Will’s value calculation, so it shouldn’t change his strategy a priori even if he had a prior for what he thinks Derek would accept. He would change his decision if we assumed he knew how Derek was likely to set prices and adapted his strategy to that, though.