Toby, my actual stance on the core issue is that it is a Newcomblike problem. You observe the seal of the confessional for the same reason that you one-box on Newcomb’s Problem, cooperate in the one-shot Prisoner’s Dilemma, or keep your word as Parfit’s Hitchhiker: namely, to win.
And if we were talking about superintelligences dealing with other superintelligences, this would be the whole of the law.
It’s not easy to transport Newcomblike problems to humans—who cannot make rigorous inferences about each other’s probable initial conditions, cannot make rigorous deductions about decisions given those conditions, and can only guess at the degree of similarity of their decision processes.
But it’s by no means obvious that a human should two-box on Newcomb’s Problem—it seems that people’s choices on Newcomb’s Problem do correlate with other facets of their personality, which means that one-boxers against a human Omega might still do better on average. It’s by no means clear that humans should go around defecting in the Prisoner’s Dilemma, because for us, such situations are often iterated. Our PDs are rarely True PDs where you really don’t care at all about the other person. And it’s by no means clear that humans should believe themselves obligated to break their word to Parfit’s Hitchhiker, because we are not perfect liars.
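To make the "fallible human Omega" point concrete, here is a minimal sketch of the expected-value arithmetic, assuming the standard payoffs (a fixed $1,000 in the transparent box, $1,000,000 in the opaque box iff the predictor expected one-boxing) and a hypothetical predictor accuracy p. Note that this calculation takes the correlation between your disposition and the prediction at face value, which is exactly what is at issue between one-boxers and two-boxers.

```python
# Expected payoffs in Newcomb's Problem for a predictor of accuracy p.
# Assumptions (standard formulation, not from the text above):
#   - the transparent box always holds $1,000;
#   - the opaque box holds $1,000,000 iff the predictor expected one-boxing;
#   - the predictor is right with probability p, whatever you choose.

def one_box_ev(p: float) -> float:
    # Take only the opaque box; it is full iff the predictor
    # correctly foresaw one-boxing (probability p).
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # Take both boxes; the opaque box is full only when the
    # predictor was wrong about you (probability 1 - p).
    return (1 - p) * 1_000_000 + 1_000

if __name__ == "__main__":
    for p in (0.50, 0.51, 0.60, 0.90, 0.99):
        print(f"p={p:.2f}  one-box: ${one_box_ev(p):>12,.0f}"
              f"  two-box: ${two_box_ev(p):>12,.0f}")
    # One-boxing wins whenever p * 1e6 > (1 - p) * 1e6 + 1e3,
    # i.e. whenever p > 0.5005.
```

On these assumed numbers the break-even accuracy is just over 50%, and it scales with the ratio of the two prizes; with a $1,000,000-to-$1,000 spread, almost any real predictive power a human Omega has is enough for one-boxers to come out ahead on average.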
If that lacks the crispness of, for example, the rule that you should not adopt mysterious answers to mysterious questions—well, not every question that I consider has a nice, crisp, massively supported answer. Some of them do. Those are nice. And I prefer writing about questions that are clear to me over areas where the borders are fuzzy. But I felt that I had to write about ethics anyway—all things considered.