I agree with what you’re saying here: if my goal were to survive, I would pick the Pope. Though I’m not sure how much I’d want to live in a world based on the Pope’s EV. Also, I think the whole point is moot, because the FAI programmers don’t have to pick a Schelling point. They can pick universal extrapolation, or a random sample, or call for volunteers, or call for volunteers with some screening test to weed out sociopaths.
I think we can agree on what I said in the grandparent: the Pope would be the biggest one-person Schelling point, and he’s not a good choice for the initial dynamic.
Actually, it might not be that bad. The "theology of the body" doctrine they have going might mess up the transhumanist aspirations I have, and that would suck. But otherwise I’d expect a world mostly free of disease and poverty, with much longer (though perhaps finite) lifespans, where Western traditional values are given a boost.
That’s pretty close to utopia compared with most other outcomes. It would certainly be a more pleasant place to live than Robin Hanson’s em world.
the Pope would be the biggest one-person Schelling point
Yes, exactly.
and he’s not a good choice for the initial dynamic
And therefore, choosing a Schelling point for morality as the base of CEV is probably not as good an idea as it may seem. Unless one believes that ten-person or hundred-person Schelling points for morality would bring dramatically different results.
(And this is basically what I was trying to express in the comment that got so many negative points. The Pope could be a Schelling point, the Dalai Lama could be a Schelling point… Eliezer Yudkowsky would be a Schelling point inside the LW community, but not outside it.)