whatever line of reasoning you are using to object to some imagined CEV scenario, because that line of reasoning is contained within you, CEV will by its very nature also take into account that line of reasoning
This assumes that CEV actually works as intended (and the intention was the right one), which would be exactly the question under discussion (hopefully), so in that context you aren’t allowed to make that assumption.
The adequate response is not that it’s “correct by definition” (because it isn’t; it’s a constructed artifact that could well be a wrong thing to construct), but an (abstract) explanation of why it will still make that correct decision under the given circumstances: an explanation of why exactly it’s true that CEV will also take into account that line of reasoning, and of why you believe that it is in its nature to do so, for example. And it isn’t that simple: say it won’t take into account that line of reasoning if it’s wrong, but then it’s again not clear how it decides what’s wrong.
This assumes that CEV actually works as intended (and the intention was the right one), which would be exactly the question under discussion (hopefully), so in that context you aren’t allowed to make that assumption.
Right, I am talking about the scenario not covered by your “(hopefully)” clause where people accept for the sake of argument that CEV would work as intended/written but still imagine failure modes. Or subtler cases where you think up something horrible that CEV might do but don’t use your sense of horribleness as evidence against CEV actually doing it (e.g. Rokogate). It seems to me you are talking about people who are afraid CEV wouldn’t be implemented correctly, which is a different group of people that includes basically everyone, no? (I should probably note again that I do not think of CEV as something you’d work on implementing so much as a piece of philosophy and public relations that you should take into account when thinking up FAI research plans. I am definitely not going around saying “CEV is right by definition!”...)