Here is the post that you linked to, in which you ostensibly prove that an excerpt of my essay was “blatantly false”:
Phlebas:
In other words, the CEV initial dynamic shouldn’t be regarded as discovering what a group of people most desire collectively “by definition”—it is imperfect. If a universal CEV implementation is more difficult for human programmers to do well than a selective CEV, then a selective CEV might not only extrapolate the desires of the group in question more accurately, but also do a better job of reflecting the most effectively extrapolated desires of humanity as a whole.
wedrifid:
I am wary of using arguments along the lines of “CEV is better for everyone than CEV”. If calculating based on a subset happens to be the most practical instrumentally useful hack for implementing CEV then an even remotely competent AI can figure that out itself.
Note that I have made no particular claim about how likely it is that the selective CEV will be closer to the ideal CEV of humanity than the universal CEV. I merely claimed that it is not what they most desire collectively “by definition”, i.e. it is not logically necessary that the universal CEV approximates the ideal human-wide CEV (such as a superintelligence might develop) better than the selective CEV does.
[Here] is a comment claiming that CEV most accurately identifies a group’s average desires “by definition” (assuming he doesn’t edit it). So it is not a strawman position that I am criticising in that excerpt.
You argue that even given a suboptimal initial dynamic, the superintelligent AI “can” figure out a better dynamic and implement that instead. Well of course it “can” – nowhere have I denied that the universal CEV might (with strong likelihood, in fact) ultimately produce at least as close an approximation to the ideal CEV of humanity as a selective CEV would.
Nonetheless, high probability =/= logical necessity. Therefore you may wish to revisit your accusation of blatant fallacy. If you are going to use insults, please back them up with a detailed, watertight argument.
Exactly how probable it is makes for an interesting question, but I shan’t discuss that in this comment, since I don’t wish to muddy the waters regarding the nature of the original statement that you were criticising.