I don’t think your arguments support your conclusion. The zombies section mostly shows that Eliezer is not good at telling what his interlocutors are trying to communicate, and the animal consciousness bit shows that he’s overconfident, but you haven’t shown animals are conscious, so it doesn’t show he’s frequently, confidently, egregiously wrong. Your arguments against FDT seem lacking to me, and I’d tentatively say Eliezer is right about that stuff. Or at least, FDT is closer to the best decision theory than CDT or EDT.
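To make that last claim concrete, here’s a minimal sketch of the standard Newcomb’s-problem arithmetic (my own illustration, not from the post; the payoffs and the 99% predictor accuracy are the usual illustrative assumptions). It shows the sense in which the one-boxing policy FDT recommends outperforms the two-boxing policy CDT recommends, against a reliable predictor:

```python
# A minimal sketch, assuming the standard Newcomb payoffs ($1,000 in the
# transparent box, $1,000,000 in the opaque box) and an illustrative 99%
# predictor accuracy.

def newcomb_expected_payoff(one_boxes: bool, accuracy: float = 0.99) -> float:
    """Expected dollars for a fixed policy against a predictor of given accuracy."""
    # The opaque box contains $1,000,000 iff the predictor predicted one-boxing.
    p_predicted_one_box = accuracy if one_boxes else 1 - accuracy
    opaque = 1_000_000 * p_predicted_one_box
    transparent = 0 if one_boxes else 1_000
    return transparent + opaque

print("one-box (FDT-style):", newcomb_expected_payoff(True))   # ~990,000
print("two-box (CDT-style):", newcomb_expected_payoff(False))  # ~11,000
```

This doesn’t settle the philosophical dispute (the CDT-er will insist the money is already in the boxes), but it does show why FDT-style agents end up richer on this class of problems.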
I think Eliezer is often wrong, and often overconfident. It would be interesting to see someone try to compile a good-faith track record of his predictions, perhaps separated by subject domain.
This seems like one of a line of similar posts I’ve seen recently (you’ve linked to many of them in your own), which compile a list of bad things Eliezer thinks and has said that the poster considers really terrible, but which seem benign to me. This is my theory of why they are all low-quality, and yet still posted:
Many have an inflated opinion of Eliezer, and when they realize he’s just as epistemically mortal as the rest of us, they feel betrayed and over-update towards thinking he’s less epistemically impressive than he actually is. Some of those people then compile lists of grievances they have against him, post them on LessWrong, and claim this shows Eliezer is confidently, egregiously wrong most of the time he talks about anything. In fact, it just shows that the OP has different opinions in some domains than Eliezer does, or that Eliezer’s track record is not spotless, or that Eliezer is overconfident. All of these are claims that I, and other cynics & the already disillusioned, already knew or could have strongly inferred.
Eliezer is actually pretty impressive, both in his accomplishments in epistemic rationality and, especially, in instrumental rationality. But pretty impressive does not mean godlike or perfect. Eliezer does not provide ground-truth information, and often thinking for yourself about his claims will lead you away from his position, not towards it. Maybe this is something he should have stressed more in his Sequences.
I don’t find Eliezer that impressive, for reasons laid out in the article. I argued for animal sentience extensively in the article. Though the main point of the article wasn’t to establish nonphysicalism or animal consciousness, but rather that Eliezer is very irrational on those subjects.
I don’t know if Eliezer is irrational about animal consciousness. There are a bunch of reasons you can still be deeply skeptical of animal consciousness even if animals have nociceptors (RL agents have nociceptors! They aren’t conscious! See the toy sketch below.), or even if integrated information theory & global workspace theory probably say animals are ‘conscious’. For example, maybe you think consciousness is a verbal phenomenon, having to do with the ability to construct novel recursive grammars. Or maybe you think it’s something to do with the human capacity to self-reflect, maybe defined as making new mental or physical tools via methods other than brute force or local search.
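To illustrate that parenthetical, here’s a toy sketch (my own, with a made-up one-dimensional environment; nothing here is from the post) of an RL agent with a nociceptor analogue, i.e. a damage signal that reshapes behavior via ordinary learning, with no plausible claim to consciousness:

```python
# A toy sketch, assuming a made-up 1-D gridworld (positions 0..4) with a
# "hazard" at position 0. The damage_signal function is the nociceptor
# analogue: a scalar that goes negative on tissue-damage-like events.
import random

def damage_signal(position: int) -> float:
    """Nociceptor analogue: fires (negative reward) on contact with the hazard."""
    return -10.0 if position == 0 else 0.0

q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}  # tabular Q-values
pos, lr, gamma, eps = 2, 0.5, 0.9, 0.1
for _ in range(2000):
    # epsilon-greedy choice between moving left (-1) and right (+1)
    if random.random() < eps:
        a = random.choice((-1, 1))
    else:
        a = max((-1, 1), key=lambda act: q[(pos, act)])
    nxt = min(4, max(0, pos + a))
    r = damage_signal(nxt)
    # standard Q-learning update
    q[(pos, a)] += lr * (r + gamma * max(q[(nxt, b)] for b in (-1, 1)) - q[(pos, a)])
    pos = nxt

# The learned values steer the agent away from the hazard:
print(q[(1, -1)], q[(1, 1)])  # moving toward the hazard scores much worse
```

The agent ends up avoiding the “painful” state, so having a nociceptor-plus-avoidance loop clearly isn’t sufficient for consciousness; the question is whether animals have something more.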
I don’t think you can show he’s irrational here, because he hasn’t made any arguments whose rationality or irrationality could be assessed. You can maybe say he should be less confident in his claims, or criticize him for not providing his arguments. The former is well known; the latter is less useful to me.
I find Eliezer impressive, because he founded the rationality community, which IMO is the social movement with by far the best impact-to-community-health ratio ever & has been highly influential on other social movements with similar ratios; knew AI would be a big & dangerous deal before virtually anyone; worked on & popularized that idea; and wrote two books (one nonfiction, and the other fanfiction) which changed many people’s lives & society for the better. This is impressive no matter how you slice it. His effect on the world will clearly be felt for a long time to come, if we don’t all die (possibly because we don’t all die, if alignment goes well and turns out to have been a serious worry, which I am prior to believe). And that effect will be positive almost for sure.
What does this mean?