Yudkowsky couldn’t be bothered to either live up to his own stated standards
“his own stated standards” could use a link/citation.
regardless of the initial intent, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue.
The original Kolmogorov complicity was an instance of lying to protect one’s intellectual endeavors. But here you/Ben seem to be accusing Eliezer of doing something much worse, which seems like a big leap from what came before it in the post. How did you/Ben rule out the Kolmogorov complicity hypothesis (i.e., that Eliezer still had genuine intellectual or altruistic aims that he wanted to protect)?
Of what you wrote specifically, “no concrete specific plan” is in my view actually a point in Eliezer’s favor, as it’s a natural consequence of high alignment difficulty and intellectual honesty. “Run out the clock” hardly seems fair, and by “maintaining a facsimile of dialogue” what are you referring to? Are you including things like the 2021 MIRI Conversations, and if so, are you suggesting that all the other (non-MIRI) participants are being fooled or are in on the scam?
But since I did spend my entire adult life in Yudkowsky’s robot cult, trusting him the way a Catholic trusts the Pope
I would be interested to read an account of how this happened, and what might have prevented the error.
which seems like a big leap from what came before it in the post
Sorry, the fifth- to second-to-last paragraphs of the originally published version of this post were egregiously terrible writing on my part. (I was summarizing some things Ben said at the time that felt like a relevant part of the story, but what I actually needed to do was explain in my own words the points that I actually endorse and want to convey to my readers.)
I’ve rewritten that passage (now the third- and second-to-last paragraphs). I hope this version is clearer.
I’m not conjecturing anything worse than Kolmogorov complicity. (And the 2021 MIRI conversations were great.) I do think political censorship is significantly more damaging to epistemic conditions than many others seem to think. People playing a Kolmogorov complicity strategy typically seem to think that it’s cheap to just avoid a few sensitive topics. But the disturbing thing about the events described in this post was that the distortion didn’t stay confined to sensitive topics: the reversal (in emphasis and practice, if not outright logical contradiction) from “words can be wrong” to “you’re not standing in defense of truth [...]” is about the cognitive function of categorization, a “dry” philosophy topic which you wouldn’t expect to be politically sensitive!
I would be interested to read an account of how this happened
He encourages it, doesn’t he? (Much more to say in a future post, or via PM.)