Isn’t it, though?
Indeed, I notice in your list above you suspiciously do not list the most common kind of attribute that is attributed to someone facing social punishment. “X is bad” or “X sucks” or “X is evil”.
I’m inclined to still count this under “judgments supervene on facts and values.” Why is X bad, sucky, evil? These things can’t be ontologically basic. Perhaps less articulate members of a mass punishment coalition might not have an answer (“He just is; what do you mean ‘why’? You’re not an X supporter, are you?”), but somewhere along the chain of command, I expect their masters to offer some sort of justification with some sort of relationship to checkable facts in the real world: “stupid, dishonest, cruel, ugly, &c.” being the examples I used in the post; we could keep adding to the list with “fascist, crazy, cowardly, disloyal, &c.” but I think you get the idea.
The justification might not be true; as I said in the post, people have an incentive to lie. But the idea that "bad, sucks, evil" are just threats within a social-capital system, without any even pretextual meaning outside the system, flies in the face of the everyday experience that people demand pretexts.
Can’t you just say that yourself (not all, caricature, parody, uncharitable, exaggerates, &c.) when sharing it? Death of the author, right?
or that they will be robust to strong optimization at the time when AIs are capable of taking over. I think that’s probably wrong, because (1) LLMs have many more degrees of freedom in their internal representations than e.g. Inception, so the resulting optimized outputs are going to look even stranger
There has been some progress in robust ML since the days of DeepDream (2015).
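(For concreteness, the kind of "strong optimization" at issue in the parent is DeepDream-style gradient ascent on an internal representation. Here's a minimal sketch, assuming PyTorch/torchvision, with an arbitrarily chosen GoogLeNet layer standing in for whatever representation the optimizer targets; illustrative only, not a claim about any particular robustness technique:)

```python
import torch
import torchvision.models as models

# Load a pretrained GoogLeNet (the torchvision descendant of Inception v1).
model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one arbitrarily chosen internal layer.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

# Start from noise and gradient-ascend the image to excite that layer.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    model(image)                          # forward pass fills activations["target"]
    loss = -activations["target"].norm()  # negative, because we're maximizing
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)                # keep pixel values in a valid range
```

Nothing in that objective asks the result to look natural, which is why naive feature-level optimization yields alien-looking outputs, and why more internal degrees of freedom plausibly means stranger optima.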
I feel like Thomas was trying to contribute to this conversation by making an intellectually substantive on-topic remark and then you kind of trampled over that with vacuous content-free tone-policing.
I could be described as supporting the book in the sense that I preordered it, and I bought two extra copies to give as gifts. I'm planning to give one of them tomorrow to an LLM-obsessed mathematics professor at San Francisco State University. But the reason I'm giving him the book is that I want him to read it and think carefully about the arguments on the merits: I think that mitigating the risk of extinction from AI should be a global priority. It's about the issue, not about supporting MIRI or any particular book.
Thank you for clarifying.
I think this was fairly obvious
No, it was not obvious!
You replied to a comment that said, verbatim, “what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff”, with, “This paragraph feels righter-to-me”.
That response does prompt the reader to wonder whether you believe the quoted statement by Malcolm McLeod, which was a prominent thesis sentence of the comment that you were endorsing as feeling righter-to-you! I understand that “This feels righter-to-me” does not mean the same thing as “This is right.” That’s why I asked you to clarify!
In your clarification, you have now disavowed the quoted statement with your own statement that “We absolutely should have more practices that drive at the precise truth than saying true stuff and contradicting false stuff.”
I emphatically agree with your statement for the reasons I explained at length in such posts as “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists”, but I don’t think the matter is “fairly obvious.” If it were, I wouldn’t have had to write thousands of words about it.
This is important enough that you should clarify in your own words. Raymond Arnold, as a moderator of lesswrong.com, is it in fact your position that “what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff”?
One is “We must never abandon this relentless commitment to precise truth. All we say, whether to each other or to the outside world, must be thoroughly vetted for its precise truthfulness.” To which my reply is: how’s that been working out for us so far?
[...]
We can win without sacrificing style and integrity.
But you just did propose sacrificing our integrity: specifically, the integrity of our relentless commitment to precise truth. It was two paragraphs ago. The text is right there. We can see it. Do you expect us not to notice?
To be clear, in this comment, I’m not even arguing that you’re wrong. Given the situation, maybe sacrificing the integrity of our relentless commitment to precise truth is exactly what’s needed!
But you can’t seriously expect people not to notice, right? You are including the costs of people noticing as part of your consequentialist decision calculus, right?
How do you think norm enforcement works, other than by threatening people who don’t comply with the norm?
Like feeling the rain on your skin, no one else can feel it for you.
This is a deliberate reference to the lyrics of Natasha Bedingfield's thematically relevant song "Unwritten", right? (Seems much more likely than coincidence or cryptomnesia.) I can empathize with it feeling too cute not to use, but it seems like a bad (self-undermining) choice in the context of an essay about the importance of struggling to find original words?
(Fixed; thanks for your patience.)
Followup question: you thought criticism was useful in April 2023. What changed your mind?
your attempts at posting good-faith critiques in the comments of most LW posts are costlier to [...] the community you care about, than they are beneficial.
Why? What are the costs to the community?
Thanks for commenting!
Describing them as posts versus comments probably isn’t ideal, but I think it’s mostly okay.
Yes, in retrospect, I wish I had done a better job of flagging the metonymy. I’m glad the idea got through despite that.
I claim that yes, these two different types of writing are significantly different activities.
Different in what respect? When I write a critical post (arguing that author A is wrong about X because Y), it feels like relevantly the same activity as when I write a "non-critical" post (just arguing that X because Y without reference to any reputedly mistaken prior work) in terms of what cognitive skills I'm using: the substance is about working out how Y implies X. That's the aspect relevant to the playing/coaching metaphor. Whether there happens to be an A in the picture doesn't seem to change the essential character of the work. (Right? Does your subjective assessment differ?)
The effect of rendering these bytes as text preceded by my username does not need to be the same as the effect of rendering these bytes as text preceded by another username!
It doesn’t need to, but should it? The section titled “However, Critic Contributions Can Inform Uncertain Estimates of Comment Value” describes one reason why it should. My bold philosophical claim is that that’s the only reason. (I’m counting gjm’s comment about known expertise as relevantly “the same reason.”)
Alternatively, for the purpose of the argument in that section, we can instead imagine that we’re talking about a blog where the commenting form has a blank “Author name” field, rather than a site with passworded accounts: the name could be forged just as easily as the comment content, and the “comment” is the (author-name, content) pair. That would restore the screening-off property.
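(A toy sketch of the screening-off claim, with hypothetical names and an obviously made-up scoring rule: if the estimate is computed from the (author-name, content) pair and nothing else, then facts about who actually typed the comment can't add information once you've conditioned on the pair:)

```python
from typing import NamedTuple

class Comment(NamedTuple):
    """A comment on the hypothetical blog with a blank "Author name" field."""
    author_name: str  # forgeable free text, exactly like the content
    content: str

def estimated_value(comment: Comment) -> float:
    # Made-up scoring rule for illustration: a crude stand-in for content
    # quality, plus a bump for claimed expertise (gjm's point). Because the
    # function reads only the (author_name, content) pair, the pair screens
    # off everything else about the commenter.
    score = min(len(comment.content) / 500, 1.0)
    if comment.author_name == "known_domain_expert":  # hypothetical name
        score += 0.1
    return score

print(estimated_value(Comment("anonymous", "Here is my argument ...")))
```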
When discussing rationality, I typically use the word normative to refer to what idealized Bayesian reasoners would do, often in contrast to what humans do.
(Example usage, bolding added: “Normatively, theories are preferred to the quantitative extent that they are simple and predict the observed data [...] For contingent evolutionary-psychological reasons, humans are innately biased to prefer ‘their own’ ideas, and in that context, a ‘principle of charity’ can be useful as a corrective heuristic—but the corrective heuristic only works by colliding the non-normative bias with a fairness instinct [...]”)
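(For concreteness, the quantitative preference in that quote is standard Bayesian updating with a simplicity prior; a textbook formulation, offered as an illustration rather than language from the original post:)

```latex
% Posterior over hypotheses: weight fit to the data by a simplicity prior,
% e.g., a Solomonoff-style penalty on description length.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{\sum_{H'} P(D \mid H')\, P(H')},
\qquad P(H) \propto 2^{-\mathrm{length}(H)}
```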
As Schopenhauer observes, the entire concept of adversarial debate is non-normative!
“[N]ot demand[ing] [...] that a compelling argument be immediately accepted” is normatively correct insofar as even pretty idealized Bayesian reasoners would face computational constraints, but a “stubborn defense of one’s starting position—combined with a willingness [...] to change one’s mind later” isn’t normatively correct, because the stubbornness part comes from humans’ innate vanity rather than serving any functional purpose. You could just say, “Let me think about that and get back to you later.”
And yet here you demand I immediately change my mind in response to reason and evidence.
I think this is an improperly narrow interpretation of the word now in the grandparent’s “I’ll take that retraction and apology now.” A retraction and apology in a few days after you’ve taken some time to cool down and reflect would be entirely in line with Schopenhauer’s advice. I await the possibility with cautious optimism.
Zack Davis describes that position as “laughable, obviously wrong, and deeply corrosive”
I mean, I do think that (recall that I actually did the experiment with an LLM to demonstrate), but do you understand the rhetorical device I was invoking by using those exact words in the comment in question?
You had just disparagingly characterized Achmiz as “describing [interlocutors’] positions as laughable, obviously wrong, deeply corrosive, etc”. I was deliberately “biting the bullet” by choosing to express my literal disagreement with your hyperbolic insult using those same words verbatim, in order to stick up for the right to express disagreement using strong language when appropriate.
Just checking that you “got the joke.”
“normatively correct”. You guys
Please note that I had put a Disagree react on the phrase “normatively correct” on the comment in question. (The react was subsequently upvoted by Drake Morrison and Habryka.)
My actual position is subtler: I think Schopenhauer is correct to point out that it’s possible to concede an argument too early and that good outcomes often result from being obstinate in the heat of an argument and then reflecting at leisure later, but I think describing the obstinacy behavior as “normatively correct” is taking it way too far; that’s not what the word normative means.
Thank you for answering my question.
Allowing lots of top-level posts
As it happens, I was planning (in due time) to write my own top-level reaction post to your post of 22 August. I had assumed this would be allowed, as I have written well-received top-level reaction posts to other Less Wrong posts many times before: for example, “Relevance Norms” (which you evidently found valuable enough to cite in your post of 22 August) or “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” (which was Curated).
Will I be permitted to post?
will inevitably then cause me to have to spend another 100+ hours on this
I don’t think “have to” is warranted. You don’t have to reply if you don’t want to. But other people have a legitimate interest in publicly discussing your public statements among themselves, independently of whether you think it’s worth your time to reply.
Did you read the book? Chapter 4, “You Don’t Get What You Train For”, is all about this. I also see reasons to be skeptical, but have you really “not seen MIRI arguing that it’s overwhelmingly likely to be false”?