Thank you for clarifying.
You write that you think this was “fairly obvious.” No, it was not obvious!
You replied “This paragraph feels righter-to-me” to a comment that said, verbatim, “what we should indeed sacrifice is our commitment to being anal-retentive about practices that we think associate with getting the precise truth, over and beyond saying true stuff and contradicting false stuff”.
That response does prompt the reader to wonder whether you believe the quoted statement by Malcolm McLeod, which was a prominent thesis sentence of the comment that you were endorsing as feeling righter-to-you! I understand that “This feels righter-to-me” does not mean the same thing as “This is right.” That’s why I asked you to clarify!
In your clarification, you have now disavowed the quoted statement, writing that “We absolutely should have more practices that drive at the precise truth than saying true stuff and contradicting false stuff.”
I emphatically agree with your statement for the reasons I explained at length in such posts as “Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think” and “Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists”, but I don’t think the matter is “fairly obvious.” If it were, I wouldn’t have had to write thousands of words about it.
I could be described as supporting the book in the sense that I preordered it and bought two extra copies to give as gifts. I’m planning to give one of them tomorrow to an LLM-obsessed mathematics professor at San Francisco State University. But I’m giving him the book because I want him to read it and think carefully about the arguments on the merits: I think that mitigating the risk of extinction from AI should be a global priority. It’s about the issue, not about supporting MIRI or any particular book.