As for the Understanding Shoulds section, that’s another example of the document being tailor-made for a specific target audience; most people are indeed “taking far too seriously” their “utterly useless shoulds,” but the CFAR workshop audience was largely one pendulum swing ahead of that state, and in need of the next round of iterated advice.
Emailing CFAR is the best way to find out; previously the question wasn’t considered in depth because “well, we’re not selling it, and we’re also not sharing it.” Now, the state is “well, they’re not selling it, but they are sharing it,” so it’s unclear.
(Things like the XKCD comic being uncited came about because, in context, something like 95% of participants recognized XKCD immediately, and the other 5% were told in person when lecturers said stuff like “If you’ll look at the XKCD comic on page whatever...” In other words, it was treated much more like an internal handout shared among a narrowly selected, high-context group than as a product that needed to dot all of the i’s and cross all of the t’s. I agree that Randall Munroe deserves credit for his work, and that future edits would likely correct things like that.)
Emailing people at CFAR directly is the best way to find out, I think (I dunno how many of them are checking this thread).
Note that this handbook covers maybe only two-thirds of the progress made in that private beta branch, with the remaining third divided into “happened while I was there but hasn’t been written up (hopefully ‘yet’)” and “happened since my departure, and unclear whether anyone will have the time and priority to export it.”
I don’t know the answer; the team made their decision and then checked to see if I was okay with it; I wasn’t a part of any deliberations or discussions.
What I meant by the word “our” was “the broader context culture-at-large,” not Less Wrong or my own personal home culture or anything like that. Apologies, that could’ve been clearer.
I think there’s another point on the spectrum (plane?) that’s neither “overt anti-intellectualism” nor “It seems to me that engaging with you will be unproductive and I should disengage.” That point being something like, “It’s reasonable and justified to conclude that this questioning isn’t going to be productive to the overall goal of the discussion, and is either motivated-by or will-result-in some other effect entirely.”
Something stronger than “I’m disengaging according to my own boundaries” and more like “this is subtly but significantly transgressive, by abusing structures that are in place for epistemic inquiry.”
If the term “sealioning” is too tainted by connotation to serve, then it’s clearly the wrong word to use; TIL. But I disagree that we don’t need or shouldn’t have any short, simple handle in this concept space; it still seems useful to me to be able to label the hypothesis without (as Oliver did) having to write words and words and words and words. The analogy to the usefulness of the term “witchhunt” was carefully chosen; it’s the sort of thing that’s hard to see at first, and once you’ve put forth the effort to see it, it’s worth … idk, caching or something?
I agree that you’ve said this multiple times, in multiple places; I wanted you to be able to say it shortly and simply. To be able to do something analogous to saying “from where I’m currently standing, this looks to me like a witchhunt” rather than having to spell out, in many different sentences, what a witchhunt is and why it’s bad and how this situation resembles that one.
My caveats and hedges were mainly not wanting to be seen as putting words in your mouth, or presupposing your endorsement of the particular short sentence I proposed.
I note that we, as a culture, have reified a term for this, which is “sealioning.”
Naming the problem is not solving the problem; sticking a label on something is not the same as winning an argument; the tricky part is in determining which commentary is reasonably described by the term and which isn’t (and which is controversial, or costly-but-useful, and so forth).
But as I read through this whole comment chain, I noticed that I kept wanting Oliver to be able to say the short, simple sentence:
“My past experience has led me to have a prior that threads from you beginning like this turn out to be sealioning way more often than similar threads from other people.”
Note that that’s my model of Oliver; the real Oliver has not actually expressed that [edit: exact] sentiment [edit: in those exact words] and may have critical disagreements with my model of him, or critical caveats regarding the use of the term.
(I expect the answer to 2 will still be the same from your perspective, after reading this comment, but I just wanted to point out that not all influences of a CFAR staff member cash out in things-visible-in-the-workshop; the part of my FB post that you describe as 2 was about strategy and research and internal culture as much as workshop content and execution. I’m sort of sad that multiple answers have had a slant that implies “Duncan only mattered at workshops/Duncan leaving only threatened to negatively impact workshops.”)
On reading Anna’s above answer (which seems true to me, and also satisfies a lot of the curiosity I was experiencing, in a good way), I noted a feeling of something like “reading this, the median LWer will conclude that my contribution was primarily just ops-y and logistical, and the main thing that was at threat when I left was that the machine surrounding the intellectual work would get rusty.”
It seems worth noting that my model of CFAR (subject to disagreement from actual CFAR) is viewing that stuff as a domain of study, in and of itself—how groups cooperate and function, what makes up things like legibility and integrity, what sorts of worldview clashes are behind e.g. people who think it’s valuable to be on time and people who think punctuality is no big deal, etc.
But this is not necessarily something super salient in the median LWer’s model of CFAR, and so I imagine the median LWer thinking that Anna’s comment means my contributions weren’t intellectual or philosophical or relevant to ongoing rationality development, even though I think Anna-and-CFAR did indeed view me as contributing there, too (and thus the above is also saying something like “it turned out Duncan’s disappearance didn’t scuttle those threads of investigation”).
In general, if you don’t understand what someone is saying, it’s better to ask “what do you mean?” than to say “are you saying [unrelated thing that does not at all emerge from what they said]??” with double punctuation.
They do. The distinction seems to me to be something like endorsement of a “counting up” strategy/perspective versus endorsement of a “counting down” one, or reasonable disagreement about which parts of the dog food are actually beneficial to eat at what times versus which ones are Goodharting or theater or low payoff or what have you.
I’m not saying that, either.
I request that you stop jumping to wild conclusions and putting words in people’s mouths, and focus on what they are actually saying.
That’s not the question that was asked, so … no.
Edit: more helpfully, I found it valuable for thinking about rationality and thinking about CFAR from a strategic perspective—what it was, what it should be, what problems it was up against, how it interfaced with the rest of society.
I’m reading the replies of current CFAR staff with great interest (I’m a former staff member who ended work in October 2018), as my own experience within the org was “not really; to some extent yes, in a fluid and informal way, but I rarely see us sitting down with pen and paper to do explicit goal factoring or formal double crux, and there’s reasonable disagreement about whether that’s good, bad, or neutral.”
Historically, CFAR had the following concerns (I haven’t worked there since Oct 2018, so their thinking may have changed since then; if a current staff member gets around to answering this question you should consider their answer to trump this one):
The handbook material doesn’t actually “work” in the sense that it can change lives; the workshop experience is crucial to what limited success CFAR *is* able to have, and there’s concern about falsely offering hope.
There is such a thing as idea inoculation; the handbook isn’t perfect and certainly can’t adjust itself to every individual person’s experience and cognitive style. If someone gets a weaker, broken, or uncanny-valley version of a rationality technique out of a book, not only may it fail to help them in any way, but it will also make subsequently learning [a real and useful skill that’s nearby in concept space] correspondingly more difficult, both via conscious dismissiveness and unconscious rounding-off.
To the extent that certain ideas or techniques only work in concert or as a gestalt, putting the document out on the broader internet where it will be chopped up and rearranged and quoted in chunks and riffed off of and likely misinterpreted, etc., might be worse than not putting it out at all.
[Disclaimer: have not been at CFAR since October 2018; if someone currently from the org contradicts this, their statement will be more accurate about present-day CFAR]
No (CFAR’s mission has always been narrower/more targeted) and no (not in any systematic, competent fashion).
In case no one who currently works at CFAR gets around to answering this (I was there from Oct 2015 to Oct 2018 in a pretty influential role but that means I haven’t been around for about fourteen months):
Meditations on Moloch is top of the list by a factor of perhaps four
Different Worlds as a runner up
Lots of social dynamic stuff/how groups work/how individuals move within groups:
Social Justice and Words, Words, Words
I Can Tolerate Anything Except The Outgroup
Guided By The Beauty Of Our Weapons
Yes, We Have Noticed The Skulls
Book Review: Surfing Uncertainty
I’d be curious for an answer to this one too, actually.