Can you say more about these for the benefit of folks like me who don’t know about them? What kind of “bad reception” did it get, and in what way was it “controversial”? Was it woo-flavored, or something else?
AnnaSalamon
I think “you should one-box on Newcomb’s problem” is probably an example. By the time it was as formalized as TDT it was probably not all that woo-y looking, but prior to that I think a lot of people had an intuition along the lines of “yes it would be tempting to one-box but that’s woo thinking that has me thinking that.”
I want to state more explicitly where I’m coming from, about LW and woo.
One might think: “LW is one of few places on the internet that specializes in having only scientific materialist thoughts, without the woo.”
My own take is more like: “LW is one of few places on the internet that specializes in trying to have principled, truth-tracking models and practices about epistemics, and in e.g. trying to track that our maps are not the territory, trying to ask what we’d expect to see differently if particular claims are true/not-true, trying to be a ‘lens that sees its own flaws.’”
Something I don’t want to see on LW, that I think at least sometimes happens under both the heading of “fake frameworks” and the heading of “woo” (and in some other places on LW too), is something like “let’s not worry about the ultimate nature of the cosmos, or what really cleaves nature at the joints right now. Let’s say some sentences because saying these sentences seems locally useful.”
I worry about this sort of thing being on LW because, insofar as those sentences make truth-claims about the cosmos, deciding to “take in” those sentences “because they’re useful,” without worrying about the nature of the cosmos, means deciding to acquire intentionally unreflective views on the nature of the cosmos, which is not the thing I hope we’re here for. And it risks muddying the rationality project thereby.
(Alternate version I’m fine with, that is a bit close to this: “observe that people seem to get use out of taking in sentences X and Y. Ask what this means about the cosmos.”)
(Additional alternate version I’m fine with: notice that a hypothesis seems to be paying at least some rent, despite being false. Stay interested in both facts. Play around and see how much more rent you can extract from the hypothesis, while still tracking that it is false or probably-false, and while still asking what the unified ultimate nature of the cosmos might be, that yields this whole thing. I think this is another thing people sometimes do under the heading of “fake frameworks,” and I like this one.)
Something else I don’t want to see on LW (well, really the same thing again, but stated differently because I think it might be perceived differently) is: “let’s not read author X, or engage with body of material Y, or hypothesis Z, because it’s woo.” (Or: “… because people who engaged with that seem to have come to bad ends” or “because saying those sentences seems to cause instrumental harm.”) I don’t want this because this aversion, at least stated in this way, doesn’t seem principled, and LW is one of the few places on the internet where many folks aspire to avoid unprincipled social mimicry of “let’s think this way and not that way,” and instead to ask how our minds work and how epistemics work and what this means about what actually works for forming accurate maps.
(I really like having meta-level conversations about whether we should talk in these ways, though! And I like people who think we should talk in the ways I’m objecting to stating their disagreements with me/whoever, and the reasons for their disagreements, and then folks trying together to figure out what’s true. That’s part of how we can do the actually principled thing. By not shaming/punishing particular perspectives, but instead arguing with them.)
One example, maybe: I think the early 20th century behaviorists mistakenly (to my mind) discarded the idea that e.g. mice are usefully modeled as having something like (beliefs, memories, desires, internal states), because they lumped this in with something like “woo.” (They applied this also to humans, at least sometimes.)
The article “Cognition all the way down” argues that a similar transition may be useful in biology, where e.g. embryogenesis may be more rapidly modeled if biologists become willing to discuss the “intent” of a given cellular signal or similar. I found it worth reading. (HT: Adam Scholl, for showing me the article.)
Thanks. Are you up for saying more about what algorithm (you in hindsight notice/surmise) you were following internally during that time, and how it did/didn’t differ from the algorithm you were following during your “hyper-analytical programmer” times?
Can you say a bit more about why?
Do you agree that the social pressure in the pineapple and nose-picking examples isn’t backchained from something like “don’t spoil our game, we need everyone in this space to think/speak a certain way about this or our game will break”?
For example, if you go to a go club and ask the players there how to get stronger at go, and you take their advice, you’ll both get stronger at go and become more like the kind of person who hangs out in go clubs. If you just want to be in sync with the go club narrative and don’t care about the game, you’ll still ask most of the same questions: the go players will have a hard time telling your real motivation, and it’s not clear to me that they have an incentive to try.
This seems right to me about most go clubs, but there’re a lot of other places that seem to me different on this axis.
Distinguishing features of Go clubs from my POV:
A rapid and trustworthy feedback loop, where everyone wins and loses at non-rigged games of Go regularly. (Opposite of schools proliferating without evidence.)
A lack of need to coordinate individuals. (People win or lose Go games on their own, rather than by needing to organize other people into coordinating their play.)
Some places where I expect “being in sync with the narrative” would diverge more from “just figuring out how to get stronger / how to do the object-level task in a general way”:
A hypothetical Go club that somehow twisted around to boost a famous player’s ego about how very useful his particular life-and-death problems were, or something, maybe so they could keep him around and brag about how they had him at their club, and so individual members could stay on his good side. (Doesn’t seem very likely, but it’s a thought experiment.)
Many groups with an “ideological” slant, e.g. the Sierra Club or ACLU or a particular church
(Maybe? Not sure about this one.) Many groups that are trying to coordinate their members to follow a particular person’s vision for coordinated action, e.g. Ikea’s or most other big companies’ interactions with their staff, or even a ~8-employee coffee shop that’s trying to realize a particular person’s vision
I haven’t been able to construct any hypotheticals where I’d use it…. tl;dr I think narrative syncing is a natural category but I’m much less confident that “narrative syncing disguised as information sharing” is a problem worth noting…
I’m curious what you think of the examples in the long comment I just made (which was partly in response to this, but which I wrote as its own thing because I also wish I’d added it to the post in general).
I’m now thinking there’re really four concepts:
Narrative syncing. (Example: “the sand is lava.”)
Narrative syncing that can easily be misunderstood as information sharing. (Example: many of Fauci’s statements about covid, if this article about it is correct.)
Narrative syncing that sets up social pressure not to disagree, or not to weaken the apparent social norm about how we’ll talk about that. (Example: “Gambi’s is a great restaurant and we are all agreed on going there,” when said in an irate tone of voice after a long and painful discussion about which restaurant to go to.)
Narrative syncing that falls into categories #2 and #3 simultaneously. (Example: “The 9/11 terrorists were cowards,” if used to establish a norm for how we’re going to speak around here rather than to share honest impressions and invite inquiry.)
I am currently thinking that category #4 is my real nemesis — the actual thing I want to describe, and that I think is pretty common and leads to meaningfully worse epistemics than an alternate world where we skillfully get the good stuff without the social pressures against inquiry/speech.
I also have a prediction that most (though not all) instances of #2 will also be instances of #3, which is part of why I think there’s a “natural cluster worth forming a concept around” here.
I agree with some commenters (e.g. Taran) that the one example I gave isn’t persuasive on its own, and that I can imagine different characters in Alec’s shoes who want and mean different things. But IMO there is a thing like this that totally happens pretty often in various contexts. I’m going to try to give more examples, and a description of why I think they are examples, to show why I think this.
Example: I think most startups have a “plan” for success, and a set of “beliefs” about how things are going to go, that the CEO “believes” basically for the sake of anchoring the group, and that the group members feel pressure to avow, or to go along with in their speech and sort-of in their actions, as a mechanism of group coordination and of group (morale? something like this). It’s not intended as a neutral prediction individuals would individually be glad to accept/reject bets based on. And the (admittedly weird and rationalist-y) CEOs I’ve talked to about this have often been like “yes, I felt pressure to have beliefs or to publicly state beliefs that would give the group a positive, predictable vision, and I found this quite internally difficult somehow.”
When spotting “narrative syncing” in the wild, IMO a key distinguisher of “narrative syncing” (vs sharing of information) is whether there is pressure not to differ from the sentences in question, and whether that pressure is (implicitly) backchained from “don’t spoil the game / don’t spoil our coordination / don’t mess up an apparent social consensus.” So, if a bunch of kids are playing “the sand is lava” and you say “wait, I’m not sure the sand is lava” or “it doesn’t look like lava to me,” you’re messing up the game. This is socially discouraged. Also, if a bunch of people are coordinating their work on a start-up and are claiming to one another that it’s gonna work out, and you are working there too and say you think it isn’t, this… risks messing up the group’s coordination, somehow, and is socially discouraged.
OTOH, if you say “I like pineapples on my pizza” or “I sometimes pick my nose and eat it” and a bunch of people are like “eww, gross”… this is social pressure, but the pressure mostly isn’t backchained from anything like “don’t spoil our game / don’t mess up our apparent social consensus”, and so it is probably not narrative syncing.
Or to take an intermediate/messy example: if you’re working at the same start-up as in our previous example, and you say “our company’s logo is ugly” to the others at that start-up, this… might be socially insulting, and might draw frowns or other bits of social pressure, but on my model the pressure will be weaker than the sort you’d get if you were trying to disagree with the core narrative the start-up is using to coordinate (“we will do A, then B, and thereby succeed as a company”), and what push-back you do get will have less of the “don’t spoil our game!” “what if we stop being able to coordinate together!” nature (though still some) and more other natures such as “don’t hurt the feelings of so-and-so who made the logo” or “I honestly disagree with you.” It’s… still somewhat narrative syncing-y to my mind, in that folks in the particular fictional start-up I’m imagining are e.g. socially synchronizing some around the narrative “everything at our company is awesome,” but … less.
It seems to me that when a person or set of people “take offense” at particular speech, this is usually (always?) an attempt to enforce narrative syncing.
Another example: sometimes a bunch of people are neck-deep in a discussion, and a new person wanders in and disagrees with part X of the assumed model, and people really don’t want to have to stop their detailed discussion to talk again about whether X is true. So sometimes, in such cases, people reply with social pressure to believe/say X — for example, a person will “explain” to the newcomer “why X is true” in a tone that invites capitulation rather than inquiry, and the newcomer will feel as though they are being a bit rude if they don’t buy the argument. I’d class this under “don’t mess up our game”-type social pressure, and under “narrative syncing disguised as information exchange.” IMO, a better thing to do in this situation is to make the bid to not mess up the game explicit, by e.g. saying “I hear you, it makes sense that you aren’t sure about X, but would you be up for assuming X for now for the sake of the argument, so we can continue the discussion-bit we’re in the middle of?”
Narrative Syncing
Some components of my own models, here:
1. I think most of the better-funded EA organizations would not prefer to have most LWers working there, whether at $1M/yr, at a more typical salary, or for free.
(Even though many of these same LWers are useful in many other places.)
2. I think many of the better-funded EA organizations would prefer (being able to continue employing at least their most useful staff members) to (receiving an annual donation equal to 30x what that staff member could make in industry).
3. If a typical LWer somehow really decided, deep in themselves, to try to do good with all their heart and all their mind and creativity… or to do as much of this as was compatible with still working no more than 40 hrs/week and having a family and a life… I suspect this would be quite considerably more useful than donating 10% of their salary to some already-funded-to-near-saturation EA organization. (Since the latter effect is often small.) (Though some organizations are not that well-funded! So this varies by organization IMO.)
Points 2 and 3 are as far as I can get toward agreeing with the OP’s estimated factor of 300. It doesn’t get me all the way there (well, I guess it might for the mean person, but certainly not for the median; plus there’re assumptions implicit in trying to use a multiplier here that I don’t buy or can’t stomach). But it makes me sort of empathize with how people can utter sentences like those.
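(Spelling out the arithmetic I take to be implicit in points 2 and 3, as my own reconstruction rather than anything stated in the OP: if keeping a highly useful staff member is worth more to an org than donations of 30x an industry salary, while a typical earning-to-give donor contributes roughly 10% of such a salary, then the implied multiplier is

$$\frac{30 \times \text{(industry salary)}}{0.1 \times \text{(industry salary)}} = 300.$$

Note that this arithmetic only applies to the org’s most useful staff members, which is part of why it doesn’t get me to that factor for the median person.)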
In terms of what to make of this:
Sometimes people jam 1 and 2 together, to get a perspective like “most people are useless compared to those who work at EA organizations.” I think this is not quite right, because “scaling an existing EA organization’s impact” is not at all the only way to do good, and my guess is that the same people may be considerably better at other ways to do good than they are at adding productivity to an [organization that already has as many staff as it knows how to use].
One possible alternate perspective:
“Many of the better funded EA organizations don’t much know how to turn additional money, or additional skilled people, into doing their work faster/more/better. So look for some other way to do good and don’t listen too much to them for how to do it. Rely on your own geeky ideas, smart outside friends who’ve done interesting things before, common sense and feedback loops and experimentation and writing out your models and looking for implications/inconsistencies, etc. in place of expecting EA to have a lot of pre-found opportunities that require only your following of their instructions.”
I could still be missing something, but I think this doesn’t make sense. If the marginal numbers are as you say and if EA organizations started paying everyone 40% of their counterfactual value, the sum of “EA financial capital” would go down, and so the counterfactual value-in-“EA”-dollars of marginal people would also go down, and so the numbers would probably work out with lower valuations per person in dollars. Similarly, if “supply and demand” works for finding good people to work at EA organizations (which it might? I’m honestly unsure), the number of EA people would go up, which would also reduce the counterfactual value-in-dollars of marginal EA people.
More simply, it seems a bit weird to start with “money is not very useful on the margin, compared to people” and get from there to “because of how useless money is compared to people, if we spend money to get more people, this’ll be a worse deal than you’d think.”
Although, I was missing something / confused about something prior to reading your reply: it does seem likely to me on reflection that losing all of EA’s dollars, but keeping the people, would leave us in a much better position than losing all of EA’s people (except a few very wealthy donors, say) but keeping its dollars. So in that sense it seems likely to me that EA has much more value-from-human-capital than value-from-financial-capital.
Great use of logic to try to force us to have models, and to make those models explicit!
I don’t know; finding a better solution sounds great, but there aren’t that many people who talk here, and many of us are fairly reflective and ornery, so if a small group keeps repeatedly requesting this and doing it, it’d probably be sufficient to keep “aspiring rationalist” as at least a substantial minority of what’s said.
because EA has much more human capital than financial capital
Is this a typo? It seems to be in direct contradiction with the OP’s claim that EA is people-bottlenecked and not funding-bottlenecked, which I otherwise took you to be agreeing with.
This is a bit off-topic with respect to the OP, but I really wish we’d more often say “aspiring rationalist” rather than “rationalist.” (Thanks to Said for doing this here.) The use of “rationalist” in parts of this comment thread and elsewhere grates on me. I expect most uses of either term are just people using the phrase other people use (which I have no real objection to), but it seems to me that when we say “aspiring rationalist” we at least sometimes remember that to form a map that is a better predictor of the territory requires aspiration, effort, and forming one’s beliefs via mental motions that’ll give different results in different worlds, while when we say “rationalist,” it sounds like it’s just a subculture.
TBC, I don’t object to people describing other people as “self-described rationalists” or similar, just to using “rationalist” as a term to identify with on purpose, or as the term for what LW’s goal is. I’m worried that if we intentionally describe ourselves as “rationalists,” we’ll aim to be a subculture (“we hang with the rationalists”; “we do things the way this subculture does them”) instead of actually asking the question of how we can form accurate beliefs.
I non-confidently think “aspiring rationalist” used to be somewhat more common as a term, and its gradual disappearance (if it has been gradually disappearing; I’m not sure) is linked with some LWers and similar having less of an aspirational identity, and more of a “we’re the set of people who tread water while affiliating with certain mental habits or keywords or something” identity.
To elaborate a bit where I’m coming from here: I think the original idea with LessWrong was basically to bypass the usual immune system against reasoning, to expect this to lead to some problems, and to look for principles such as “notice your confusion,” “if you have a gut feeling against something, look into it and don’t just override it,” “expect things to usually add up to normality” that can help us survive losing that immune system. (Advantage of losing it: you can reason!)
My guess is that this (having principles in place of a reflexive or socially mimicked immune system) was and is basically still the right idea. I didn’t use to think this, but I do now.
An LW post from 2009 that seems relevant (haven’t reread it or its comment thread; may contradict my notions of what the original idea was for all I know): Reason as Memetic Immune Disorder
Do you have a principled model of what an “epistemic immune system” is and why/whether we should have one?
It seems “taboo” to me. Like, when I go to think about this, I feel… inhibited in some not-very-verbal, not-very-explicit way. Kinda like how I feel if I imagine asking an inane question of a stranger without a socially sensible excuse, or how I felt when a clerk asked me why I was buying so many canned goods very early in Covid.
I think we are partly seeing the echoes of a social flinch here, somehow. It bears examining!
I notice you are noticing confusion. Any chance either the data, or the code, has a bug?