Update, 11/28/2020: I wouldn’t write the comment below today. I’ve been meaning to revise it for a while, and was having trouble coming up with a revision that didn’t itself seem to me to have a bunch of problems, but this comment of mine was just cited again by an outside blog as a reason why folks shouldn’t associate with Michael, so maybe I should stop trying to revise my old comment perfectly and just try to do it at all. I’m posting my current, updated opinion in a comment-reply; my original comment from Feb 27, 2019 is left unedited below, since it played a role in a bunch of community decisions and so should be recorded somewhere IMO.
----
I used to make the argument in the OP a lot. I applied it (among other applications) to Michael Vassar, who many people complained to me about (“I can’t believe he made obviously-fallacious argument X; why does anybody listen to him”), and who I encouraged them to continue listening to anyhow. I now regret this.
Here are the two main points I think past-me was missing:
1. Vetting and common knowledge creation are important functions, and ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can’t) help fill these functions.
(Communities — like the community of physicists, or the community of folks attempting to contribute to AI safety — tend to take a bunch of conclusions for granted without each-time-reexamining them, while trying to add to the frontier of knowledge/reasoning/planning. This can be useful, and it requires a community vetting function. This vetting function is commonly built via having a kind of “good standing” that thinkers/writers can be ruled out of (and into), and taking a claim as “established knowledge that can be built on” when ~all “thinkers in good standing” agree on that claim.)
(I realize the OP kind-of acknowledges this when discussing “social engineering”, so maybe the OP gets this right? But I undervalued this function in the past, and the term “social engineering” seems to me dismissive of a function that in my current view contributes substantially to a group’s ability to produce new knowledge.)
2. Even when a reader is seeking help brainstorming hypotheses (rather than vetting conclusions), they can still be lied-to and manipulated, and such lies/manipulations can sometimes disrupt their thinking for long and costly periods of time (e.g., handing Ayn Rand to the wrong 14-year-old; or, in my opinion, handing Michael Vassar to a substantial minority of smart aspiring rationalists). Distinguishing which thinkers are likely to lie or manipulate is a function more easily fulfilled by a group sharing info that rules thinkers out for past instances of manipulative or dishonest tactics (rather than by the individual listener planning to ignore past bad arguments and to just successfully detect every single manipulative tactic on their own).
So, for example, Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.
Similarly, back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words), telling them that the most helpful thing they could do about AI risk was to continue on their present career (he said this was useful because that way they wouldn’t rationalize that AI risk must be false). Despite these and similar instances, I continued to recommend people talk to him because I had “ruled him in” as a source of some good novel ideas, and I did this without warning people about the rest of it. I think this was a mistake. (I also think that my recommending Michael led to considerable damage over time, but trying to establish that claim would require more discussion than seems to fit here.)
To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.
While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people’s filters are less functional (and charisma is more powerful) in the real-life medium.
On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across “The Nurture Assumption”, which was very important for my intellectual growth and which I think has held up well. I’ve seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn’t a necessary source of this, but I think earlier for me he was).
There have been some books, including “Inadequate Equilibria” and “Zero To One”, that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition—it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can’t identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there’s no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.
I still think on net it’s worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.
Can someone please fill me in, what are some of Michael Vassar’s best ideas, that made him someone who people “ruled in” and encouraged others to listen to?
Some examples of valuable true things I’ve learned from Michael:
Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
Examples of potentially valuable medical innovation that never see wide application are abundant.
A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.
A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
How intersubjectivity works; “objective” reality refers to the conserved *patterns* or *relationships* between different perspectives.
People who have coherent philosophies—even opposing ones—have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.
Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Seems right to me, as I was never tied to such a narrative in the first place.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
What kind of risks is he talking about here? Also does he mean that people value their social positions too much, or that they’re not taking enough risks even given their current values?
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more “grandiose” or “prestigious” activities like thinking about philosophy or AI safety, at least for me… I think overall I’m more afraid that someone who could be doing productive “grandiose” work chooses not to in favor of “fiddly puttering”, than the reverse.
The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
That seems almost certain to be true, but I don’t see evidence that there’s a big enough effect for me to bother spending the time to investigate further. (I seem to be doing fine without doing any of these things, and I’m not sure who is deriving large benefits from them.) Do you want to try to change my mind about this?
Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
Couldn’t this just be that we’ve picked most of the low-hanging fruit, plus the fact that picking the higher fruit requires more coordination among larger groups of humans and that is very costly? Or am I just agreeing with Michael here?
Examples of potentially valuable medical innovation that never see wide application are abundant.
This seems quite plausible to me, as I used to lament that a lot of innovations in cryptography never got deployed.
A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.
“Doesn’t occur to them” seems too strong but I think I know what you mean. Can you give some examples of what these projects are?
A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
Agreed, and I think this is a big problem as far as advancing human rationality because we currently have a very poor theoretical understanding of coalitional strategies.
Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.
This seems plausible but what are some examples of such “evil”? What happened to Enron, perhaps?
How intersubjectivity works; “objective” reality refers to the conserved patterns or relationships between different perspectives.
It would make more sense to me to say that objective reality refers to whatever explains the conserved patterns or relationships between different perspectives, rather than the patterns/relationships themselves. I’m not sure if I’m just missing the point here.
People who have coherent philosophies—even opposing ones—have more in common in the way they think, and are more likely to get meaningful stuff done together, than they can with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.
I think I prefer to hold a probability distribution over coherent philosophies, plus a lot of weight on “something we’ll figure out in the future”.
Also a meta question: Why haven’t these been written up or discussed online more? In any case, please don’t feel obligated to answer my comments/questions in this thread. You (or others who are familiar with these ideas) can just keep them in mind for when you do want to discuss them online.
I think these could in part be “lessons relevant to Sarah”, a sort of philosophical therapy that can’t be completely taken out of context, which is why some of these might seem obvious or of low relevance.
>> Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more “grandiose” or “prestigious” activities like thinking about philosophy or AI safety, at least for me… I think overall I’m more afraid that someone who could be doing productive “grandiose” work chooses not to in favor of “fiddly puttering”, than the reverse.
This suggests that really valuable contributions are bottlenecked more on obsession than on being good at directing attention in a “valuable” direction.
For example, for the very ambitious, the bus ticket theory suggests that the way to do great work is to relax a little. Instead of gritting your teeth and diligently pursuing what all your peers agree is the most promising line of research, maybe you should try doing something just for fun. And if you’re stuck, that may be the vector along which to break out.
This seems plausible but what are some examples of such “evil”? What happened to Enron, perhaps?
According to the official narrative, the Enron scandal is mostly about people engaging in actions that benefit themselves. I don’t know whether that’s true, as I don’t have much insight into it. If it is true, that’s not what is meant here.
It’s not about actions that are actually self-beneficial.
Let’s say I’m at lunch with a friend. I draw the most benefit from my lunch when I have a conversation as intellectual equals. At the same time, there’s sometimes an impulse to say something to put my friend down and to demonstrate that I’m higher than him in the social pecking order. If I follow that instinct and say something to put my friend down, I’m engaging in evil in the sense Vassar talks about.
The instinct has some value in a tribal context, where it’s important to fight over the social pecking order, but I draw no value from it at a lunch with my friend.
I’m a person who has some self-awareness, and I try not to go down such roads when those evolutionary instincts come up. On the other hand, you have people in the middle management of immoral mazes who spend a lot of their time following such instincts and being evil.
I would say it’s possible, just at a lower probability proportional to the difference in intelligence. More intelligence will still correspond to better ideas on average. That said, it was not acclaimed scientists or ivy-league research teams that invented the airplane. It was two random high-school dropouts in Ohio. This is not to say that education or prestige are the same thing as intelligence[1], simply that brilliant innovations can sometimes be made by the little guy who’s not afraid to dream big.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
Oh hey, I’ve accidentally tried this just by virtue of my personality!
Results: high-variance ideas are high-variance. YMMV, but so far I haven’t had a “hit”. (My friend politely calls my ideas “hits-based ideas”, which is a great term.)
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
My sense is that his worldview was ‘very sane’ in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that’s not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.
I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were ‘original’ and which weren’t. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)
Good points (similar to Raemon’s). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I do find the “subtle skill of seeing the world with fresh eyes” potentially extremely valuable, which is why I suppose Anna kept encouraging people.
I think I have this skill, but I don’t know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn’t the primary factor, but was probably a significant one.
Another thing that’s quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a ‘safe’ way?
Thanks (& Yoav for the clarification). So in your opinion, is MV dangerous to a class of people with certain kinds of beliefs the way Harry was to Draco (the risk being a pure necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?
When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there’s a way in which the truth isn’t “safe” if safety is measured by something like ‘reversibility’ or ‘ability to continue being the way they were.’ It is also often the case that one can’t make a single small change, and then move on; if, say, you manage to convince a Christian that God isn’t real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.
Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which a ‘reversibility’ criterion is sort of ludicrous; it’s not like people can undo puberty, or so on. If you present them with an alternative system, they don’t need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you’re in something of a double bind; it’s somewhat irresponsible to break people’s functioning systems without giving them a replacement, and it’s somewhat creepy if you break people’s functioning systems to pitch your replacement. (And since ‘functioning’ is value-laden, it’s easy for you to think their system needs replacing.)
He is referring to HPMOR, where the following happens (major spoiler for the first 25 chapters):
Harry tries to show Draco the truth about blood purism, and Draco goes through a really bad crisis of faith. Harry tries to do it effectively and gracefully, but nonetheless it is hard, and could even be somewhat dangerous.
Alas, I spent this year juuust coming to the conclusion that it was all more dangerous than I thought, and I am still wrapping my brain around it.
I suppose it was noteworthy that I don’t think I got very damaged, and most of that was via… just not having prolonged contact with the four Vassar-type people that I encountered (the two people whom I did have more extended contact with, I think, may have damaged me somewhat).
So, I guess the short answer is “if you hang out with weird iconoclasts with interesting takes on agency and seeing the world, and you don’t spend more than an evening every 6 months with them, you will probably get a slight benefit with little to no risk. If you hang out more than that you take on proportionately more risk/reward. The risks/rewards are very person specific.”
My current take is something like “the social standing of this class of person should be the mysterious old witch who lives at the end of the road, who everyone respects but, like, you’re kinda careful about when you go ask for their advice.”
FWIW, I’ve never had a clear sense that Vassar’s ideas were especially good (but, also, not had a clear sense that they weren’t). More that, Vassar generally operates in a mode that is heavily-brainstorm-style-thinking and involves seeing the world in a particular way. And this has high-variance-but-often-useful side effects.
Exposure to that way of thinking has a decent chance of causing people to become more agenty, or dislodged from a subpar local optimum, or gain some subtle skills about seeing the world with fresh eyes. The point is less IMO about the ideas and more about having that effect on people.
(With the further caveat that this is all a high variance strategy, and the tail risks do not fail gracefully, sometimes causing damage, in ways that Anna hints at and which I agree would be a much larger discussion)
The short version of my current stance on Vassar is that:
(1) I would not trust him to conform to local rules or norms. He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often, to come closer to advocating physical violence than I would like (e.g. this tweet), and to have conversational patterns that often disorient his interlocutors and leave them believing different things while talking to Michael than they do a bit later.
(2) I don’t have overall advice that people ought to avoid Vassar, in spite of (1), because it now seems to me that he is trying to help himself and others toward truth, and I think we’re bottlenecked on that enough that I could easily imagine (2) overshadowing (1) for individuals who are in a robust place (e.g., who don’t feel like they are trapped or “have to” talk to a person or do a thing) and who are choosing who they want to talk to. (There were parts of Michael’s conversational patterns that I was interpreting as less truth-conducive a couple years ago than I am now. I now think that this was partly because I was overanchored on the (then-recent) example of Brent, as well as because I didn’t understand part of how he was doing it, but it is possible that it is current-me who is wrong.) (As one example of a consideration that moved me here: a friend of mine whose epistemics I trust, and who has known Vassar for a long time, said that she usually in the long-run ended up agreeing with her while-in-the-conversation self, and not with her after-she-left-the-conversation self.)
Also I was a bit discomfited when my previous LW comment was later cited by folks who weren’t all that LW-y in their conversational patterns as a general “denouncement” of Vassar, although I should probably have predicted this, so, that’s another reason I’d like to try to publicly state my revised views. To be clear, I do not currently wish to “denounce” Vassar, and I don’t even think that’s what I was trying to do last time, although I think the fault was mostly mine that some people read my previous comment as a general denouncement.
Also, to be clear, what I am saying here is just that on the strength of my own evidence (which is not all evidence), (1) and (2) seem true to me. I am not at all trying to be a court here, or to evaluate any objections anyone else may have to Vassar, or to claim that there are no valid objections someone else might have, or anything like that. Just to share my own revised impression from my own limited first-hand observations.
He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often
This is true, but I’m confused about how to relate to it. Part of Michael’s explicit strategy seems to be identifying people stuck in bad equilibria, and destabilizing them out of it. If I were to take an evolutionary-psychology steelman of what a psychotic episode is, a (highly uncertain) interpretation I might make is that a psychotic episode is an adaptation for escaping such equilibria, combined with a negative retrospective judgment of how that went. Alternatively, those people might be using psychedelics (which I believe are in fact effective for breaking people out of equilibria), and getting unlucky with the side effects. This is bad if it’s not paired with good judgment about which equilibria are good vs. bad ones (I don’t have much opinion on how good his judgment in this area is). But this seems like an important function, which not enough people are performing.
I decided to ignore Michael after our first in-person conversation, where he said I shouldn’t praise the Swiss healthcare system which I have lots of experience with, because MetaMed is the only working healthcare system in the world (and a roomful of rationalists nodded along to that, suggesting that I bet money against him or something).
This isn’t to single out Michael or the LW community. The world is full of people who spout nonsense confidently. Their ideas can deserve close attention from a few “angel investors”, but that doesn’t mean they deserve everyone’s attention by default, as Scott seems to say.
There’s a really good idea slipped into the above comment in passing; the purpose of this comment is to draw attention to it.
close attention from a few “angel investors”
Scott’s article, like the earlier “epistemic tenure” one, implicitly assumes that we’re setting a single policy for whose ideas get taken how seriously. But it may make sense for some people or communities—these “angel investors”—to take seriously a wider range of ideas than the rest of us, even knowing that a lot of those ideas will turn out to be bad ones, in the hope that they can eventually identify which ones were actually any good and promote those more widely.
Taking the parallel a bit further, in business there are more levels of filtering than that. You have the crazy startups; then you have the angel investors; then you have the early-stage VCs; then you have the later VCs; and then you have, I dunno, all the world’s investors. There are actually two layers of filtering at each stage—investors may choose not to invest, and the company may fail despite the investment—but let’s leave that out for now. The equivalent in the marketplace of ideas would be a sort of hierarchy of credibility-donors: first of all you have individuals coming up with possibly-crackpot ideas, then some of them get traction in particular communities, then some of those come to the attention of Gladwell-style popularizers, and then some of the stuff they popularize actually makes it all the way into the general public’s awareness. At each stage it should be somewhat harder to get treated as credible. (But is it? I wouldn’t count on it. In particular, popularizers don’t have the best reputation for never latching onto bad ideas and making them sound more credible than they really are...)
(Perhaps the LW community itself should be an “angel investor”, but not necessarily.)
Are there further details to these accusations? The linked post from 8 months ago called for an apology in absence of further details. If there are not further details, a new post with an apology is in order.
Um, good point. I am not sure which details you’re asking about, but I am probably happy to elaborate if you ask something more specific.
I hereby apologize for the role I played in Michael Vassar’s ostracism from the community, which AFAICT was unjust and harmful to both the community and Michael. There’s more to say here, and I don’t yet know how to say it well. But the shortest version is that in the years leading up to my original comment Michael was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about Michael were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have Michael around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting someone to figure out who ought to restrict their info-intake, and how, seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about Michael and related matters also helped drive this home for me.)
Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.
It is nevertheless the case that Michael has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let Michael speak at SSC meetups), it is my opinion that Michael ought to be on the “include” list.
Thoughts/comments welcome, and probably helpful for getting to shared accurate pictures about any of what’s above.
There’s a bunch of different options for interacting with a person/group/information source:
Read what they write
Go to talks by them and ask a question
Talk with them on comments on their blogs
Have 1-1 online conversations with them (calls/emails)
Invite them into your home and be friends with them
Naturally there’s a difference between “telling your friend that they should ignore the CDC” and “not letting a CDC leadership staff member into your home for dinner”. I’m much more sympathetic to the latter.
Related: As a somewhat extreme example I’ve thought about in the past, in other situations with other people, I think that people who have committed crimes (e.g. theft) could be great and insightful contributors to open research problems, but might belong, geographically, in jail, and it might be important not to allow them into my home. Especially for insightful people with unique perspectives who were intellectually productive, I’d want to put in a lot of work to ensure they can bring their great contributions in ways that aren’t open to abuse or likely to leave my friends substantially hurt on some key dimension.
–––
Thx for your comment. I don’t have a clear sense from your comment what you’re trying to suggest for Michael specifically (I’ve found it quite valuable to read his Twitter, but I’m unsure about more than that). Actually, here’s what I suspect you’re saying. I think you’re saying that the following things seem worthwhile to you: have 1-1 convos with Michael, talk to Michael at events, reply to his emails and talk with him online. And then you’re not making an active recommendation about whether to: have Michael over for dinner, have Michael stay at your house, date Michael, live with Michael, lend Michael money, start a business with Michael, etc., and you’re aiming to trust people to figure that out for themselves.
It’s not a great guess, but it’s my best (quick) guess. Thoughts?
Specifically, it’s a given that there’s some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their worldviews). You seem to imply that with Michael it’s more than that (I think Vaniver alludes to it with the creepy comment).
In other words, is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael Quirrell+ because of his outlook?
Update, 8/17/2021: See my more recent comment below.
Update, 11/28/2020: I wouldn’t write the comment below today. I’ve been meaning to revise it for awhile, and was having trouble coming up with a revision that didn’t itself seem to me to have a bunch of problems, but this comment of mine was just cited again by an outside blog as a reason why folks shouldn’t associate with Michael, so maybe I should stop trying to revise my old comment perfectly and just try to do it at all. I’m posting my current, updated opinion in a comment-reply; my original comment from Feb 27, 2019 is left unedited below, since it played a role in a bunch of community decisions and so should be recorded somewhere IMO.
----
I used to make the argument in the OP a lot. I applied it (among other applications) to Michael Vassar, who many people complained to me about (“I can’t believe he made obviously-fallacious argument X; why does anybody listen to him”), and who I encouraged them to continue listening to anyhow. I now regret this.
Here are the two main points I think past-me was missing:
1. Vetting and common knowledge creation are important functions, and ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can’t) help fill these functions.
(Communities — like the community of physicists, or the community of folks attempting to contribute to AI safety — tend to take a bunch of conclusions for granted without each-time-reexamining them, while trying to add to the frontier of knowledge/reasoning/planning. This can be useful, and it requires a community vetting function. This vetting function is commonly built via having a kind of “good standing” that thinkers/writers can be ruled out of (and into), and taking a claim as “established knowledge that can be built on” when ~all “thinkers in good standing” agree on that claim.
I realize the OP kind-of acknowledges this when discussing “social engineering”, so maybe the OP gets this right? But I undervalued this function in the past, and the term “social engineering” seems to me dismissive of a function that in my current view contributes substantially to a group’s ability to produce new knowledge.)
2. Even when a reader is seeking help brainstorming hypotheses (rather than vetting conclusions), they can still be lied-to and manipulated, and such lies/manipulations can sometimes disrupt their thinking for long and costly periods of time (e.g., handing Ayn Rand to the wrong 14-year-old; or, in my opinion, handing Michael Vassar to a substantial minority of smart aspiring rationalists). Distinguishing which thinkers are likely to lie or manipulate is a function more easily fulfilled by a group sharing info that rules thinkers out for past instances of manipulative or dishonest tactics (rather than by the individual listener planning to ignore past bad arguments and to just successfully detect every single manipulative tactic on their own).
So, for example, Julia Galef helpfully notes a case where Steven Pinker straightforwardly misrepresents basic facts about who said what. This is helpful to me in ruling out Steven Pinker as someone who I can trust not to lie to me about even straightforwardly checkable facts.
Similarly, back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words) and told them that the thing that would most help AI risk from them anyhow was for them to continue on their present career (he said this was useful because that way they wouldn’t rationalize that AI risk must be false). Despite these and similar instances, I continued to recommend people talk to him because I had “ruled him in” as a source of some good novel ideas, and I did this without warning people about the rest of it. I think this was a mistake. (I also think that my recommending Michael led to considerable damage over time, but trying to establish that claim would require more discussion than seems to fit here.)
To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.
Thanks for this response.
I mostly agree with everything you’ve said.
While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people’s filters are less functional (and charisma is more powerful) in the real-life medium.
On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across “The Nurture Assumption”, which was very important for my intellectual growth and which I think has held up well. I’ve seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn’t a necessary source of this, but I think earlier for me he was).
There have been some books, including “Inadequate Equilibria” and “Zero To One”, that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition—it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can’t identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there’s no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.
I still think on net it’s worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.
Can someone please fill me in, what are some of Michael Vassar’s best ideas, that made him someone who people “ruled in” and encouraged others to listen to?
Some examples of valuable true things I’ve learned from Michael:
Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
Examples of potentially valuable medical innovation that never see wide application are abundant.
A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.
A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
How intersubjectivity works; “objective” reality refers to the conserved *patterns* or *relationships* between different perspectives.
People who have coherent philosophies—even opposing ones—have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they can with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.
Thanks! Here are my reactions/questions:
Seems right to me, as I was never tied to such a narrative in the first place.
What kind of risks is he talking about here? Also does he mean that people value their social positions too much, or that they’re not taking enough risks even given their current values?
Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more “grandiose” or “prestigious” activities like thinking about philosophy or AI safety, at least for me… I think overall I’m more afraid that someone who could be doing productive “grandiose” work chooses not to in favor of “fiddly puttering” than the reverse.
That seems almost certain to be true, but I don’t see evidence that there’s a big enough effect for me to bother spending the time to investigate further. (I seem to be doing fine without doing any of these things, and I’m not sure who is deriving large benefits from them.) Do you want to try to change my mind about this?
Couldn’t this just be that we’ve picked most of the low-hanging fruit, plus the fact that picking the higher fruits require more coordination among larger groups of humans and that is very costly? Or am I just agreeing with Michael here?
This seems quite plausible to me, as I used to lament that a lot of innovations in cryptography never got deployed.
“Doesn’t occur to them” seems too strong but I think I know what you mean. Can you give some examples of what these projects are?
Agreed, and I think this is a big problem as far as advancing human rationality because we currently have a very poor theoretical understanding of coalitional strategies.
This seems plausible but what are some examples of such “evil”? What happened to Enron, perhaps?
It would make more sense to me to say that objective reality refers to whatever explains the conserved patterns or relationships between different perspectives, rather than the patterns/relationships themselves. I’m not sure if I’m just missing the point here.
I think I prefer to hold a probability distribution over coherent philosophies, plus a lot of weight on “something we’ll figure out in the future”.
Also a meta question: Why haven’t these been written up or discussed online more? In any case, please don’t feel obligated to answer my comments/questions in this thread. You (or others who are familiar with these ideas) can just keep them in mind for when you do want to discuss them online.
I think in part these could be “lessons relevant to Sarah”, a sort of a philosophical therapy that can’t be completely taken out of context. Which is why some of these might seem of low relevance or obvious.
I suspect this might be a subtler point?
http://paulgraham.com/genius.html suggests really valuable contributions are bottlenecked more on obsession than on being good at directing attention in a “valuable” direction.
According to the official narrative, the Enron scandal is mostly about people engaging in actions that benefit themselves. I don’t know whether that’s true, as I don’t have much insight into it. If it’s true, that’s not what is meant here.
It’s not about actions that are actually self-beneficial.
Let’s say I’m at lunch with a friend. I draw the most benefit from my lunch when we have a conversation as intellectual equals. At the same time, there’s sometimes an impulse to say something to put my friend down and demonstrate that I’m higher than he is in the social pecking order. If I follow that instinct and say something to put my friend down, I’m engaging in evil in the sense Vassar talks about.
The instinct has some value in a tribal context where it’s important to fight over the social pecking order, but I draw no value from it at lunch with my friend.
I’m a person who has some self-awareness, and I try not to go down such roads when those evolutionary instincts come up. On the other hand, you have people in the middle management of immoral mazes who spend a lot of their time following such instincts and being evil.
One thing I’ve wondered about is, how true is this for someone who’s dumber than others?
(Asking for, uh, a friend.)
I would say it’s possible, just at a lower probability proportional to the difference in intelligence. More intelligence will still correspond to better ideas on average.
That said, it was not acclaimed scientists or ivy-league research teams that invented the airplane. It was two random high-school dropouts in Ohio. This is not to say that education or prestige are the same thing as intelligence[1], simply that brilliant innovations can sometimes be made by the little guy who’s not afraid to dream big.
By all accounts the Wright Brothers were intelligent
Oh hey, I’ve accidentally tried this just by virtue of my personality!
Results: high-variance ideas are high-variance. YMMV, but so far I haven’t had a “hit”. (My friend politely calls my ideas “hits-based ideas”, which is a great term.)
http://paulgraham.com/genius.html seems to be promoting a similar idea
My sense is that his worldview was ‘very sane’ in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that’s not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.
I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were ‘original’ and which weren’t. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)
Good points (similar to Raemon’s). I would find it useful if someone created some guidance for safe ingestion of (or an alternative source for) MV-type ideas/outlook; I do find the “subtle skill of seeing the world with fresh eyes” potentially extremely valuable, which is why I suppose Anna kept on encouraging people.
I think I have this skill, but I don’t know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn’t the primary factor, but was probably a significant one.
Another thing that’s quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a ‘safe’ way?
Thanks (& Yoav for the clarification). So, in your opinion, is MV dangerous to a class of people with certain kinds of beliefs in the way Harry was to Draco (where the risk was purely a necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?
When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there’s a way in which the truth isn’t “safe” if safety is measured by something like ‘reversibility’ or ‘ability to continue being the way they were.’ It is also often the case that one can’t make a single small change, and then move on; if, say, you manage to convince a Christian that God isn’t real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.
Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which a ‘reversibility’ criterion is sort of ludicrous; it’s not like people can undo puberty, or so on. If you present them with an alternative system, they don’t need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you’re in something of a double bind; it’s somewhat irresponsible to break people’s functioning systems without giving them a replacement, and it’s somewhat creepy if you break people’s functioning systems to pitch your replacement. (And since ‘functioning’ is value-laden, it’s easy for you to think their system needs replacing.)
Ah sorry would you mind elaborating the Draco point in normie speak if you have the bandwidth?
He is referring to HPMOR, where the following happens (major spoiler for the first 25 chapters):
Harry tries to show Draco the truth about blood purism, and Draco goes through a really bad crisis of faith. Harry tries to do it effectively and gracefully, but nonetheless it is hard, and could even be somewhat dangerous.
I edited your comment to add the spoiler cover. FYI the key for this is > followed by ! and then a space.
Ah, great, thank you :)
Alas, I spent this year juuust coming to the conclusion that it was all more dangerous than I thought, and I am still wrapping my brain around it.
I suppose it is noteworthy that I don’t think I got very damaged, and most of that was via… just not having prolonged contact with the four Vassar-type people that I encountered. (The two people whom I did have more extended contact with, I think, may have damaged me somewhat.)
So, I guess the short answer is “if you hang out with weird iconoclasts with interesting takes on agency and seeing the world, and you don’t spend more than an evening every 6 months with them, you will probably get a slight benefit with little to no risk. If you hang out more than that you take on proportionately more risk/reward. The risks/rewards are very person specific.”
My current take is something like “the social standing of this class of person should be the mysterious old witch who lives at the end of the road, who everyone respects but, like, you’re kinda careful about when you go ask for their advice.”
FWIW, I’ve never had a clear sense that Vassar’s ideas were especially good (but, also, not had a clear sense that they weren’t). More that, Vassar generally operates in a mode that is heavily-brainstorm-style-thinking and involves seeing the world in a particular way. And this has high-variance-but-often-useful side effects.
Exposure to that way of thinking has a decent chance of causing people to become more agenty, or dislodged from a subpar local optimum, or gain some subtle skills about seeing the world with fresh eyes. The point is less IMO about the ideas and more about having that effect on people.
(With the further caveat that this is all a high variance strategy, and the tail risks do not fail gracefully, sometimes causing damage, in ways that Anna hints at and which I agree would be a much larger discussion)
The short version of my current stance on Vassar is that:
(1) I would not trust him to conform to local rules or norms. He also still seems to me to precipitate psychotic episodes in his interlocutors surprisingly often, to come closer to advocating physical violence than I would like (e.g. this tweet), and to have conversational patterns that often disorient his interlocutors and leave them believing different things while talking to Michael than they do a bit later.
(2) I don’t have overall advice that people ought to avoid Vassar, in spite of (1), because it now seems to me that he is trying to help himself and others toward truth, and I think we’re bottlenecked on that enough that I could easily imagine (2) overshadowing (1) for individuals who are in a robust place (e.g., who don’t feel like they are trapped or “have to” talk to a person or do a thing) and who are choosing who they want to talk to. (There were parts of Michael’s conversational patterns that I was interpreting as less truth-conducive a couple years ago than I am now. I now think that this was partly because I was overanchored on the (then-recent) example of Brent, as well as because I didn’t understand part of how he was doing it, but it is possible that it is current-me who is wrong.) (As one example of a consideration that moved me here: a friend of mine whose epistemics I trust, and who has known Vassar for a long time, said that she usually in the long-run ended up agreeing with her while-in-the-conversation self, and not with her after-she-left-the-conversation self.)
Also I was a bit discomfited when my previous LW comment was later cited by folks who weren’t all that LW-y in their conversational patterns as a general “denouncement” of Vassar, although I should probably have predicted this, so, that’s another reason I’d like to try to publicly state my revised views. To be clear, I do not currently wish to “denounce” Vassar, and I don’t even think that’s what I was trying to do last time, although I think the fault was mostly mine that some people read my previous comment as a general denouncement.
Also, to be clear, what I am saying here is just that on the strength of my own evidence (which is not all evidence), (1) and (2) seem true to me. I am not at all trying to be a court here, or to evaluate any objections anyone else may have to Vassar, or to claim that there are no valid objections someone else might have, or anything like that. Just to share my own revised impression from my own limited first-hand observations.
This is true, but I’m confused about how to relate to it. Part of Michael’s explicit strategy seems to be identifying people stuck in bad equilibria, and destabilizing them out of it. If I were to take an evolutionary-psychology steelman of what a psychotic episode is, a (highly uncertain) interpretation I might make is that a psychotic episode is an adaptation for escaping such equilibria, combined with a negative retrospective judgment of how that went. Alternatively, those people might be using psychedelics (which I believe are in fact effective for breaking people out of equilibria), and getting unlucky with the side effects. This is bad if it’s not paired with good judgment about which equilibria are good vs. bad ones (I don’t have much opinion on how good his judgment in this area is). But this seems like an important function, which not enough people are performing.
I decided to ignore Michael after our first in-person conversation, where he said I shouldn’t praise the Swiss healthcare system which I have lots of experience with, because MetaMed is the only working healthcare system in the world (and a roomful of rationalists nodded along to that, suggesting that I bet money against him or something).
This isn’t to single out Michael or the LW community. The world is full of people who spout nonsense confidently. Their ideas can deserve close attention from a few “angel investors”, but that doesn’t mean they deserve everyone’s attention by default, as Scott seems to say.
There’s a really good idea slipped into the above comment in passing; the purpose of this comment is to draw attention to it.
Scott’s article, like the earlier “epistemic tenure” one, implicitly assumes that we’re setting a single policy for whose ideas get taken how seriously. But it may make sense for some people or communities—these “angel investors”—to take seriously a wider range of ideas than the rest of us, even knowing that a lot of those ideas will turn out to be bad ones, in the hope that they can eventually identify which ones were actually any good and promote those more widely.
Taking the parallel a bit further, in business there are more levels of filtering than that. You have the crazy startups; then you have the angel investors; then you have the early-stage VCs; then you have the later VCs; and then you have, I dunno, all the world’s investors. There are actually two layers of filtering at each stage—investors may choose not to invest, and the company may fail despite the investment—but let’s leave that out for now. The equivalent in the marketplace of ideas would be a sort of hierarchy of credibility-donors: first of all you have individuals coming up with possibly-crackpot ideas, then some of them get traction in particular communities, then some of those come to the attention of Gladwell-style popularizers, and then some of the stuff they popularize actually makes it all the way into the general public’s awareness. At each stage it should be somewhat harder to get treated as credible. (But is it? I wouldn’t count on it. In particular, popularizers don’t have the best reputation for never latching onto bad ideas and making them sound more credible than they really are...)
(Perhaps the LW community itself should be an “angel investor”, but not necessarily.)
Are there further details to these accusations? The linked post from 8 months ago called for an apology in the absence of further details. If there are no further details, a new post with an apology is in order.
Um, good point. I am not sure which details you’re asking about, but I am probably happy to elaborate if you ask something more specific.
I hereby apologize for the role I played in Michael Vassar’s ostracism from the community, which AFAICT was both unjust and harmful to both the community and Michael. There’s more to say here, and I don’t yet know how to say it well. But the shortest version is that in the years leading up to my original comment Michael was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about Michael were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have Michael around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about Michael and related matters also helped drive this home for me.)
Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each other’s donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for the updates are the same as above.
It is nevertheless the case that Michael has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let Michael speak at SSC meetups), it is my opinion that Michael ought to be on the “include” list.
Thoughts/comments welcome, and probably helpful for getting to shared accurate pictures about any of what’s above.
There’s a bunch of different options for interacting with a person/group/information source:
- Read what they write
- Go to talks by them and ask a question
- Talk with them in the comments on their blog
- Have 1-1 online conversations with them (calls/emails)
- Invite them into your home and be friends with them
Naturally there’s a difference between “telling your friend that they should ignore the CDC” and “not letting a CDC leadership staff member into your home for dinner”. I’m much more sympathetic to the latter.
Related: as a somewhat extreme example I’ve thought about in the past, in other situations with other people: I think that people who have committed crimes (e.g. theft) could be great and insightful contributors to open research problems, but might nonetheless belong in jail, and it might be important not to allow them into my home. Especially for insightful people with unique perspectives who were intellectually productive, I’d want to put in a lot of work to ensure they can bring their great contributions in ways that aren’t open to abuse, or likely to leave my friends substantially hurt along some key dimension.
–––
Thx for your comment. I don’t have a clear sense from your comment what you’re trying to suggest for Michael specifically — I’ve found it quite valuable to read his Twitter, but I’m less sure what you’d suggest beyond that. Actually, here’s what I suspect you’re saying. I think you’re saying that the following things seem worthwhile to you: have 1-1 convos with Michael, talk to Michael at events, reply to his emails and talk with him online. And then you’re not making an active recommendation about whether to: have Michael over for dinner, have Michael stay at your house, date Michael, live with Michael, lend Michael money, start a business with Michael, etc., and you’re aiming to trust people to figure that out for themselves.
It’s not a great guess, but it’s my best (quick) guess. Thoughts?
Hi Anna, since you’ve made this specific claim publicly (I assume it’s intended as a warning), would you mind commenting on this:
https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD
Specifically: it’s a given that there’s some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their worldviews). You seem to imply that with Michael it’s more than that (I think Vaniver alludes to this with the “creepy” comment).
In other words: is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael to be Quirrell+ because of his outlook?