I’m going to flip this comment around on you, so you can understand how I’m seeing it, and why I fail to see why the point you’re trying to make matters.
So, rationality largely isn’t actually about thinking clearly (which requires having correct information about what things actually work, e.g., well-calibrated priors, and not adding noise to conversations about these); it’s an aesthetic identity movement around HPMoR as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalist identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of rationality, rationality-as-it-is ought to be replaced with something very, very different.
One could nitpick about how HPMoR has done much more to save lives, through AI alignment, than GiveWell has ever done through developing-world interventions, and I’ll go share that info, attributed to Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we’ll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values. So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact revealing something unique about EA that isn’t true of other human communities, you’ve argued for too much.
Also, in this comment I indicated my awareness of what was once known as the “Vassar crowd”, which I recall you were a part of:
Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a “Vassar crowd”, a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, and Alyssa Vance, among others? Should I hold you and Michael Arc individually responsible for the things you’ve done since then that have caused you to have mixed reputations, or should I castigate all of you and Michael’s friends in the bunch too, along with as much of the rationality community as I feel like? After all, you’re all friends, and you decided to make the effort together, even though you each made your own individual contributions.
While we’re here, would you mind explaining to me what your beef was with the EA community, as misleading in myriad ways to the point of menacing x-risk reduction efforts and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, to any other group of people that does the same? What makes EA special?
So, rationality largely isn’t actually about thinking clearly [...] it’s an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of rationality, rationality-as-it-is ought to be replaced with something very, very different.
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental distinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.

I thought the “rationalist” æsthetic-identity-movement’s marketing literature expressed this very poetically—
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
Regarding Ben’s criticisms of EA: while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or with how he argues for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben often doesn’t respond to counter-arguments, as he seems to be under the impression that when a counter-argument disagrees with him in a way he doesn’t himself agree with, his interlocutors are persistently acting in bad faith. I hadn’t interacted directly with Ben much for a while until he wrote the OP this week. So, I haven’t been following closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find confusing some of his impressions that the EAs he discusses things with are acting in bad faith. At least, I don’t find them a compelling account of people’s real motivations in discourse.
I haven’t been following closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is.

I think the most relevant post by Ben here is “Bad Intent Is a Disposition, Not a Feeling”. (Highly recommended!)
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), but we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on his claim that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with the original post. I looked up both the title of that post and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date of that post onward, so it doesn’t appear he has publicly updated his thoughts on these topics. That was over two years ago.
The second post on the topic was more abstract and figurative, using analogy and metaphor to get its conclusion across. So, I didn’t totally understand the relevance of the second post to the first, even though the second was intended as a sequel. It seemed to me the crux of resolving the problem was:
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that this just isn’t the case. Honesty of that kind can lend itself to better modes of public discourse, and can move communities to states of discourse much different from where the EA and rationality communities are currently at. One problem is that I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which amount to hopping onto different platforms with very different discourse norms. That would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms wholesale, replacing their own. That seems extremely unlikely to happen.
Part of the problem is that Benquo seems to construe ‘bad faith’ with an overly reductionistic definition. This was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. That makes it hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn’t a high priority for me, I’m not sure I will get around to it. However, there is enough material in Benquo’s posts, and in the discussion in the comments, for me to work with to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community largely disagrees with the OP for the same reasons I do. Based on some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of my disagreements with how some people are thinking, whether critically or not, about the EA and rationality communities.

I’ll take a look at these links. Thanks.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were critical of the rationality community, though somewhat less so.
were partly at odds with the bulk of the rationality community, which was not as hostile to EA as they thought it should have been.
Maybe you meet those qualifications, but as I understand it, the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume the different people were primarily nudged by Vassar. This also precipitated Alyssa Vance’s Long-Term World Improvement mailing list.
It doesn’t seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and from the outside it doesn’t appear to be as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people still sustaining the effort to criticize EA the way the others were before.
So while I appreciate the disclosure, I don’t know if my previous comment was precise enough: as far as I understand, the Vassar Crowd was a limited clique that manifested much more in the past than in the present.
The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)

Geeks, Mops, Sociopaths happened to the rationality community, not just EA.
So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact revealing something unique about EA that isn’t true of other human communities, you’ve argued for too much.
I don’t think it’s unique! I think it’s extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!
I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.
It’s possible that I should feel more moral pressure than I currently do to actively (not just, as a comment on other people’s posts) say what’s wrong about the current state of the rationality community publicly. I’ve already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this)
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the ‘aesthetic identity movement’ model might be lacking. If a theory makes the same predictions everywhere, it’s useless. I feel like the ‘aesthetic identity movement’ model might be one of those theories that is too general and not specific enough for me to understand what I’m supposed to take away from its use. For example:
So, the United States of America largely isn’t actually about being a land of freedom to which the world’s people may flock (which requires having everyone’s civil liberties consistently upheld, e.g., robust support for the rule of law, and not adding noise to conversations about these), it’s an aesthetic identity movement around the Founding Fathers as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of America, America ought to be replaced with something very, very different.
Maybe, if all kinds of things are aesthetic identity movements instead of being what they actually say they are, I wouldn’t be as confused if I knew what I’m supposed to do with this information.
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.
It’s possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.
It’s possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn’t just doing signalling, etc.
Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.
Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)
(Note: EA isn’t only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc.; this is an important distinction.)
It seems like the concept of “aesthetic identity movement” I’m using hasn’t been communicated to you well; if you want to see where I’m coming from in more detail, read the following.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
I don’t think you didn’t think that. My question was to challenge you to answer why you, and the others (if you would feel comfortable speaking to their perspectives), focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand that they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).
I asked because it’s frustrating to me how inconsistent it is with your own efforts here to put way more pressure on EA than on rationality. I’m guessing part of the reason for your trepidation about criticizing the rationality community is that you feel a sense of how much disruption it could cause, and how much risk there is that nothing would change either way. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking it was because you are socially closer to the rationality community that you wouldn’t be as willing to criticize them.
I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to analyze the intellectual failure modes of rationality, I don’t feel much of a moral urge anymore to correct its social failure modes. So I lack the motivation to think through whether it would be “good” or not for you to do it.
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don’t do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
My question was to challenge you to answer why you, and the others (if you would feel comfortable speaking to their perspectives), focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand that they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).
It doesn’t seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.
If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending “advice” about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.
I’m going to flip this comment on you, so you can understand how I’m seeing it, and thus I fail to see why the point you’re trying to make matters.
One could nitpick about how HPMoR has done much more to save a number of lives through AI alignment than Givewell has ever done through developing-world interventions, and I’ll go share that info as from Jessica Taylor in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we’ll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values. So, in stating as though a fact about EA your personal impression of it based on Sarah’s blog post as if that means something unique about EA that isn’t true about other human communities, you’ve argued for too much.
Also, in this comment I indicated my awareness of what was once known as the “Vassar crowd”, which I recall you were a part of:
While we’re here, would you mind explaining with me what all of your beef was with the EA community as misleading in myriad ways to the point of menacing x-risk reduction efforts, and other pursuits of what is true and good, without applying the same pressure to parts of the rationality community that pose the same threat, or for that matter, any other group of people who does the same? What makes EA special?
This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.
Specifically: if you fail to make a hard mental disinction between “rationality”-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called “rationalists” about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance (“Am I crazy? Are they crazy? What’s going on?? Auuuuuugh”) in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn’t.
But … it shouldn’t. Sure, self-identification with the “rationalist” brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that’s an empirical question that you can’t answer by taking the brand name literally.
I thought the “rationalist” æsthetic-identity-movement’s marketing literature expressed this very poetically—
Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected “the community” to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander’s comments in “The Ideology Is Not the Movement” (“[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture”).
This isn’t to say that the so-called “rationalist” community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don’t see any better community to run away to—at the moment. (Though I’m keeping an eye on the Quillette people.) But if attempts to analyze how we’re collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!
(Full disclosure: uh, I guess I would also count as part of the “Vassar crowd” these days??)
For Ben’s criticisms of EA, it’s my opinion that while I agree with many of his conclusions, I don’t agree with some of the strongest conclusions he reaches, or how he makes the arguments for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben doesn’t respond to counter-arguments, as he often seems under the impression a counter-argument disagrees with Ben in a way he doesn’t himself agree with, his interlocutors are persistently acting in bad faith. I haven’t interacted directly with Ben myself as much for a while until he wrote the OP this week. So, I haven’t been following as closely how Ben construes ‘bad faith’, and I haven’t taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find some of his feelings of EAs he discusses with as acting in bad faith confusing. At least I don’t find them a compelling account of people’s real motivations in discourse.
I think the most relevant post by Ben here is “Bad Intent Is a Disposition, Not a Feeling”. (Highly recommended!)
Recently I’ve often found myself wishing for better (widely-understood) terminology for phenomena that it’s otherwise tempting to call “bad faith”, “intellectual dishonesty”, &c. I think it’s pretty rare for people to be consciously, deliberately lying, but motivated bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that’s worth distinguishing from “innocent” mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
If our discourse norms require us to “assume good faith”, but there’s an important sense in which that assumption isn’t true (because motivated misunderstandings resist correction in a way that simple mistakes don’t), but we can’t talk about the ways it isn’t true without violating the discourse norm, then that’s actually a pretty serious problem for our collective sanity!
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one “Bad Intent Is a Disposition, Not a Feeling”, depended on the claim he made that mens rea is not a real thing. As was pointed out in comments that he himself acknowledged those comments made some good points that would cause him to rethink the theme he was trying to impart with his original post. I looked up both the title of that post, and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There weren’t results from the date of publication of that post onward on either of those topics on his blog, so it doesn’t appear he has publicly updated his thoughts on these topics. That was over 2 years ago.
The second post on the topic was more abstract and figurative, and was using some analogy and metaphor to get its conclusion across. So, I didn’t totally understand the relevance of all that in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:
Benquo’s conclusion that for public discourse and social epistemology, at least in his experience, that to be honest about your sense someone else is arguing in bad faith is Officially Not OK because it is always construed as a grave and inappropriate personal attack. So, resolving the issue appears socially or practically impossible. My experience is that just isn’t the case. It can lend itself to better modes of public discourse. One thing is it can move communities to states of discourse that are much different than where the EA and rationality communities currently are at. One problem is I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would be just hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms, and replace their own with them, wholesale. That seems extremely unlikely to happen.
Part of the problem is that it seems how Benquo construes ‘bad faith’ is as having an overly reductionistic definition. This was what was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. So, that makes it hard for me to accept the frame Benquo bases his eventual conclusions off of. Another problem for me is the inferential distance gap between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn’t a super high priority for me, I’m not sure that I will get around to it. However, there is enough material in Benquo’s posts, and the discussion in the comments, that I can work with it to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community in large part disagrees with the OP for the same reasons I do. I think based off some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
I’ll take a look at these links. Thanks.
I understand the “Vassar Crowd” to be a group of Michael Vassar’s friends who:
were highly critical of EA.
were critical of somewhat less so of the rationality community.
were partly at odds with the bulk of the rationality community in not being as hostile to EA as they thought they should have been.
Maybe you meet those qualifications, but as I understand it the “Vassar Crowd” started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was part of a semi-coordinated effort. While I wouldn’t posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume different people were primarily nudged by Vassar. This also precipitated of Alyssa Vance’s Long-Term World Improvement mailing list.
It doesn’t seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and it doesn’t appear from the outside it is as cohesive anymore, I assume in large part because of Vassar’s decreased participation in the community. Ben seems to be one of the only people who is sustaining the effort to criticize EA as the others were before.
So while I appreciate the disclosure, I don’t know if in my previous comment was precise enough, as far as I understand it was that the Vassar Crowd was more a limited clique that was manifested much more in the past than present.
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn’t think that?)
Geeks, Mops, Sociopaths happened to the rationality community, not just EA.
I don’t think it’s unique! I think it’s extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!
I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.
It’s possible that I should feel more moral pressure than I currently do to actively (not just, as a comment on other people’s posts) say what’s wrong about the current state of the rationality community publicly. I’ve already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this)
Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the ‘aesthetic identity movement’ model might be lacking. If a theory makes the same predictions everywhere, it’s useless. I feel like the ‘aesthetic identity movement’ model might be one of those theories that is too general and not specific enough for me to understand what I’m supposed to take away from its use. For example:
Maybe if all kinds of things are aesthetic identity movements instead of being what they actually say they are, I wouldn’t be as confused, if I knew what I am supposed to do with this information.
An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.
It’s possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.
It’s possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn’t just doing signalling, etc.
Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.
Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)
(Note, EA isn’t only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc.; this is an important distinction.)
It seems like the concept of “aesthetic identity movement” I’m using hasn’t been communicated to you well; if you want to see where I’m coming from in more detail, read the following.
Geeks, MOPs, and sociopaths
Identity and its Discontents
Naming the Nameless
On Drama
Optimizing for Stories (vs. Optimizing Reality)
Excerpts from a larger discussion about simulacra
(no need to read all of these if it doesn’t seem interesting, of course)
I will take a look at them. Thanks.
I don’t think you didn’t think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).
I asked because it’s frustrating to me how inconsistent it is with your own efforts here to put so much more pressure on EA than on rationality. I’m guessing part of the reason for your trepidation about criticizing the rationality community is that you feel a sense of how much disruption it could cause, and of how much risk there is that nothing would change anyway. The same thing has happened when, not so much you, but some of your friends have criticized EA in the past. I was thinking that, because you are socially closer to the rationality community, you wouldn’t be as willing to criticize them.
I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to analyze the intellectual failure modes of rationality, I don’t feel much of a moral urge anymore to correct its social failure modes. Consequently, I lack the motivation to think through whether it would be “good” or not for you to do it.
I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don’t do that much public criticism of EA either, so this seems like a strange complaint about me regardless)
Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.
It doesn’t seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.
If you or anyone else want to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending “advice” about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.