I propose the term “Pascal’s abuser” (analogous to Pascal’s mugger) to describe something that seems to happen repeatedly in the rationalist community, to make it easier to notice the same pattern in the future.
A Pascal’s abuser is a person or organization whose reasoning goes: “Yeah, what we are doing here hurts people, but it is okay because there is a tiny chance it might contribute to saving the world.”
Examples: Black Lotus, Leverage Research… and maybe also MAPLE. But more importantly, other such people and organizations that I expect to appear in the rationality community in the future, which is why I want us to recognize the pattern faster.
Disagree: the problem with rationalist cults is that they’re wrong about [the effects of their behavior / whether what they’re doing is optimific], not that what they’re doing is Pascalian. (And empirically you should expect that subcommunities that really hurt their members do so for bad reasons.)
Yeah, I’m pretty familiar with Black Lotus and Leverage and don’t think they used Pascalian reasoning at all. The problem with Black Lotus was that it was designed to meet the psychological needs of its leader (and his needs were not healthy).
I’m less confident about what actually went wrong at Leverage, but pretty confident it wasn’t that.
“Yeah, what we are doing here hurts people, but it is okay because there is a tiny chance it might contribute to saving the world.”
In the case of Leverage and MAPLE, it is/was definitely not “we have a tiny chance of saving the world.” In both cases, the leader told me directly and explicitly that their basic template was working/growing, and they expected to succeed in transforming the world (in MAPLE’s case, with some caveat about potentially not accomplishing the goal fast enough).
I don’t think that the people drawn into these groups typically have an attitude of “there’s a slim probability that this could matter”. It’s usually more like “This is the most important project. Nothing else has a shot at succeeding.”
I don’t know about Black Lotus, since I never interacted with them much. From a distance, they also seem importantly different from the other two—less messianic?
Perhaps it is something the people a few circles out say? That is, someone who is just inside the group, justifying themselves to someone who is just outside the group?
Or what outsiders think when they look at the project and something feels fishy—but then they decide to ignore that feeling because there is a tiny chance that...
“Perhaps” as in “maybe some people say this, but I haven’t seen it”? Like is this just speculation that maybe people said stuff like this, or do you have particular reason to think that?
It’s pretty counter to how everyone I talked to at Leverage talked to me. Mostly they thought that Leverage was clearly massively more promising than any other project (and sometimes had an air of being kind of... smug about the things that they knew and understood?).
I can think of exceptions, people who I would guess might have had a different attitude, but even they would not have said “but there’s a small chance it will have a huge impact.” And I don’t see why they would justify themselves to outsiders that way, since that’s a super lame defense of your work.
I have a model where the followers of an ideology with a charismatic leader often say different or less coherent things than the founder, partly because the founder had to somehow be more impressive than everyone else in order to get their power over people (e.g. most Marxists relative to Marx; most Christians relative to the Pope). So in this case it wouldn’t surprise me if they sometimes used Pascalian arguments while they were not used by the founder of the group.
But yes, I was barely acquainted with Leverage when it existed, and didn’t know what Black Lotus was until after it exploded, so I am not reporting from experience.
Is this kind of like being married to an abusive cop? “My partner is abusive, but the work they are doing is so important and stressful, I should just accept the abuse and be grateful that I can support the cause by supporting this person.”
We know that sports teams that engage in hazing underperform teams that do not. Fear-based leadership also doesn’t produce results.
Therefore, the more important the work being done, the more pressing the need to resolve the organizational psychology issues. Toxic organizational dynamics and individual misbehavior are, unfortunately, luxuries we cannot afford when saving the world.
Would you consider nearly every startup that makes people work 80-hour days to belong to this category?

Startups typically don’t exploit naive altruistic people. They pay their employees with money, or at least with a promise of money (i.e. shares). If they tried to recruit in the rationalist community, most people would know what to expect (except for the part where the shares predictably turn out to be worthless because of some clever legal trick).
FTX is the only potentially relevant example I know, and I think even they paid their employees nicely, i.e. the abuse was proportional to the salary.
Startups quite often pay less than the person might make working elsewhere and justify that with the promise of equity. The founder then tells the employees a story about its likely value that oversells the chance that the equity is worth a lot.

Which person from Leverage Research do you think defended Leverage Research on that ground? Which for MAPLE?
It seems like this is a problem when the argument is wrong or there’s no support for the claim, not a problem with the argument structure. Right? I’m not familiar with each of those orgs’ arguments in detail, and I assume you mean to make a point that doesn’t require that familiarity.
Pascal’s mugger is a mugger because they don’t provide any support for their claim, so they could be (and probably are) just making stuff up. I assume that’s the problem with the orgs and claims you’re labeling Pascal’s abusers?
If what they’re doing hurts people now but provides a noticeable chance of saving the world, and there’s no real downside to other world-saving efforts, then it seems like the validity comes down to either the particulars or your utilitarian vs virtue ethics preferences.
Right?