As a guy on the internet, I mostly agree with this post, in the sense that I think the points you bring up warrant a ban. That said...
Suppose instead you are running a trading fund, and someone previously convicted of fraud sends you an idea for a new financial instrument. Here, it seems like you should be much more suspicious, not just of the idea but also of your ability to successfully notice the trap if there is one. It seems relevant now to check both whether the idea is true and whether or not it is manipulative. Rather than just performing a process that catches simple mistakes or omissions, one needs to perform a process that’s robust to active attempts to mislead the judging process.
...
I think the middle case is closest to the situation we’re in now, for reasons like those discussed in comments by jimrandomh and by Zack_M_Davis. Much of ialdabaoth’s output is claims about social dynamics and reasoning systems that seem, at least in part, designed to manipulate the reader, either by making them more vulnerable to predation or more likely to ignore him / otherwise give him room to operate.
Having read Affordance Widths and seen the ways it may be used to justify awful behavior, I don’t see the risks of these kinds of posts as much higher than those of Less Wrong and Rationalist-style writing in general. Less Wrong and Rationalist-style writing by nature talks in the abstract about a lot of really broad ideas that can have significant implications for how someone should make decisions in real life, and, unless you’re already a very skilled rationalist, you can botch those implications in really damaging ways (personally speaking, reading Eliezer’s meta-ethics sequence when I was 14 was a mistake, but scrupulosity in general can also be a minefield). Also, epistemic learned helplessness is a thing, and it’s especially a rationalist thing.
So, regarding the above justification:
from an epistemic point of view, Ialdabaoth’s post (Affordance Widths) does not strike me as intrinsically more harmful than other posts
from a manipulation point of view, Ialdabaoth’s post (Affordance Widths) does not strike me as intrinsically more manipulator-friendly than a lot of other posts
while Affordance Widths is more manipulator-friendly than a lot of other posts in the sense that at least one manipulator (Ialdabaoth) knows that it can be used for manipulation, I do not think this is very relevant, because:
[Epistemological Status: Maybe the rationalist community dynamic is unusual and I’m mis-gauging things here].
1. Using Affordance Widths to manipulate people into doing things for you is basically a fancy pseudo-rationalist way of manipulating people into doing things for you by making them feel guilty and responsible. This is such a common way for people to get manipulated that, from a pragmatic perspective, I’m skeptical that Affordance Widths allows manipulators to be more dangerous than they would have otherwise been just engaging in direct emotional manipulation.
2. Even the epistemics used in Affordance Widths already exist. People who have been disenfranchised in various ways pretty frequently use their own personal struggles to convey an implicit duty to those around them. In my circles, memetic immune systems have even built up against these sorts of things (e.g. the phrase “mental health is not an excuse”). Affordance Widths strikes me as epistemically superfluous in the context of the world’s current epistemic environment. Moreover, I could imagine good people who don’t do the things Ialdabaoth did writing a post very similar to Affordance Widths, and, if someone else had written this post, I really doubt that it would be banned.
3. As you note, posts that are both manipulative and epistemically useful (as Affordance Widths is) merit consideration if you believe that the post’s safety can be screened. Less Wrong has a uniquely intelligent community and a pretty well-regarded comments section, so my expectation would be that someone here will identify epistemological traps in posts in general. If this expectation is appropriate, then accepting this kind of post shouldn’t be considered risky. If it isn’t, y’all have bigger problems.
As a caveat, it’s of course possible for someone being manipulated to gloss over the comments. But the set of people who get manipulated if and only if they are subjected to epistemologically manipulative (rather than emotionally manipulative) traps, and who also gloss over the comments on an article, is probably smaller than the set of such people who would read the comments and update away from Ialdabaoth’s claims. Someone could get emotionally manipulated enough to gloss over the comments while somehow remaining resistant to any manipulation except the kind achieved through abstract epistemic posts, but this is a really specific trajectory compared to a lot of others.
Of course, despite my dislike of the analogy and its focus on potentially harmful Less Wrong posts, I still support the ban. It’s important to have an epistemic immune system, and:
#1. Less Wrong, and any community focused on self-analysis and improvement, requires a high trust environment
#2. Ialdabaoth has demonstrated clearly manipulative behaviors in real life, causing a lot of harm
#3. We cannot separate Ialdabaoth’s real life manipulative behavior from manipulative behavior on Less Wrong
#4. Ialdabaoth should therefore be banned from Less Wrong for the sake of maintaining a high-trust environment
While posts like Affordance Widths are supporting evidence for #3, I think that, given the things Ialdabaoth has done, claim #3 should really be treated as the default assumption even without that kind of supporting evidence. And this is even more true in this particular context, where Less Wrong’s community apparently overlaps so much with his real-life community. We shouldn’t assume people compartmentalize their bad behavior to areas that don’t affect us, and we definitely shouldn’t assume it when the areas with and without the bad behavior aren’t even distinct.
I think instead of ‘high trust environment’ I would want a phrase like ‘intention to cooperate’ or ‘good faith’, where I expect that the other party is attempting to seek the truth and is engaging in behaviors that will help move us towards the truth, but could be deeply confused themselves and so it makes sense to ‘check their work’ and ensure that, if I’m confused on a particular point, they can explain it to me or otherwise demonstrate that it is correct instead of just trusting that they have it covered.
To be clear, I would not ban someone for only writing Affordance Widths; I think it is one element of a long series of deceit and manipulation, and it is the series that is most relevant to my impression.
I think instead of ‘high trust environment’ I would want a phrase like ‘intention to cooperate’ or ‘good faith’, where I expect that the other party is attempting to seek the truth and is engaging in behaviors that will help move us towards the truth
I agree—I think ‘intention to cooperate’ or ‘good faith’ are much more appropriate terms that get closer to the heart of things. To move towards the truth or improve yourself or what-have-you, you don’t necessarily need to trust people in general, but you do need to be willing to admit some forms of vulnerability (e.g. “I could be wrong about this” or “I could do better”). And the expectation or presence of adversarial manipulation (e.g. “I want you to do X for me, but you don’t want to, so I’ll make you feel wrong about what you want”) heavily disincentivizes these forms of vulnerability.
To be clear, I would not ban someone for only writing Affordance Widths; I think it is one element of a long series of deceit and manipulation, and it is the series that is most relevant to my impression.
Thanks for clarifying—and I think this point is also borne out by many statements in your original post. My response was motivated less by Affordance Widths specifically and more by the trading firm analogy. To me, the problem with Ialdabaoth isn’t that his output may pose epistemic risks (which would be analogous to the fraud-committer’s output posing legal risks); it’s that Ialdabaoth being in a good-faith community would hurt the community’s level of good faith.
This is an important distinction, because the former problem would confine Ialdabaoth’s manipulativeness to Bayesian updates about the epistemic risks of his output on Less Wrong (which I’m skeptical is that risky), while the latter problem considers Ialdabaoth’s general manipulativeness in the context of community impact (which I think is potentially more serious and definitely takes into account things like sex crimes, to address Zack_M_Davis’s comments a little bit).