Alright. Well, here’s one starting point, I guess. (You can also Cmd-F in the comments on that post for “insult” and “social attack”; I think that should get you to most of the relevant subthreads.)
(There are many other examples, but this will do for now.)
I spent a few minutes trying to do so and feel overwhelmed. I’m not motivated to continue.
Edit:
If you wouldn’t mind, I’d appreciate a concise summary. No worries if you’d prefer not to though.
In particular, I’m wondering why you might think that your approach to commenting leads to more winning than the more gentle approach I referred to.
Is it something you enjoy? That brings you happiness? More than other hobbies or sources of entertainment? I suspect not.
Are your motivations altruistic? Maybe it’s that despite not being fun to you personally, you feel you are doing the community a service by defending certain norms. This seems somewhat plausible to me but also not too likely.
My best guess is that the approach to commenting you have taken is not actually a thoughtful strategy that you expect will lead to the most winning, but instead is the result of being unable to resist the impulse of someone being wrong on the internet. (I say this knowing that you are the type of person who appreciates candidness.)
Replying to the added-by-edit parts of the parent comment.
My approach to commenting is the correct one.
(Or so I claim! Obviously, others disagree. But you asked about my motivations, and that’s the answer.)
Part of the answer to your question is that the “gentle approach” you refer to is not real. It’s a fantasy. In reality, there is my approach, and there are other approaches which don’t accomplish the same things. There is no such thing as “saying all the same things that Said says, but more nicely, and without any downsides”. Such an option simply does not exist.

Earlier, you wrote:
I strongly suspect that you are capable of writing in such a way where people don’t perceive this hostile-flavored subtext. A softer, gentler type of writing.
Well, setting aside the question of whether I can write in a “softer, gentler” way, it’s clear enough that many other people can write like that, and often do. One can see many examples of such writing on the EA Forum, for instance.
Of course, the EA Forum is also almost entirely useless as a place to have any kind of serious, direct discussion of difficult questions. The cause of this is, largely, a very strong, and zealously moderator-enforced, norm for precisely that sort of “softer, gentler” writing.
Regardless of whether I can write like that, I certainly won’t. That would be wrong, and bad—for me, and for any intellectual community of which I am a member. To a first approximation, no one should ever write like that, on a forum like LessWrong.
My best guess is that the approach to commenting you have taken is not actually a thoughtful strategy that you expect will lead to the most winning, but instead is the result of being unable to resist the impulse of someone being wrong on the internet. (I say this knowing that you are the type of person who appreciates candidness.)
Indeed I do appreciate candidness.
As far as “the most winning” goes, I can’t speak to that. But the “softer, gentler” path is the path of losing—of that, I am very sure.
As far as the xkcd comic goes… well. I must tell you that, while of course I cannot prove this, I suspect that that single comic is responsible for a large chunk of why the Internet, and by extension the world, is in the shape that it’s in, these days.[1] (Some commentary on my own views on the subject of arguing with people who are wrong on the internet can be found in this comment.)
I am not sure if it’s worse than the one about free speech as far as long-term harm goes, but xkcd #386 is at least a strong contender for the title of “most destructive webcomic strip ever posted”.
Thank you for the response.

Given your beliefs, I understand why you won’t apply this “softer, gentler” writing style. You would find it off-putting and you think it would do harm to the community.
There is something that I don’t understand and would like to understand though. Simplifying, we can say that some people enjoy your engagement style and others don’t. What I don’t understand is why you choose to engage with people who clearly don’t enjoy your engagement style.
I suspect that your thinking is that the responsibility falls on them to disengage if they so desire. But clearly some people struggle with that (and I would pose the same question to them as well: why continue engaging). So from your perspective, if you’re aiming to win, why continue to engage with such people?
Does it make you happy? Does it make them happy? Is it an altruistic attempt to enforce community norms?
Or is it just that duty calls and you are not in fact making a conscious attempt to win? I suspect this is what is happening.
(And I apologize if this is too “gentle”, but hey, zooming out, being agent-y, and thinking strategically about whether what you’re doing is the best way to win is not easy. I certainly fail at it the large majority of the time. I think pretty much everyone does.)
Does it make you happy? Does it make them happy? Is it an altruistic attempt to enforce community norms?
Or is it just that duty calls and you are not in fact making a conscious attempt to win? I suspect this is what is happening.
None of the above.
The answer is that thinking of commenting on a public discussion forum simply as “engaging with” some specific single person is just entirely the wrong model.
It’s not like I’m having a private conversation with someone, they say “Um I don’t think I want to talk to you anymore” and run away, and I chase after them, yelling “Come back here and respond to my critique! You’re not getting away from me that easily! I have several more points to make!!”, while my hapless victim frantically looks for an alley to hide in.
LessWrong is a public discussion forum. The point of commenting is for the benefit of everyone—yourself, the person you’re replying to, any other participants in the discussion, any readers of the discussion, any future readers of the discussion…
Frankly, the view that someone’s finding your comments aversive is a general reason not to reply to their comments or post under their posts strikes me as bizarre. Why would someone who only considered the impact of their comments on the specific user they were replying to even bother commenting on LessWrong? It seems like a monstrously inefficient use of one’s time and energy…

EDIT: See this comment thread for more on this subject.
Let me make this more concrete. Suppose you are going back and forth with a single user in a comments thread—call them Bob—and there have been nine exchanges. Bob wrote the ninth comment. You get the sense that Bob is finding the conversation unpleasant, but he continues to respond anyway.
You have the option of just not responding. Not writing that tenth comment. Not continuing to respond in that comment thread at all. (I don’t think you’d dispute this.)
And so my question is: why write the tenth comment? You point out that, as a public discussion forum, when you write that tenth comment in response to Bob, it is not just for Bob, but for anyone who might read or end up contributing to the conversation.
But that observation itself is, I think you’d agree, insufficient to explain why it’d make sense to write the tenth comment. To the extent your goals are altruistic, you’d have to posit that this tenth comment is having a net benefit to the general public. Is that your position? That despite potentially causing harm to Bob, it is worth writing the tenth comment because you expect there to be enough benefit to the general public?
And so my question is: why write the tenth comment?
Why not write the tenth comment…? I mean, presumably, in this scenario, I have some reason why I am posting any comments on this hypothetical thread at all, right? Some argument that I am making, some point that I am explaining, some confusion that I am attempting to correct (whether that means “a confusion on Bob’s part, which I am correcting by explaining whatever it is”, or “a confusion on my part, which I think that the discussion with Bob may help me resolve”), something I am trying to learn or understand, etc. Well, why should that reason not still apply to the tenth comment, just as it did to the first…?
To the extent your goals are altruistic, you’d have to posit that this tenth comment is having a net benefit to the general public. Is that your position? That despite potentially causing harm to Bob, it is worth writing the tenth comment because you expect there to be enough benefit to the general public?
I don’t accept this “causing harm to Bob” stipulation. It’s basically impossible for that to happen (excepting certain scenarios such as “I post Bob’s private contact info” or “I reveal an important secret of Bob’s” or something like that; presumably, this is not what we’re talking about).
That aside: yes, the purpose of participating in a public discussion on a public discussion forum is (or should be!) public benefit. That is how I think about commenting on LessWrong, at any rate.
I will again note that I find it perplexing to have to explain this. The alternative view (where one views a discussion in the comments on a LessWrong post as merely an interaction between two individuals, with no greater import or impact) seems nigh-incomprehensible to me.
Thank you for clarifying that your motivation in writing the tenth comment is to altruistically benefit the general public at large. That you are making a conscious attempt to win in this scenario by writing the tenth comment.
I suspect that this is belief in belief. Suppose that we were able to measure the impact of your tenth comment. If someone offered to bet you a large sum of money, at 1-to-1 odds, that this tenth comment would have a net negative overall impact on the general public, I don’t think you would take the other side of that bet, because I don’t think you actually predict the tenth comment to have a net positive impact.
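(To spell out the betting logic I have in mind — this is just a toy sketch with hypothetical numbers, not a claim about your actual credences:)

```python
# Toy expected-value calculation for the 1-to-1 bet described above.
# The probabilities and stake are made up for illustration.

def ev_of_betting_net_positive(p_net_positive: float, stake: float) -> float:
    """Expected value of staking `stake` at even odds on "the tenth
    comment has a net positive impact": win `stake` with probability
    p_net_positive, lose `stake` otherwise."""
    return p_net_positive * stake - (1 - p_net_positive) * stake

# A risk-neutral bettor takes this side only if p > 0.5, i.e. only if
# they genuinely expect the comment to be net positive.
for p in (0.3, 0.5, 0.7):
    print(f"p(net positive) = {p:.1f}: EV = {ev_of_betting_net_positive(p, 1000):+.0f}")
```

If you wouldn’t take that side of the bet at even odds, that suggests your real credence in the comment being net positive is below one half.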
Well, why should that reason not still apply to the tenth comment, just as it did to the first…?
Because you have more information after the first nine comments. You have reason to believe that Bob finds the discussion to be unpleasant, that you are unlikely to update his beliefs, and that he is unlikely to update yours.
I don’t accept this “causing harm to Bob” stipulation.
Hm. “Cause” might be oversimplifying. In the situation I’m describing, let’s suppose that Bob is worse off in the world where you write the tenth comment than he is in the counterfactual world where you don’t. What word/phrase would you use to describe this?
I will again note that I find it perplexing to have to explain this. The alternative view (where one views a discussion in the comments on a LessWrong post as merely an interaction between two individuals, with no greater import or impact) seems nigh-incomprehensible to me.
My belief here is that impact beyond the two individuals varies. Sometimes lots of other people are following the conversation. Sometimes they get value out of it, sometimes it has a net negative impact on them. Sometimes few other people follow the conversation. Sometimes zero other people follow it.
I expect that you share this set of beliefs and that basically everyone else shares this set of beliefs.
Thank you for clarifying that your motivation in writing the tenth comment is to altruistically benefit the general public at large. That you are making a conscious attempt to win in this scenario by writing the tenth comment.
This is not an accurate summary.
It seems like you’re trying very hard to twist my words so as to make my views fit into your framework. But they don’t.
Well, why should that reason not still apply to the tenth comment, just as it did to the first…?
Because you have more information after the first nine comments. You have reason to believe that Bob finds the discussion to be unpleasant, that you are unlikely to update his beliefs, and that he is unlikely to update yours.
None of that is particularly relevant to the considerations, described above, that affect my decision to write a comment.
I don’t accept this “causing harm to Bob” stipulation.
Hm. “Cause” might be oversimplifying. In the situation I’m describing, let’s suppose that Bob is worse off in the world where you write the tenth comment than he is in the counterfactual world where you don’t. What word/phrase would you use to describe this?
I would describe it like you just did there, I guess, if I were inclined to describe it at all. But I generally wouldn’t be. (I say more about this in the thread I linked earlier.)
I will again note that I find it perplexing to have to explain this. The alternative view (where one views a discussion in the comments on a LessWrong post as merely an interaction between two individuals, with no greater import or impact) seems nigh-incomprehensible to me.
My belief here is that impact beyond the two individuals varies. Sometimes lots of other people are following the conversation. Sometimes they get value out of it, sometimes it has a net negative impact on them. Sometimes few other people follow the conversation. Sometimes zero other people follow it.
This seems to be some combination of “true but basically irrelevant” (of course more people read some comment threads than others, but so what?) and “basically not true” (a net negative impact? seems unlikely unless I lie or otherwise behave unethically, which I do not). None of this has any bearing on the fact that comments on a public forum aren’t just written for one person.
I usually find that I get negative value out of “said posts many comments drilling into an author to get a specific concern resolved”. usually, if I get value from a Said comment thread, it’s one where said leaves quickly, either dissatisfied or satisfied; when Said makes many comments, it feels more like polluting the commons by inducing compute for me to figure out whether the thread is worth reading (and I usually don’t think so). if I were going to make one change to how said comments, it’s to finish threads with “okay, well, I’m done then” almost all the time after only a few comments.
(if I get to make two, the second would be to delete the part of his principles that is totalizing, that asserts that his principles are correct and should be applied to everyone until proven otherwise, and replace it with a relaxation of that belief into an ensemble of his-choice-in-0.0001<x<0.9999-prior-probability context-specific “principle is applicable?” models, and thus can update away from the principles ever, rather than assuming anyone who isn’t following the principles is necessarily in error.)
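(a toy formalization of that relaxation, in python — the contexts and counts are entirely made up, the point is just that “principle applies here” is a per-context probability that can update, never a fixed certainty:)

```python
# toy sketch: model "this principle is applicable here" as a per-context
# beta-bernoulli belief rather than a universal rule. contexts and
# observation counts are invented for illustration.

from dataclasses import dataclass

@dataclass
class ApplicabilityBelief:
    alpha: float = 1.0  # pseudo-count of "applying the principle worked"
    beta: float = 1.0   # pseudo-count of "applying the principle failed"

    def p_applies(self) -> float:
        return self.alpha / (self.alpha + self.beta)

    def update(self, worked: bool) -> None:
        if worked:
            self.alpha += 1.0
        else:
            self.beta += 1.0

beliefs = {
    "clarifying a technical claim": ApplicabilityBelief(),
    "emotional support thread": ApplicabilityBelief(),
}

# evidence moves each context's probability, but it never hits exactly
# 0 or 1 -- so updating away from the principle always stays possible.
for _ in range(8):
    beliefs["emotional support thread"].update(worked=False)

for context, belief in beliefs.items():
    print(f"{context}: p(principle applies) = {belief.p_applies():.2f}")
```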
(if I get to make two, the second would be to delete the part of his principles that is totalizing, that asserts that his principles are correct and should be applied to everyone until proven otherwise, and replace it with a relaxation of that belief into an ensemble of his-choice-in-0.0001<x<0.9999-prior-probability context-specific “principle is applicable?” models, and thus can update away from the principles ever, rather than assuming anyone who isn’t following the principles is necessarily in error.)
What specific practical difference do you envision between the thing that you’re describing as what you want me to believe, and the thing that you think I currently believe? Like, what actual, concrete things do you imagine I would do differently, if your wish came true?
(EDIT: I ask this because I do not recognize, in your description, anything that seems like it accurately describes my beliefs. But maybe I’m misunderstanding you—hence the question.)
well, in this example, you are applying a pattern of “What specific practical difference do you envision”, and so I would consider you to be putting high probability on that being a good question. I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward shannon reasons, and my intuitive model of human brains predicts you have it already. and so in the original claim, I was saying that you seem to have frameworks that prescribe behaviors like “what practical difference”, which are things like—at a guess—“if a suggestion isn’t specific enough to be sure I’ve interpreted correctly, ask for clarification”. I do that sometimes, but you do it more. and there are many more things like this, the more general pattern is my point.
anyway gonna follow my own instructions and cut this off here. if you aren’t able to extract useful bits from it, such as by guessing how I’d have answered if we kept going, then oh well.
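(the shannon point, as a toy calculation — message distributions entirely made up: the expected length of my messages to you is the cross-entropy of your model of me, so a better internal autocomplete means fewer wasted bits:)

```python
import math

# toy illustration: expected bits to encode messages drawn from the
# true distribution p, using a code optimized for a model q, is the
# cross-entropy H(p, q). all distributions here are invented.

def cross_entropy_bits(p: dict, q: dict) -> float:
    return -sum(p[msg] * math.log2(q[msg]) for msg in p)

p_actual = {"clarify": 0.1, "guess": 0.6, "object": 0.3}    # what i actually send
q_good   = {"clarify": 0.15, "guess": 0.55, "object": 0.3}  # decent model of me
q_poor   = {"clarify": 0.6, "guess": 0.1, "object": 0.3}    # poor model of me

print(f"bits/message, good model: {cross_entropy_bits(p_actual, q_good):.2f}")
print(f"bits/message, poor model: {cross_entropy_bits(p_actual, q_poor):.2f}")
```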
I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward shannon reasons, and my intuitive model of human brains predicts you have it already.
I see… well, maybe it will not surprise you to learn that, based on long and much-repeated experience, I consider that approach to be vastly inferior. In my experience, it is impossible for me to guess what anyone means, and also it is impossible for anyone else to guess what I mean. (Perhaps it is possible for other people to guess what other people mean, but what I have observed leads me to strongly doubt that, too.) Trying to do this impossible thing reliably leads to much more wasted communication. Asking is far, far superior.
In short, it is not that I haven’t considered doing things in the way that you suggest. I have considered it, and tried it, and had it tried on me, many times. My conclusion has been that it’s impossible to succeed and a very bad idea to try.
Hm. I’m realizing that I’ve been presuming that you are at least roughly consequentialist and are trying to take actions that lead to good consequences for affected parties. Maybe that’s not true though.
But if it is true, here is how I am thinking about it. We can divide affected parties into 1) you, 2) Bob, and 3) others. We’ve stipulated that you expect the tenth comment to negatively affect Bob. So then, I’d think that’d mean that your reason for posting the tenth comment is that you expect the desirable consequences for you and others to outweigh the undesirable consequences for Bob.
Furthermore, you’ve emphasized “public benefit” and the fact that this is a public forum. You also haven’t indicated that you have particularly selfish motives that would make you want to do things that benefit you at the expense of others, at least not to an unusual degree. So then, I presume that the expected benefit to the third group—others—is the bulk of your reason for posting the tenth comment.
It seems like you’re trying very hard to twist my words so as to make my views fit into your framework. But they don’t.
I’m sorry that it came across that way. I promise that I am not trying to twist your words. I just would like to understand where you are coming from.
I’m realizing that I’ve been presuming that you are at least roughly consequentialist and are trying to take actions that lead to good consequences for affected parties. Maybe that’s not true though.
“Roughly consequentialist” is a basically apt label. But as I have written a few times, act consequentialism is pretty obviously non-viable; the only reasonable way to be a consequentialist is rule consequentialism.
This makes the reasoning you outline in your second paragraph inapplicable and inappropriate.
I describe my views on this a bit in the thread I linked earlier. Some more relevant commentary can be found in this comment (Cmd-F “I say and write things” for the relevant ~3 paragraphs, although that entire comment thread is at least partly relevant to this discussion, as it talks about consequentialism and how to implement it, etc.).
Thanks for clarifying, Said. That is helpful.

I skimmed each of the threads you linked to.

One thing I want to note is that I hear you and agree with you about how these comments are taking place on a public forum and that we need to consider their effects beyond the commenter and the person being replied to.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect. I will outline some things about my model of the world and would love to hear about how it meshes with your model.
Components of my model:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily. (See the toy sketch after this list.)
This hypothetical exchange is likely to be perceived as hostile and adversarial.
When perceived that way, people tend to enter a soldier-like mindset.
People are rather bad at updating their beliefs when they have such a mindset.
Being in a soldier mindset might cause them to practice bad epistemics (I’m not sure how to phrase this), leading to them being weaker epistemically moving forward, not stronger.
I guess this doesn’t mesh well with the hypothetical I’ve outlined, but I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
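(To make the first component concrete — the numbers here are entirely made up — this is the kind of back-of-the-envelope weighing I have in mind:)

```python
# Toy model: the net impact of a comment is the effect on Bob plus the
# summed effect on the audience. Numbers are hypothetical.

def net_impact(effect_on_bob: float, audience_size: int,
               avg_effect_per_reader: float) -> float:
    return effect_on_bob + audience_size * avg_effect_per_reader

# Suppose the comment costs Bob 5 units and gives each reader 0.5 units.
# With a small audience, the effect on Bob dominates.
for n in (2, 20, 200):
    print(f"audience of {n:3d}: net impact = {net_impact(-5.0, n, 0.5):+.1f}")
```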
(I’m at the gym on my phone and was a little loose with my language and thinking.)
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act consequentialist approach and one should not attempt this. Instead they should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect.
… I get the sense that you haven’t been reading my comments at all.
I didn’t claim that I “expect [my] hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect”. I explicitly disclaimed the view (act consequentialism) which involves evaluation of this question at all. The last time you tried to summarize my view in this way, I specifically said that this is not the right summary. But now you’re just repeating that same thing again. What the heck?
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act consequentialist approach and one should not attempt this. Instead they should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
That having been said, I do want to comment on your outlined model a bit:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily.
First of all, “the effects on Bob” of my comments are Bob’s own business, not mine.
Let’s be clear about what it is that we’re not discussing. We’re not talking about “effects on Bob” that are of the form “other people read my comment and then do things that are bad for Bob” (which would happen if e.g. I doxxed Bob, or posted defamatory claims, etc.). We’re not talking about “effects on Bob” that come from the comment just existing, regardless of whether Bob ever read it (e.g., erroneous and misleading descriptions of Bob’s ideas). And we’re definitely not talking about some sort of “basilisk hack” where my comment hijacks Bob’s brain in some weird way and causes him to have seizures (perhaps due to some unfortunate font rendering bug).
No, the sorts of “effects” being referred to, here, are specifically and exclusively the effects, directly on Bob, of Bob reading my comments (and understanding them, and thinking about them, etc.), in the normal way that humans read ordinary text.
Well, for one thing, if Bob doesn’t want to experience those effects, he can just not read the comment. That’s a choice that Bob can make! “Don’t like, don’t read” applies more to some things than others… but it definitely applies to some obscure sub-sub-sub-thread of some discussion deep in the weeds of the comment section of a post on Less Wrong dot com.
But also, and more generally, each person is responsible for what effects reading some text has on them. (We are, again, not talking about some sort of weird sci-fi infohazard, but just normal reading of ordinary text written by humans.) Part of being an adult is that you take this sort of very basic responsibility for how things affect your feelings, and if you don’t like doing something, you stop doing it. Or not! Maybe you do it anyway, for any number of reasons. That’s your call! But the effects on you are your business, not anyone else’s.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
This hypothetical exchange is likely to be perceived as hostile and adversarial.
If that perception is correct, then it is right and proper to perceive it thus. If it is incorrect, then the one who mis-perceives it thus should endeavor to correct their error.
Being in a soldier mindset might cause them to practice bad epistemics (I’m not sure how to phrase this), leading to them being weaker epistemically moving forward, not stronger.
Maintaining good epistemics in the face of pressure is an important rationality skill—one which it benefits everyone to develop. And the “pressure” involved in arguing with some random nobody on LessWrong is one of the mildest, most consequence-free forms of pressure imaginable—the perfect situation for practicing those skills.
I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
If our hypothetical Bob thinks this, then he should have no problem at all disengaging from the discussion, and ignoring all further replies in the given thread. “I think that this is not important enough for me to continue spending my time on it, so thank you for the discussion thus far, but I won’t be replying further” is a very easy thing to say.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
Then perhaps these hypothetical readers should develop and practice the skill of “not continuing to waste their time reading things which they can see are a waste of their time”. “Somehow finding yourself doing something which you don’t endorse” is a general problem, and thus admits of general solutions. It is pointless to try to take responsibility for the dysfunctional internet-forum-reading habits of anyone who might ever read one’s comments on LessWrong.
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
I don’t have the strongest grasp of what rule consequentialism actually means. I’m also very prone to thinking about things in terms of expected value. I apologize if either of these things has led to confusion or misattribution.
My understanding of rule consequentialism is that you choose rules that you think will lead to the best consequences and then try to follow those rules. But it is also my understanding that it is often a little difficult to figure out what rules apply to what situations, and so in practice some object level thinking about expected consequences bleeds in.
It sounds like that is not the case here though. It sounds like here you have rules you are following that clearly apply to this decision to post the tenth comment and you are not thinking about expected consequences. Is that correct? If not, would you mind clarifying what is true?
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say that 1) “tough love” is good because despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say 2) that if someone is “soft” then their experiences don’t matter.
This isn’t a perfect analogy, but I think that it is gesturing at something that is important and in the ballpark of what we’re talking about. I’m having trouble putting my finger on it. Do you think there is something useful here, perhaps with some amendments? Would you like to comment on where you stand on (1) vs (2)?
I’ll also try to ask a more concrete question here. Are you saying a) that taking the effects on Bob into account will lead to less good consequences for society as a whole (i.e. Bob + everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying b) that the effects on Bob simply don’t matter at all?
It sounds like here you have rules you are following that clearly apply to this decision to post the tenth comment and you are not thinking about expected consequences. Is that correct? If not, would you mind clarifying what is true?
Sure, that’s basically true. Let’s say, provisionally, that this is a reasonable description.
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
I’m talking about stuff like this:
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important.
Now, is that the only rule that applies to situations like this (i.e., “writing comments on a discussion forum”)? No, of course not. Many other rules apply. It’s not really reasonable to expect me to enumerate the entirety of my moral and practical views in a comment.
As for why I’ve selected the rules… it’s because I think that they’re the right ones, of course.
Like, at this point we’ve moved into “list and explain all of your opinions about morality and also about everything else”. And, man, that is definitely a “we’re gonna be here all day or possibly all year or maybe twelve years” sort of conversation.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say that 1) “tough love” is good because despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say 2) that if someone is “soft” then their experiences don’t matter.
Well, yes, those are indeed two different things. But also, neither of them are things that I’ve said, so neither of them seems relevant…?
Do you think there is something useful here, perhaps with some amendments?
I think that you’re reading things into my comments that are not the things that I wrote in those comments. I’m not sure what the source of the confusion is.
I’ll also try to ask a more concrete question here. Are you saying a) that taking the effects on Bob into account will lead to less good consequences for society as a whole (i.e. Bob + everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying b) that the effects on Bob simply don’t matter at all?
Well, things don’t just “matter” in the abstract, they only matter to specific people. I’m sure that the effects on Bob of Bob reading my comments matter to Bob. This is fine! Indeed, it’s perfect: the effects matter to Bob, and Bob is the one who knows best what the effects are, and Bob is the one best capable of controlling the effects, so a policy of “the effects on Bob of Bob reading my comments are Bob’s to take care of” is absolutely ideal in every way.
And, yes indeed, it would be very bad for society as a whole (and relevant subsets thereof, such as “the participants in this discussion forum”) if we were to adopt the opposite policy. (Indeed, we can see that it is very bad for society, almost every time we do adopt the opposite policy.)
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule consequentialist reasoning here.
This is starting to feel satisfying, like I understand where you are coming from. I have a relatively strong curiosity here; I really do want to understand your perspective.
It sounds like there are rules such as “saying things that are true, relevant and at least somewhat important” that you strongly believe will lead to the best outcomes for society. These rules apply to the decision to post the tenth comment, and so you follow the rule and post the comment.
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule consequentialist reasoning here.
So to be clear, would it be accurate to say that you would choose (a) rather than (b) in my previous question? Perhaps with some amendments or caveats?
I’m trying to ask what you value.
And as for listing out your entire moral philosophy, I am certainly not asking for that. I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
So to be clear, would it be accurate to say that you would choose (a) rather than (b) in my previous question? Perhaps with some amendments or caveats?
Right.
I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
I guess I’d have to think about it. The “rules” that are relevant to this sort of situation have always seemed to me to be both very obvious and also continuous with general principles of how to live and act, so separating them out is not easy.
I think your comment here epitomizes what I value about your posting. I’m not here to feel good about myself, I want to learn stuff correctly the first time. If I want to be coddled I can go to my therapist.
I also think that there’s a belief in personal agency that we share. No one is required to read or comment, and I view even negative comments as a valuable gift of the writer’s time and energy.
I wish I could write as sharply and intelligently as you do. Most people waste too many words not saying anything with any redeeming factor except social signaling. (At least when I waste words I try to make it funny and interesting, which is not much better but intended as sort of an unspoken apology.)
I hope that, at least, you now have some idea of why I view such suggestions as “why can’t you just write more nicely” as something less than an obviously winning play.
EDIT: The parent comment was heavily edited after I posted this reply; originally it contained only the first paragraph. The text above is a reply to that. I will reply to the edited-in parts in a sibling comment.
(Sorry about the edit Said, and thank you for calling it out and stating your intent. I was going to DM you but figured you might not receive it due to some sort of moderation action, which is unfortunate. I figured there’d be a good chance that you’d see the edit and so I’d wait a few hours before replying to let you know I had edited the comment.)
Alright. Well, here’s one starting point, I guess. (You can also Cmd-F in the comments on that post for “insult” and “social attack”; I think that should get you to most of the relevant subthreads.)
(There are many other examples, but this will do for now.)
I spent a few minutes trying to do so and feel overwhelmed. I’m not motivated to continue.
Edit:
If you wouldn’t mind, I’d appreciate a concise summary. No worries if you’d prefer not to though.
In particular, I’m wondering why you might think that your approach to commenting leads to more winning than the more gentle approach I referred to.
Is it something you enjoy? That brings you happiness? More than other hobbies or sources of entertainment? I suspect not.
Are your motivations altruistic? Maybe it’s that despite being not fun to you personally, you feel you are doing the community a service by defending certain norms. This seems somewhat plausible to me but also not too likely.
My best guess is that the approach to commenting you have taken is not actually a thoughtful strategy that you expect will lead to the most winning, but instead is the result of being unable to resist the impulse of someone being wrong on the internet. (I say this knowing that you are the type of person who appreciates candidness.)
Replying to the added-by-edit parts of the parent comment.
My approach to commenting is the correct one.
(Or so I claim! Obviously, others disagree. But you asked about my motivations, and that’s the answer.)
Part of the answer to your question is the “gentle approach” you refer to is not real. It’s a fantasy. In reality, there is my approach, and there are other approaches which don’t accomplish the same things. There is no such thing as “saying all the same things that Said says, but more nicely, and without any downsides”. Such an option simply does not exist.
Earlier, you wrote:
Well, setting aside the question of whether I can write in a “softer, gentler” way, it’s clear enough that many other people can write like that, and often do. One can see many examples of such writing on the EA Forum, for instance.
Of course, the EA forum is also almost entirely useless as a place to have any kind of serious, direct discussion of difficult questions. The cause of this is, largely, a very strong, and zealously moderator-enforced, norm for precisely that sort of “softer, gentler” writing.
Regardless of whether I can write like that, I certainly won’t. That would be wrong, and bad—for me, and for any intellectual community of which I am a member. To a first approximation, no one should ever write like that, on a forum like LessWrong.
Indeed I do appreciate candidness.
As far as “the most winning” goes, I can’t speak to that. But the “softer, gentler” path is the path of losing—of that, I am very sure.
As far as the xkcd comic goes… well. I must tell you that, while of course I cannot prove this, I suspect that that single comic is responsible for a large chunk of why the Internet, and by extension the world, is in the shape that it’s in, these days.[1] (Some commentary on my own views on the subject of arguing with people who are wrong on the internet can be found in this comment.)
I am not sure if it’s worse than the one about free speech as far as long-term harm goes, but xkcd #386 at least a strong contender for the title of “most destructive webcomic strip ever posted”.
Thank you for the response.
Given your beliefs, I understand why you won’t apply this “softer, gentler” writing style. You would find it off-putting and you think it would do harm to the community.
There is something that I don’t understand and would like to understand though. Simplifying, we can say that some people enjoy your engagement style and others don’t. What I don’t understand is why you choose to engage with people who clearly don’t enjoy your engagement style.
I suspect that your thinking is that the responsibility falls on them to disengage if they so desire. But clearly some people struggle with that (and I would pose the same question to them as well: why continue engaging). So from your perspective, if you’re aiming to win, why continue to engage with such people?
Does it make you happy? Does it make them happy? Is it an altruistic attempt to enforce community norms?
Or is it just that duty calls and you are not in fact making a conscious attempt to win? I suspect this is what is happening.
(And I apologize if this is too “gentle”, but hey, zooming out, being agent-y, and thinking strategically about whether what you’re doing is the best way to win is not easy. I certainly fail at it the large majority of the time. I think pretty much everyone does.)
None of the above.
The answer is that thinking of commenting on a public discussion forum simply as “engaging with” some specific single person is just entirely the wrong model.
It’s not like I’m having a private conversation with someone, they say “Um I don’t think I want to talk to you anymore” and run away, and I chase after them, yelling “Come back here and respond to my critique! You’re not getting away from me that easily! I have several more points to make!!”, while my hapless victim frantically looks for an alley to hide in.
LessWrong is a public discussion forum. The point of commenting is for the benefit of everyone—yourself, the person you’re replying to, any other participants in the discussion, any readers of the discussion, any future readers of the discussion…
Frankly, the view where someone finding your comments aversive is a general reason to not reply to their comments or post under their posts, strikes me as bizarre. Why would someone who only considered the impact of their comments on the specific user they were replying to, even bother commenting on LessWrong? It seems like a monstrously inefficient use of one’s time and energy…
EDIT: See this comment thread for more on this subject.
Let me make this more concrete. Suppose you are going back and forth with a single user in a comments thread—call them Bob—and there have been nine exchanges. Bob wrote the ninth comment. You get the sense that Bob is finding the conversation unpleasant, but he continues to respond anyway.
You have the option of just not responding. Not writing that tenth comment. Not continuing to respond in that comment thread at all. (I don’t think you’d dispute this.)
And so my question is: why write the tenth comment? You point out that, as a public discussion forum, when you write that tenth comment in response to Bob, it is not just for Bob, but for anyone who might read or end up contributing to the conversation.
But that observation itself is, I think you’d agree, insufficient to explain why it’d make sense to write the tenth comment. To the extent your goals are altruistic, you’d have to posit that this tenth comment is having a net benefit to the general public. Is that your position? That despite potentially causing harm to Bob, it is worth writing the tenth comment because you expect there to be enough benefit to the general public?
Why not write the tenth comment…? I mean, presumably, in this scenario, I have some reason why I am posting any comments on this hypothetical thread at all, right? Some argument that I am making, some point that I am explaining, some confusion that I am attempting to correct (whether that means “a confusion on Bob’s part, which I am correcting by explaining whatever it is”, or “a confusion on my part, which I think that the discussion with Bob may help me resolve”), something I am trying to learn or understand, etc. Well, why should that reason not still apply to the tenth comment, just as it did to the first…?
I don’t accept this “causing harm to Bob” stipulation. It’s basically impossible for that to happen (excepting certain scenarios such as “I post Bob’s private contact info” or “I reveal an important secret of Bob’s” or something like that; presumably, this is not what we’re talking about).
That aside: yes, the purpose of participating in a public discussion on a public discussion forum is (or should be!) public benefit. That is how I think about commenting on LessWrong, at any rate.
I will again note that I find it perplexing to have to explain this. The alternative view (where one views a discussion in the comments on a LessWrong post as merely an interaction between two individuals, with no greater import or impact) seems nigh-incomprehensible to me.
Thank you for clarifying that your motivation in writing the tenth comment is to altriusitically benefit the general public at large. That you are making a conscious attempt to win in this scenario by writing the tenth comment.
I suspect that this is belief in belief. Suppose that we were able to measure the impact of your tenth comment. If someone offered you a bet that this tenth comment would have a net negative overall impact on the general public, at 1-to-1 odds, for a large sum of money, I don’t think you would take it because I don’t think you actually predict the tenth comment to have this net positive impact.
Because you have more information after the first nine comments. You have reason to believe that Bob finds the discussion to be unpleasant, that you are unlikely to update his beliefs, and that he is unlikely to update yours.
Hm. “Cause” might be oversimplifying. In the situation I’m describing let’s suppose that Bob is worse off in the world where you write the tenth comment than he is in the counterfactual world where you don’t. What word/phrase would you use to describe this?
My belief here is that impact beyond the two individuals varies. Sometimes lots of other people are following the conversation. Sometimes they get value out of it, sometimes it has a net negative impact on them. Sometimes few other people follow the conversation. Sometimes zero other people follow it.
I expect that you share this set of beliefs and that basically everyone else shares this set of beliefs.
This is not an accurate summary.
It seems like you’re trying very hard to twist my words so as to make my views fit into your framework. But they don’t.
None of that is either particularly relevant to the considerations described, that affect my decision to write a comment.
I would describe it like you just did there, I guess, if I were inclined to describe it at all. But I generally wouldn’t be. (I say more about this in the thread I linked earlier.)
This seems to be some combination of “true but basically irrelevant” (of course more people read some comment threads than others, but so what?) and “basically not true” (a net negative impact? seems unlikely unless I lie or otherwise behave unethically, which I do not). None of this has any bearing on the fact that comments on a public forum aren’t just written for one person.
I usually find that I get negative value out of “said posts many comments drilling into an author to get a specific concern resolved”. usually, if I get value from a Said comment thread, it’s one where said leaves quickly, either dissatisfied or satisfied; when Said makes many comments, it feels more like polluting the commons by inducing compute for me to figure out whether the thread is worth reading (and I usually don’t think so). if I were going to make one change to how said comments, it’s to finish threads with “okay, well, I’m done then” almost all the time after only a few comments.
(if I get to make two, the second would be to delete the part of his principles that is totalizing, that asserts that his principles are correct and should be applied to everyone until proven otherwise, and replace it with a relaxation of that belief into an ensemble of his-choice-in-0.0001<x<0.9999-prior-probability context-specific “principle is applicable?” models, and thus can update away from the principles ever, rather than assuming anyone who isn’t following the principles is necessarily in error.)
What specific practical difference do you envision between the thing that you’re describing as what you want me to believe, and the thing that you think I currently believe? Like, what actual, concrete things do you imagine I would do differently, if your wish came true?
(EDIT: I ask this because I do not recognize, in your description, anything that seems like it accurately describes my beliefs. But maybe I’m misunderstanding you—hence the question.)
well, in this example, you are applying a pattern of “What specific practical difference do you envision”, and so I would consider you to be putting high probability on that being a good question. I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward shannon reasons, and my intuitive model of human brains predicts you have it already. and so in the original claim, I was saying that you seem to have frameworks that prescribe behaviors like “what practical difference”, which are things like—at a guess—“if a suggestion isn’t specific enough to be sure I’ve interpreted correctly, ask for clarification”. I do that sometimes, but you do it more. and there are many more things like this, the more general pattern is my point.
anyway gonna follow my own instructions and cut this off here. if you aren’t able to extract useful bits from it, such as by guessing how I’d have answered if we kept going, then oh well.
I see… well, maybe it will not surprise you to learn that, based on long and much-repeated experience, I consider that approach to be vastly inferior. In my experience, it is impossible for me to guess what anyone means, and also it is impossible for anyone else to guess what I mean. (Perhaps it is possible for other people to guess what other people mean, but what I have observed leads me to strongly doubt that, too.) Trying to do this impossible thing reliably leads to much more wasted communication. Asking is far, far superior.
In short, it is not that I haven’t considered doing things in the way that you suggest. I have considered it, and tried it, and had it tried on me, many times. My conclusion has been that it’s impossible to succeed and a very bad idea to try.
Hm. I’m realizing that I’ve been presuming that you are at least roughly consequentialist and are trying to take actions that lead to good consequences for affected parties. Maybe that’s not true though.
But if it is true, here is how I am thinking about it. We can divide affected parties into 1) you, 2) Bob, and 3) others. We’ve stipulated that with the tenth comment you expect it to negatively affect Bob. So then, I’d think that’d mean that your reason for posting the tenth comment is that you expect the desirable consequences for you and others to outweigh the undesirable consequences for Bob.
Furthermore, you’ve emphasized “public benefit” and the fact that this is a public forum. You also haven’t indicated that you have particularly selfish motives that would make you want to do things that benefit you at the expense of others, at least not to an unusual degree. So then, I presume that the expected benefit to the third group—others—is the bulk of your reason for posting the tenth comment.
I’m sorry that it came across that way. I promise that I am not trying to twist your words. I just would like to understand where you are coming from.
“Roughly consequentialist” is a basically apt label. But as I have written a few times, act consequentialism is pretty obviously non-viable; the only reasonable way to be a consequentialist is rule consequentialism.
This makes your the reasoning you outline in your second paragraph inapplicable and inappropriate.
I describe my views on this a bit in the thread I linked earlier. Some more relevant commentary can be found in this comment (Cmd-F “I say and write things” for the relevant ~3 paragraphs, although that entire comment thread is at least partly relevant to this discussion, as it talks about consequentialism and how to implement it, etc.).
Thanks for clarifying, Said. That is helpful.
I skimmed each of the threads you linked to.
One thing I want to note is that I hear you and agree with you about how these comments are taking place in public forums and that we need to consider their effects beyond the commenter and the person being replied to.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect. I will outline some things about my model of the world and would love to hear about how it meshes with your model.
Components of my model:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily.
This hypothetical exchange is likely to be perceived as hostile and adversarial.
When perceived that way, people tend to enter a soldier-like mindset.
People are rather bad at updating their believes when they have such a mindset.
Being in a soldier mindset might cause them to, I’m not sure how to phrase this, but something along the lines of practicing bad epidemics, and this leading to them being weaker epistemically moving forward, not stronger.
I guess this doesn’t mesh well with the hypothetical I’ve outlined, but I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
(I’m at the gym on my phone and was a little loose with my language and thinking.)
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act consequentialist approach and one should not attempt this. Instead they should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
… I get the sense that you haven’t been reading my comments at all.
I didn’t claim that I “expect [my] hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect”. I explicitly disclaimed the view (act consequentialism) which involves evaluation of this question at all. The last time you tried to summarize my view in this way, I specifically said that this is not the right summary. But now you’re just repeating that same thing again. What the heck?
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
That having been said, I do want to comment on your outlined model a bit:
First of all, “the effects on Bob” of my comments are Bob’s own business, not mine.
Let’s be clear about what it is that we’re not discussing. We’re not talking about “effects on Bob” that are of the form “other people read my comment and then do things that are bad for Bob” (which would happen if e.g. I doxxed Bob, or posted defamatory claims, etc.). We’re not talking about “effects on Bob” that come from the comment just existing, regardless of whether Bob ever read it (e.g., erroneous and misleading descriptions of Bob’s ideas). And we’re definitely not talking about some sort of “basilisk hack” where my comment hijacks Bob’s brain in some weird way and causes him to have seizures (perhaps due to some unfortunate font rendering bug).
No, the sorts of “effects” being referred to, here, are specifically and exclusively the effects, directly on Bob, of Bob reading my comments (and understanding them, and thinking about them, etc.), in the normal way that humans read ordinary text.
Well, for one thing, if Bob doesn’t want to experience those effects, he can just not read the comment. That’s a choice that Bob can make! “Don’t like, don’t read” applies more to some things than others… but it definitely applies to some obscure sub-sub-sub-thread of some discussion deep in the weeds of the comment section of a post on Less Wrong dot com.
But also, and more generally, each person is responsible for what effects reading some text has on them. (We are, again, not talking about some sort of weird sci-fi infohazard, but just normal reading of ordinary text written by humans.) Part of being an adult is that you take this sort of very basic responsibility for how things affect your feelings, and if you don’t like doing something, you stop doing it. Or not! Maybe you do it anyway, for any number of reasons. That’s your call! But the effects on you are your business, not anyone else’s.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
If that perception is correct, then it is right and proper to perceive it thus. If it is incorrect, then the one who mis-perceives it thus should endeavor to correct their error.
Maintaining good epistemics in the face of pressure is an important rationality skill—one which it benefits everyone to develop. And the “pressure” involved in arguing with some random nobody on LessWrong is one of the mildest, most consequence-free forms of pressure imaginable—the perfect situation for practicing those skills.
If our hypothetical Bob thinks this, then he should have no problem at all disengaging from the discussion, and ignoring all further replies in the given thread. “I think that this is not important enough for me to continue spending my time on it, so thank you for the discussion thus far, but I won’t be replying further” is a very easy thing to say.
Then perhaps these hypothetical readers should develop and practice the skill of “not continuing to waste their time reading things which they can see are a waste of their time”. “Somehow finding yourself doing something which you don’t endorse” is a general problem, and thus admits of general solutions. It is pointless to try to take responsibility for the dysfunctional internet-forum-reading habits of anyone who might ever read one’s comments on LessWrong.
I don’t have the strongest grasp of what rule consequentialism actually means. I’m also very prone to thinking about things in terms of expected value. I apologize if either of these things has led to confusion or misattribution.
My understanding of rule consequentialism is that you choose rules that you think will lead to the best consequences and then try to follow those rules. But it is also my understanding that it is often a little difficult to figure out which rules apply to which situations, and so in practice some object-level thinking about expected consequences bleeds in.
It sounds like that is not the case here, though. It sounds like here you have rules that clearly apply to this decision to post the tenth comment, and you are not thinking about expected consequences. Is that correct? If not, would you mind clarifying what is?
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say (1) that “tough love” is good because, despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say (2) that if someone is “soft”, then their experiences don’t matter.
This isn’t a perfect analogy, but I think it gestures at something important, in the ballpark of what we’re talking about. I’m having trouble putting my finger on it. Do you think there is something useful here, perhaps with some amendments? Would you like to comment on where you stand on (1) vs. (2)?
I’ll also try to ask a more concrete question here. Are you saying (a) that taking the effects on Bob into account will lead to worse consequences for society as a whole (i.e., Bob plus everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying (b) that the effects on Bob simply don’t matter at all?
Sure, that’s basically true. Let’s say, provisionally, that this is a reasonable description.
I’m talking about stuff like this:
Now, is that the only rule that applies to situations like this (i.e., “writing comments on a discussion forum”)? No, of course not. Many other rules apply. It’s not really reasonable to expect me to enumerate the entirety of my moral and practical views in a comment.
As for why I’ve selected the rules… it’s because I think that they’re the right ones, of course.
Like, at this point we’ve moved into “list and explain all of your opinions about morality and also about everything else”. And, man, that is definitely a “we’re gonna be here all day or possibly all year or maybe twelve years” sort of conversation.
Well, yes, those are indeed two different things. But also, neither of them are things that I’ve said, so neither of them seems relevant…?
I think that you’re reading things into my comments that are not the things that I wrote in those comments. I’m not sure what the source of the confusion is.
Well, things don’t just “matter” in the abstract, they only matter to specific people. I’m sure that the effects on Bob of Bob reading my comments matter to Bob. This is fine! Indeed, it’s perfect: the effects matter to Bob, and Bob is the one who knows best what the effects are, and Bob is the one best capable of controlling the effects, so a policy of “the effects on Bob of Bob reading my comments are Bob’s to take care of” is absolutely ideal in every way.
And, yes indeed, it would be very bad for society as a whole (and relevant subsets thereof, such as “the participants in this discussion forum”) if we were to adopt the opposite policy. (Indeed, we can see that it is very bad for society, almost every time we do adopt the opposite policy.)
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule-consequentialist reasoning here.
This is starting to feel satisfying, like I finally understand where you are coming from. I have a relatively strong curiosity here; I genuinely want to understand your view.
It sounds like there are rules such as “saying things that are true, relevant and at least somewhat important” that you strongly believe will lead to the best outcomes for society. These rules apply to the decision to post the tenth comment, and so you follow the rule and post the comment.
So, to be clear: would it be accurate to say that you would choose (a) rather than (b) in my previous question, perhaps with some amendments or caveats?
I’m trying to ask what you value.
And as for listing out your entire moral philosophy, I am certainly not asking for that. I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
Right.
I guess I’d have to think about it. The “rules” that are relevant to this sort of situation have always seemed to me to be both very obvious and also continuous with general principles of how to live and act, so separating them out is not easy.
I think your comment here epitomizes what I value about your posting. I’m not here to feel good about myself; I want to learn stuff correctly the first time. If I want to be coddled, I can go to my therapist.
I also think that there’s a belief in personal agency that we share. No one is required to read or comment, and I view even negative comments as a valuable gift of the writer’s time and energy.
I wish I could write as sharply and intelligently as you do. Most people waste too many words not saying anything with any redeeming factor except social signaling. (At least when I waste words, I try to make it funny and interesting, which is not much better, but is intended as a sort of unspoken apology.)
Yep, makes sense.
I hope that, at least, you now have some idea of why I view such suggestions as “why can’t you just write more nicely” as something less than an obviously winning play.
EDIT: The parent comment was heavily edited after I posted this reply; originally it contained only the first paragraph. The text above is a reply to that. I will reply to the edited-in parts in a sibling comment.
(Sorry about the edit, Said, and thank you for calling it out and stating your intent. I was going to DM you, but figured you might not receive it due to some sort of moderation action, which is unfortunate. I figured there was a good chance you’d see the edit, so I decided to wait a few hours before replying, to let you know I had edited the comment.)