One thing I want to note: I hear you, and agree with you, that these comments are taking place in public forums and that we need to consider their effects beyond the commenter and the person being replied to.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect. I will outline some things about my model of the world and would love to hear about how it meshes with your model.
Components of my model:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily.
This hypothetical exchange is likely to be perceived as hostile and adversarial.
When perceived that way, people tend to enter a soldier-like mindset.
People are rather bad at updating their beliefs when they have such a mindset.
Being in a soldier mindset might cause them to, I’m not sure how to phrase this, practice something like bad epistemics, leaving them epistemically weaker moving forward, not stronger.
I guess this doesn’t mesh well with the hypothetical I’ve outlined, but I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
(I’m at the gym on my phone and was a little loose with my language and thinking.)
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act-consequentialist approach, and that one should not attempt it. Instead, one should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect.
… I get the sense that you haven’t been reading my comments at all.
I didn’t claim that I “expect [my] hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect”. I explicitly disclaimed the view (act consequentialism) which involves evaluation of this question at all. The last time you tried to summarize my view in this way, I specifically said that this is not the right summary. But now you’re just repeating that same thing again. What the heck?
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act-consequentialist approach, and that one should not attempt it. Instead, one should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
That having been said, I do want to comment on your outlined model a bit:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily.
First of all, “the effects on Bob” of my comments are Bob’s own business, not mine.
Let’s be clear about what it is that we’re not discussing. We’re not talking about “effects on Bob” that are of the form “other people read my comment and then do things that are bad for Bob” (which would happen if e.g. I doxxed Bob, or posted defamatory claims, etc.). We’re not talking about “effects on Bob” that come from the comment just existing, regardless of whether Bob ever read it (e.g., erroneous and misleading descriptions of Bob’s ideas). And we’re definitely not talking about some sort of “basilisk hack” where my comment hijacks Bob’s brain in some weird way and causes him to have seizures (perhaps due to some unfortunate font rendering bug).
No, the sorts of “effects” being referred to, here, are specifically and exclusively the effects, directly on Bob, of Bob reading my comments (and understanding them, and thinking about them, etc.), in the normal way that humans read ordinary text.
Well, for one thing, if Bob doesn’t want to experience those effects, he can just not read the comment. That’s a choice that Bob can make! “Don’t like, don’t read” applies more to some things than others… but it definitely applies to some obscure sub-sub-sub-thread of some discussion deep in the weeds of the comment section of a post on Less Wrong dot com.
But also, and more generally, each person is responsible for what effects reading some text has on them. (We are, again, not talking about some sort of weird sci-fi infohazard, but just normal reading of ordinary text written by humans.) Part of being an adult is that you take this sort of very basic responsibility for how things affect your feelings, and if you don’t like doing something, you stop doing it. Or not! Maybe you do it anyway, for any number of reasons. That’s your call! But the effects on you are your business, not anyone else’s.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
This hypothetical exchange is likely to be perceived as hostile and adversarial.
If that perception is correct, then it is right and proper to perceive it thus. If it is incorrect, then the one who mis-perceives it thus should endeavor to correct their error.
Being in a soldier mindset might cause them to, I’m not sure how to phrase this, practice something like bad epistemics, leaving them epistemically weaker moving forward, not stronger.
Maintaining good epistemics in the face of pressure is an important rationality skill—one which it benefits everyone to develop. And the “pressure” involved in arguing with some random nobody on LessWrong is one of the mildest, most consequence-free forms of pressure imaginable—the perfect situation for practicing those skills.
I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
If our hypothetical Bob thinks this, then he should have no problem at all disengaging from the discussion, and ignoring all further replies in the given thread. “I think that this is not important enough for me to continue spending my time on it, so thank you for the discussion thus far, but I won’t be replying further” is a very easy thing to say.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
Then perhaps these hypothetical readers should develop and practice the skill of “not continuing to waste their time reading things which they can see are a waste of their time”. “Somehow finding yourself doing something which you don’t endorse” is a general problem, and thus admits of general solutions. It is pointless to try to take responsibility for the dysfunctional internet-forum-reading habits of anyone who might ever read one’s comments on LessWrong.
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
I don’t have the strongest grasp of what rule consequentialism actually means. I’m also very prone to thinking about things in terms of expected value. I apologize if either of these things has led to confusion or misattribution.
My understanding of rule consequentialism is that you choose rules that you think will lead to the best consequences and then try to follow those rules. But it is also my understanding that it is often a little difficult to figure out what rules apply to what situations, and so in practice some object-level thinking about expected consequences bleeds in.
It sounds like that is not the case here, though. It sounds like here you have rules you are following that clearly apply to this decision to post the tenth comment and you are not thinking about expected consequences. Is that correct? If not, would you mind clarifying what is true?
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say that 1) “tough love” is good because despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say 2) that if someone is “soft” then their experiences don’t matter.
This isn’t a perfect analogy, but I think that it is gesturing at something that is important and in the ballpark of what we’re talking about. I’m having trouble putting my finger on it. Do you think there is something useful here, perhaps with some amendments? Would you like to comment on where you stand on (1) vs (2)?
I’ll also try to ask a more concrete question here. Are you saying (a) that taking the effects on Bob into account will lead to less good consequences for society as a whole (i.e., Bob + everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying (b) that the effects on Bob simply don’t matter at all?
It sounds like here you have rules you are following that clearly apply to this decision to post the tenth comment and you are not thinking about expected consequences. Is that correct? If not, would you mind clarifying what is true?
Sure, that’s basically true. Let’s say, provisionally, that this is a reasonable description.
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
I’m talking about stuff like this:
I say and write things because I consider those things to be true, relevant, and at least somewhat important.
Now, is that the only rule that applies to situations like this (i.e., “writing comments on a discussion forum”)? No, of course not. Many other rules apply. It’s not really reasonable to expect me to enumerate the entirety of my moral and practical views in a comment.
As for why I’ve selected the rules… it’s because I think that they’re the right ones, of course.
Like, at this point we’ve moved into “list and explain all of your opinions about morality and also about everything else”. And, man, that is definitely a “we’re gonna be here all day or possibly all year or maybe twelve years” sort of conversation.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say that 1) “tough love” is good because despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say 2) that if someone is “soft” then their experiences don’t matter.
Well, yes, those are indeed two different things. But also, neither of them are things that I’ve said, so neither of them seems relevant…?
Do you think there is something useful here, perhaps with some amendments?
I think that you’re reading things into my comments that are not the things that I wrote in those comments. I’m not sure what the source of the confusion is.
I’ll also try to ask a more concrete question here. Are you saying (a) that taking the effects on Bob into account will lead to less good consequences for society as a whole (i.e., Bob + everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying (b) that the effects on Bob simply don’t matter at all?
Well, things don’t just “matter” in the abstract, they only matter to specific people. I’m sure that the effects on Bob of Bob reading my comments matter to Bob. This is fine! Indeed, it’s perfect: the effects matter to Bob, and Bob is the one who knows best what the effects are, and Bob is the one best capable of controlling the effects, so a policy of “the effects on Bob of Bob reading my comments are Bob’s to take care of” is absolutely ideal in every way.
And, yes indeed, it would be very bad for society as a whole (and relevant subsets thereof, such as “the participants in this discussion forum”) if we were to adopt the opposite policy. (Indeed, we can see that it is very bad for society, almost every time we do adopt the opposite policy.)
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule consequentialist reasoning here.
This is starting to feel satisfying, like I’m beginning to understand where you are coming from. I have a relatively strong curiosity here; I genuinely want to understand your position.
It sounds like there are rules such as “saying things that are true, relevant, and at least somewhat important” that you strongly believe will lead to the best outcomes for society. These rules apply to the decision to post the tenth comment, and so you follow the rule and post the comment.
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule consequentialist reasoning here.
So, to be clear, would it be accurate to say that you would choose (a) rather than (b) in my previous question? Perhaps with some amendments or caveats?
I’m trying to ask what you value.
And as for listing out your entire moral philosophy, I am certainly not asking for that. I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
So, to be clear, would it be accurate to say that you would choose (a) rather than (b) in my previous question? Perhaps with some amendments or caveats?
Right.
I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
I guess I’d have to think about it. The “rules” that are relevant to this sort of situation have always seemed to me to be both very obvious and also continuous with general principles of how to live and act, so separating them out is not easy.
I think your comment here epitomizes what I value about your posting. I’m not here to feel good about myself; I want to learn stuff correctly the first time. If I want to be coddled, I can go to my therapist.
I also think that there’s a belief in personal agency that we share. No one is required to read or comment, and I view even negative comments as a valuable gift of the writer’s time and energy.
I wish I could write as sharply and intelligently as you do. Most people waste too many words not saying anything with any redeeming factor except social signaling. (At least when I waste words I try to make it funny and interesting, which is not much better, but is intended as a sort of unspoken apology.)
Thanks for clarifying, Said. That is helpful.
I skimmed each of the threads you linked to.