Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position: if this were the logic behind moderation at Less Wrong, and that moderation had teeth (as in, I couldn’t just effectively ignore it, and/or everyone else was following such principles), I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’, then… we’re done, right?
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although I may have chosen those words badly. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, to get people to take actions better aligned with their goals, to keep people from acting on false expectations of what their actions will accomplish, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one’s voice trembles, without first charting out in detail all the potential consequences (unless there is some obvious reason for big worry, which is a notably rare exception; please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something), is also very important. And that this goes double and more for those of us participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions for the extracted resources were good. If you disagree, let’s talk about that. But often it’s not that. Often it’s just that side effects and unintended consequences are a thing, and sometimes things don’t benefit from a particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be,” because I think the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing is bad. And sometimes that means you shouldn’t say it (e.g. the blueprint for a biological weapon). Sometimes the consequence is just that this thing is boring and off-topic and would waste people’s time, so don’t do that! Or that it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
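To make the shape of that rule concrete, here is a minimal toy sketch in Python. It is purely illustrative: Y and Z are obviously not scalars anyone can actually compute, the function and parameter names are hypothetical stand-ins, and the point is the structure of the decision, not the numbers.

```python
# A toy formalization of the X/Y/Z reasoning above. Illustrative only:
# real consequences are neither scalar nor knowable in advance.

def decide_whether_to_say_x(y_benefit: float,
                            z_harm: float,
                            z_prevention_cost: float,
                            z_is_the_goal: bool) -> str:
    """Toy decision rule for stating a true thing X with benefit Y and harm Z."""
    if z_is_the_goal:
        # Causing Z on purpose, with truth as the cover story, is not excused.
        return "culpable: truth is no defense here"
    if z_harm > y_benefit:
        # Z > Y (and a fortiori Z >> Y): consider staying quiet,
        # or look for a better way to say X that avoids Z.
        return "consider shutting up, or reword X to avoid Z"
    if z_prevention_cost < z_harm:
        # Preventing Z is efficient: say X and also prevent Z, since
        # preventing cheap, close-by harm is generally worth doing.
        return "say X and also prevent Z"
    # Y > Z and preventing Z is inefficient: say X and accept Z as a
    # side effect, without being liable for it.
    return "say X; accept Z as a side effect"
```

The genuinely hard part, which the sketch deliberately leaves out, is who gets to estimate Y and Z, and what happens when different people’s estimates disagree.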
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that this person is lying, and they say sure, they were lying, but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, then I don’t want to stand anywhere near a place where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. One could claim that this in turn has additional bad effects. But it isn’t an obviously bad effect! It’s a by-default good effect. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, then, again, I’m confused about why this website is interesting to you; please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
-----------------------------------------------------------------------------

I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
I am glad you shared it, and I’m sorry for the underlying reality you’re reporting on. I didn’t and don’t want to cause you stress or feelings of threat, nor to win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you’d like to describe the elements you didn’t like, I’ll try hard to avoid them going forward.
(*I did feel frustrated that it seemed to me you didn’t really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that the burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on; reviewing my comment, I can somewhat see it. I’m sorry if I caused distress in that way.)
It might be more productive to switch to a higher-bandwidth channel going forward. I thought this written format would have the benefits of leaving a ready record we could perhaps share afterwards, and of making it easier to communicate more complicated ideas, but maybe those benefits are outweighed.
I do want to make progress in this discussion, and want to persist until it’s clear we can make no further progress. I think it’s a damn important topic, and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but because maybe yours is right. I also want to feel that you’ve understood the considerations salient to me and have offered your best rejection of them (rather than a rejection of a misunderstanding of them), which means I’d like to know you can pass my Ideological Turing Test (ITT). We might not reach agreement at the end, but I’d at least like it if we can pass each other’s ITTs.
I think it’s better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we’re trying to build with this discussion, to borrow Ray’s terminology.
The couple of things I do want to respond to now are:
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
I definitely did not know that we all agreed to that, it’s quite helpful to have heard it.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although I may have chosen those words badly.
1. I haven’t read your writings on Blackmail (or anyone else’s beyond one or two posts, and I can’t remember the content of those). There was a lot to read in that debate and I’m slightly averse to contentious topics; I figured I’d come back to the discussions later, after they’d died down, if it seemed a priority. In short, nothing I’ve written above is derived from your stated positions in Blackmail. I’ll go read it now, since it seems it might provide clarity on your thinking.
2. I wonder if you’ve misinterpreted what I meant. In case this helps: I didn’t mean to say that I think any party in this discussion believes that if you’re saying true things, then it’s okay to be doing anything else whatsoever with your speech (“complete absolution of responsibility”). I meant to say that if you don’t have some means of preventing people from abusing your policies, then that abuse will happen even if you think it shouldn’t. Something like: moderators can punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don’t backfire even worse. That’s the part that gets fuzzy and difficult for me.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
This section makes me think we have more agreement than I thought, though definitely not complete agreement. I suspect that one thing that would help would be to discuss concrete examples rather than the principles in the abstract.