I’m out on additional long-form discussion here in written form (as opposed to phone/Skype/Hangout), but I want to highlight this:
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
(If you want more detail on my position, I endorse Jessica’s Dialogue on Appeals to Consequences).
Ah, again, thanks for clarifying that.
Ah. That’s my bad for conflating my mental concept of “POINTS!” (a reference mostly to the former At Midnight show, which I’ve generalized) with points in the form of Karma points. I think of generic ‘points’ as the vague mental accounting people do with respect to others by default. When I said I shouldn’t have to say ‘points’, I meant that I shouldn’t have to say words, but I certainly also meant I shouldn’t have to literally give you actual points!
And yeah, the whole metaphor is already a sign that things are not where we’d like them to be.
I don’t think the above is a reasonable statement of my position.
The above doesn’t think of true statements made here mostly in terms of truth-seeking; it thinks of words as mostly a form of social game playing aimed at causing particular world effects. As methods of attack requiring “regulation.”
I don’t think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I’d want to be.
Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position, if this were the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn’t just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’, then… we’re done, right?
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although perhaps I chose those words badly. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, to get people to take actions better aligned with their goals, to avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should further be presumed that reinforcing the principle/virtue that one speaks the truth even if one’s voice trembles, and without first charting out in detail all the potential consequences, is also very important; the exception, where there is some obvious reason for big worry, is notably rare (please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something). And this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let’s talk about that. But also often not that. Often it’s just, side effects and unintended consequences are a thing, and sometimes things don’t benefit from particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be” because I think that the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn’t say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people’s time, so don’t do that! Or it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is a big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
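To compress the last few paragraphs into symbols (my own rough summary sketch, not a formal theory):

```latex
% A rough compression of the X/Y/Z reasoning above; a sketch, not a formal theory.
\[
\text{Say } X \text{ when } Y > Z; \qquad \text{consider shutting up when } Z \gg Y
\text{ and no better phrasing of } X \text{ avoids } Z.
\]
\[
\text{Blame for } Z \text{ attaches when } Z \text{ was the purpose;} \quad
\text{possibly when } Z \gg Y \text{ and } Z \text{ was foreseeable;} \quad
\text{otherwise not.}
\]
```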
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure, they were lying, but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don’t want to stand anywhere near where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn’t an obviously bad effect! It’s a by-default good effect to do this. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, again, I’m confused why this website is interesting to you, please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one’s accurate speech—in an inevitably Asymmetric Justice / Copenhagen Interpretation of Ethics fashion—seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences to true speech that one may not like and might want to avoid does not mean it is unreasonable to point out that the subclass of such consequences that seems to be in play in these examples is a subclass much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.
And echo Jessica that it’s not reasonable to say that all of this is voluntary within the frame you’re offering, if the response to not doing it is to not be welcome, or to be socially punished. Regardless of what standards one chooses.
I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.
At some point I hope to write a virtue ethics sequence, but it’s super hard to describe it in written form, and every time I think about it I assume that even if I do get it across, people who speak philosopher better than I do will technically pick anything I say to pieces and all that, and I get an ugh field around the whole operation, and assume it won’t really work at getting people to reconsider. Alas.
Agree strongly with this decomposition of integrity. They’re definitely different (although correlated) things.
My biggest disagreement with this model is that the first form (structurally integrated models) seems to me to be something broader? Something like, you have structurally integrated models of how things work and what matters to you, and take the actions suggested by the models to achieve what matters to you based on how things work?
Need to think through this in more detail. One can have what one might call integrity of thought without what one might call integrity of action based on that thought—you have the models, but others/you can’t count on you to act on them. And you can have integrity of action without integrity of thought, in the sense that you can be counted on to perform certain actions in certain circumstances; you’ll do them whether or not it makes any sense, but you can at least be counted on. Or you can have both.
And I agree you have to split integrity of action into keeping promises when you make them slash following one’s own code, and keeping to the rules of the system slash following others’ codes, especially codes that determine what is blameworthy. To me, that third special case isn’t integrity. It’s often a good thing, but it’s a different thing—it counts as integrity if and only if one is following those rules because of one’s own code saying one should follow the outside code. We can debate under what circumstances that is or isn’t the right code, and should.
So I think for now I have it as Integrity-1 (Integrity of Thought) and Integrity-2 (Integrity of Action), and a kind of False-Integrity-3 (Integrity of Blamelessness) that is worth having a name for, and tracking who has and doesn’t have it in what circumstances to what extent, like the other two, but isn’t obviously something it’s better to increase than decrease by default. Whereas Integrity-1 is by default to be increased, as is Integrity-2, and if you disagree with that, this implies to me there’s a conflict causing you to want others to be less effective, or you’re otherwise trying to do extraction or be zero sum.
(5) Splitting for threading.
Wow, this got longer than I expected. Hopefully it is an opportunity to grok the perspective I’m coming from a lot better, which is why I’m trying a bunch of different approaches. I do hope this helps, and helps you appreciate why a lot of the stuff going on lately has been so worrying to some of us.
Anyway, I still have to give a response to Ray’s comment, so here goes.
Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it’s because that answer is sickeningly political! It’s saying “First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they’re slaughtering all these people, that you consider having them do less of that?”
I mean, that’s not fair. But it’s also not all that unfair, either.
On (2) we strongly agree.
Pacifists who say “we should disband the military” may or may not be making the mistake of not appreciating the military—they may appreciate it but also think it has big downsides or is no longer needed. And while I currently think the answer is “a lot,” I don’t know to what extent the military should be appreciated.
As for appreciation of people’s efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don’t have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don’t appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won’t name them in print, but might in conversation.
So I don’t think there’s a missing mood, exactly. But even if there was, and I did appreciate that, there is something about just about everyone I appreciate, and things about them I don’t, and I don’t see why I’m reiterating things ‘everybody knows’ are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.
That doesn’t mean that I wouldn’t reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying “I see you trying to do a thing! I think it’s harmful and you should stop.” and you saying “oops!” should net you points without me having to say “POINTS!”
(4) Splitting for threading.
Pure answer / summary.
The nature of this ‘should’ is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given I am sharing true and relevant information, any updates are likely to be accurate.
The meta-ethical framework I’m using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.
I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one’s information in order to score political points, so don’t do that. But it’s also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.
The power of this “should” is that I’m denying the legitimacy of coercing me into doing something in order to maintain someone else’s desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why “should” attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.
The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.
But the questions here in that last paragraph seem to imply that I should shape my information sharing primarily based on what I expect the social reaction to my statements to be, rather than share my information in order to improve people’s maps and create clarity. That’s rhetoric, not discourse, no?
(3) (Splitting for threading)
Sharing true information, or doing anything at all, will cause people to update.
Some of those updates will cause some probabilities to become less accurate.
Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible in an Asymmetric Justice fashion for every probability estimate change and status evaluation delta in people’s heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am now responsible for it? What does anything even have to do with anything?
Should I have to worry about how my information telling you about Bayesian probability impacts the price of tea in China?
Why should the burden be on me to explain ‘should’ here, anyway? I’m not claiming a duty, I’m claiming a negative, a lack of duty—I’m saying that I do not, by sharing information, thereby take on the burden of preventing all negative consequences of that information to individuals, in the form of others making Bayesian updates.
Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.
Thus, if you think that I should be responsible, then I would turn the question around, and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is we raise and lower the status of people. In which case, I have better ways of doing that than being here at LW and so do you!
(2) (Splitting these up to allow threading)
Sharing true information will cause people to update.
If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. “I appreciate all the effort you have put in towards various causes, I think that otherwise you’re a great person and I’m a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn’t shot me in the face. Twice.”)
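A toy Bayes calculation, with numbers invented purely for illustration, shows the mechanism behind the two questions above: once praise-before-criticism is standard practice, the praise itself becomes evidence that a negative update is coming.

```python
# Toy illustration; all numbers are invented for the sake of the example.
# If praise is standard practice before sharing negative information,
# then hearing praise is itself evidence that negative information follows.

prior_negative = 0.30            # P(speaker is about to share negative info)
p_praise_if_negative = 0.95      # norm: praise almost always precedes criticism
p_praise_if_not = 0.20           # unprompted praise is rarer

p_praise = (p_praise_if_negative * prior_negative
            + p_praise_if_not * (1 - prior_negative))

posterior = p_praise_if_negative * prior_negative / p_praise
print(f"P(negative info coming | praise) = {posterior:.2f}")
# Prints ~0.67: hearing the praise raised P(bad news) from 0.30 to 0.67.
```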
(1) Glad you asked! Appreciate the effort to create clarity.
Let’s start off with the recursive explanation, as it were, and then I’ll give the straightforward ones.
I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It’s a great question to be asking if you don’t understand, or are unsure if you understand or not, and you want to know. If you’re confused about this, and especially if others are as well, it’s important to clear it up.
Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.
On the other hand, what if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people’s social reactions rather than creation of clarity, lest I cede that I and others bear the moral burden of maintaining the status relations others desire as our primary motivation when sharing information? Or if I thought the point was to point out that I was using “should,” which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus one should ignore the information content in favor of this error? Or if in general I did not think this question was asked in good faith?
Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.
(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)
I agree that these are (sometimes) legitimate things to do, and that people often use the ‘everybody knows’ framing to do them implicitly. But I think that using this framing, rather than saying the thing more explicitly, is useful for those trying to do other things, and counter-productive for those trying to do the exact things you are describing, unless they also want to do other things.
For #1, the reason we do that is exactly because it is likely that not everyone in the room knows (even though they really should if they are in the room) and the people who don’t know are going to be lost if you don’t tell them. And certainly not everyone knows there are 20 amino acids (e.g. I didn’t know that and will doubtless not remember it tomorrow).
I find your example in #2 to be on point: I am highly confident that far from everyone knows what happens if trash bags are left outside the dumpster. I actually had another mode in there at one point to describe the form “I thought that everyone knew X, but it turned out I was wrong,” because in my experience that’s how this actually comes up.
Also important to note that ‘learn Calculus this week’ is a thing a person can do fairly easily without being some sort of math savant.
(Presumably not the full ‘know how to do all the particular integrals and be able to ace the final’ perhaps, but definitely ‘grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that’s harder than that.’ To ace the final you’ll need two weeks.)
The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It’s quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don’t ask ‘what would cause people I love to die less often’ at all, which my model says is because that question doesn’t even parse to them.
Noting that this was suggested to me by the algorithm, and presumably shouldn’t be eligible for that.
A ‘remind me what recommendations you’ve given me recently’ list being available to be clicked on might be nice?