That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
Our philosophy of language did not “somehow” get politicized. You personally (Zack M. Davis) politicized it by abusing it in the context of a political issue.
...Which might make it all the more gratifying if you can find a mistake in the racist bastard’s math: then you could call out the mistake in the comments and bask in moral victory as the OP gets downvoted to oblivion for the sin of bad math.
If you had interesting new math or non-trivial novel insights, I would not complain. Of course that’s somewhat subjective: someone else might consider your insights valuable.
But what, realistically, do you expect me to do?
You’re right, I don’t have a good meta-level solution. So, if you want to keep doing that thing you’re doing, knock yourself out.
I had a hard time tracking down the referent of the abuse mentioned in the parent post.
It does seem that the concept was employed in a political context. To my mind, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument toward a political end, it becomes politically relevant. However, it would be weird if any tool so employed automatically became part of politics.
If beliefs are to pay rent, and this particular point is established / marketed in order to establish another specific point, I could get on board with an expectation to disclose such “financial ties”. Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don’t fully get why it would be troublesome to reveal it.
See my reply to Said Achmiz.
I don’t really have a dog in whatever fight this is, but looking at Zack’s posts and comments recently, I see nothing but interesting and correct insights and analysis, devoid of any explicit politics (but perhaps yielding insights about such?). How can you call this “abuse”? The overwhelming majority of the content that gets posted to Less Wrong these days should aspire to the level of quality of the stuff I just linked!
The abuse did not happen on LW. However, because I happen to be somewhat familiar with Davis’ political writing, I am aware of a sinister context to what ey write on LW of which you are not aware. Now, you may say that this is not a fair objection to Davis writing whatever ey write here, and you might well be right. However, I thought I at least had the right to express my feelings on this matter so that Davis and others can take them into account (or not). If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
… a “sinister context”?!
I am, frankly, appalled to read this sort of thing on Less Wrong. You are, in all seriousness, attacking someone’s writings about abstract epistemology and Bayesian inference, on Less Wrong, of all places (!!), not because there is anything at all mistaken about them, but because of some alleged “sinister context” that you are bringing in from somewhere else. To call this “not a fair objection” would be a gross understatement. It is shameful.
If we are supposed to be a community, then it should be normal for us to consider each other’s feelings, even when there was no norm violation per se involved, not so?
Absolutely not.
This sort of attitude is tremendously corrosive to productive discussion and genuine truth-seeking. We have discussed this before… and I am genuinely disappointed that this sort of thing is happening again.
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed? Specifically, I should be ashamed just for saying that something made me uncomfortable rather than suffering in silence? I mean, if that’s your vision, it’s fine, I understand. But I wonder whether that’s really the predominant opinion around here? What about all the stuff about “community” and “Village” etc?
Ugh, because productive discussion happens between perfectly dispassionate robots in a vacuum, and if I’m not one then it is my fault and I should be ashamed?
As discussed in the linked thread—it is none of my business, nor the business of any of your interlocutors, whether you are, or are not, a “perfectly dispassionate robot in a vacuum”, when it comes to discussions on subjects like the OP. That is not something which should enter into the discussion at all; it is simply off-topic.
If we permit the introduction of such questions as whether you feel uncomfortable (about the topic, or any on-topic claims) into discussions of abstract epistemology, or Bayesian inference, or logic, etc., when that discomfort in no way bears on the truth or falsity of the claims under discussion, then we might as well close up shop, because at that point, we have bid good-bye even to the pretense of “rationality”, much less the fact of it.
And if the “predominant opinion” disagrees—so much the worse for predominant opinion; and so much the sadder for Less Wrong.
Edit: And all this is, of course, not even mentioning your conflation of “I am uncomfortable” with insinuating comments about “sinister context”, and implications of wrongdoing on Zack’s part!
Alright, let’s suppose it’s off-topic in this thread, or even on this forum. But is there another place within the community’s “discussion space” where it is on-topic? Or you don’t think such a place should exist at all?
I’ve found /r/TheMotte (recently forked from /r/slatestarcodex) to be a good place to discuss politically-charged topics? (Again, also happy to talk privately sometime.)
I wasn’t referring to “where to discuss politically charged topics”, I was referring to “where to discuss the fact that something that happens on LessWrong.com makes me uncomfortable because [reasons]”.
To be honest I prefer to avoid politically charged topics, as long as they avoid me (which they didn’t, in this case).
I just want to chime in quickly to say that I disagree with Said here pretty heavily, but also don’t know that I agree with any other single person in the conversation, and articulating what I actually believe would require more time than I have right now.
I love that you’re willing to say that, but I’m a bit confused as to what purpose that comment serves. Without some indication of which parts you disagree with, and what things you DO believe, all this is saying is “I take no responsibility for what everyone is saying here”, which I assume is true for all of us.
Personally, I agree with Said on a number of aspects—a reader’s reaction to a topic, or to a poster, is not sufficient reason to do anything. This is especially true when the reader’s reaction is primarily based on non-LW information. I DISAGREE that this makes all discussion fair game, as long as it’s got a robe of abstraction which allows deniability that it relates to the painful topic.
I don’t know that I’ve seen anyone besides me claim that the abstraction seems too thin. It would take a discussion of when it applies and when it does not to get me to ignore my (limited) understanding of the participants’ positions on the related-but-not-on-LW topic.
Generally, if you want to talk about how LW is moderated or unpleasant behavior happening here, you should talk to me. [If you think I’m making mistakes, the person to talk to is probably Habryka.] We don’t have an official ombudsman, and perhaps it’s worth putting some effort into finding one.
This information should be publicly findable. And ideally anonymous information about reports received should also be published.
Alright, thank you!
What do you mean by ‘the community’s “discussion space”’? Are you referring to Less Wrong? Or something else?
I mean, the sum total of spaces that the rationalist community uses to hold discussions, propagate information, do collective decision making, (presumably) provide mutual support et cetera, to the extent these spaces are effective in fulfilling their functions. Anywhere where I can say something and people in the community will listen to me, and take this new information into account if it’s worth taking into account, or at least provide me with compassionate feedback even if it’s not.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
But, of course, things are what they are. Still, it is hardly any of my business, as a participant of Less Wrong, what discussions you have elsewhere, on some other forum. Why should it be?
Of course, it would be quite beyond the pale if the outcomes of those discussions were used in deciding (by those who have the authority to decide these things—basically, I mean the admins of Less Wrong) how to treat someone here!
In short, I am saying: in other places, discuss whatever you want to discuss (assuming your discussions are appropriate thereto… but, in any case—not my business). None of that should affect any discussions here. “I propose to treat <Less Wrong participant X> in such-and-such a way—why? because he said or did so-and-so, in another place entirely”—this ought not be acceptable or tolerated.
Firstly, I have always said (and this incident has once again reinforced my view of this) that “we”, which is to say “rationalists”, should not be a “community”.
Well, that is a legitimate opinion. I just want to point out that it did not appear to be the consensus so far. If it is the consensus (or becomes such), then it seems fair to ask that this be made clear, in particular to inform people’s decisions about how and whether to interact with the forum.
I think it is fairly clear that it’s not the consensus; I alluded to this in my comment (perhaps too obliquely?).
The rest of my comment should be read with the understanding that I’m aware of the above fact.
I won’t go so far as to say there should be no community, but I do believe that it (or they; there are likely lots of involved communities of rationalists) is not synonymous with LessWrong. There is overlap in topics discussed, but there are good LW topics that are irrelevant to some or all communities, and there are LOTS of community topics that don’t do well on LW.
And that includes topics that, in a vacuum, would be appropriate to LW, but are deeply related to topics in a community which are NOT good for LW. Sorry, but that entanglement of ideas makes it impossible to discuss rationally in a large group.
The dispute in question isn’t about epistemology but ontology, and I think it’s worth keeping the two apart mentally; but your general point still stands.
I think it needs clarification. It’s clearly vague enough that it’s not a valid reason by itself. However, it is reasonable to think that part of the “bad vibe” is of the kind that makes political meshing bad, while part of it could be legitimately relevant.
For example, the worry could be that constantly mentioning a specific point plays on “mere exposure”, where just being exposed to a viewpoint increases one’s belief in it without actual argumentation for it. Zack_M_Davis could then argue that the posting doesn’t give the viewpoint any more exposure than it would have gotten by legitimate means.
But we can’t go that far, because there is no clear picture of what the worry is, and unpacking the whole context would probably derail into the political point or otherwise be out of scope for epistemology.
For example, if some crazy scientist, say a Nazi scientist, were burning people (I am assuming that burning people is ethically very bad) to see what happens, I would probably want to make sure that the results he produces contain actual reusable information. Yet I would probably vote against burning people. If I confined myself to the epistemological sphere, I might advise that larger sample sizes lead to more reliable results. However, being acutely aware that the trivial way to increase the sample size would lead to significant activity I oppose (i.e., my advice burns more people), I would probably think a little harder about whether there is a lives-spent-efficient way to get reliability. Sure, refusing any cooperation ensures that I don’t cause any burned people. But it is likely that, left to their own devices, they would end up burning more people than if they were supplied with basic statistics and told how to get maximum data from each trial. On one hand, value is fragile, and small epistemology improvements might correspond to big dips in average well-being. On the other hand, taking the ethical dimension effectively into account will seemingly “corrupt” the cold-hearted data processing; from a lives-saved-ambivalent viewpoint, those nudges are needless inefficiencies, “errors”. Now, I don’t know whether the worry in this case is that big, but I would in general be interested in when small linkages are likely to have big impacts. I guess from a pure epistemological viewpoint it would be “value chaoticness”, where small formulation differences have big or unpredictable implications for values.
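The statistical claim above, that larger sample sizes lead to more reliable results, can be made concrete with a quick simulation (a hypothetical sketch, not part of the original discussion): the standard error of a sample mean shrinks only as the square root of the sample size, which is why squeezing maximum data out of each trial can matter more than adding trials.

```python
import random
import statistics

random.seed(0)

def standard_error(sample_size, trials=2000):
    """Empirically estimate the standard error of the sample mean of a
    unit-variance Gaussian, by running many simulated experiments and
    taking the spread of the resulting means."""
    means = [
        statistics.fmean(random.gauss(0, 1) for _ in range(sample_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

# Reliability improves only as the square root of the sample size:
# quadrupling n roughly halves the standard error (theory: 1 / sqrt(n)).
for n in (25, 100, 400):
    print(n, round(standard_error(n), 3))
```

So a fourfold increase in cost buys only a twofold gain in precision, which is exactly the kind of trade-off the paragraph above is weighing.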