If you don’t want to leave public traces, others must assume that we wouldn’t like what we saw if the traces were public.
No, others could be a bit more charitable than that. Looking back at the very few comments I would have considered deleting, I would use it exclusively to remove low-effort comments that could reasonably be interpreted as efforts to derail the conversation into demon threads.
Consider the possible reasons why you, as the OP, would not want a comment to appear in the comments section of your post. These fall, I think, into two broad categories:
Category 1: Comments that are undesirable because having other people respond to them is undesirable.
Category 2: Comments that are undesirable because having people read them is undesirable (regardless of whether anyone responds to them).
Category 1 (henceforth, C1) includes things like what you just described. Trolling (comments designed to make people angry or upset and thus provoke responses), off-topic comments (which divert attention and effort to threads that have nothing to do with what you want to discuss), low-effort, malicious, or intentionally controversial comments that are likely to spawn “demon threads”, pedantry, nitpicking, nerdsniping, and similar things all fall into this category as well.
Category 2 (henceforth, C2) is quite different. There, the problem is not the fact that the comment provokes responses (although it certainly might); the problem is that the actual content of the comment is something which you prefer people not to see. This can include everything from doxxing to descriptions of graphic violence to the most banal sorts of spam (CHEAP SOCKS VERY GOOD PRICE) to things which are outright illegal (links to distribution of protected IP, explicit incitement to violence, etc.).
And, importantly, C2 also includes things like criticism of your ideas (or of you!), comments which mention things about you that paint you in a bad light (such as information about conflicts of interest), and any number of similar things.
It should be clear from this description that Category 2 cleaves neatly into two subtypes (let’s call them C2a and C2b), the key distinction between which is this: for comments in C2a, you (the OP) do not want people to read them, and readers themselves also do not want to read them; your interests and those of your readers are aligned. But for comments in C2b, you—even more so than for C2a!—don’t want people to read them… but readers may (indeed, almost certainly do) feel very differently; your interests and those of your readers are at odds.
It seems clear to me that these three types of comments require three different approaches to handling them.
For comments in C1 (those which are undesirable because it’s undesirable for people to respond to them), it does not seem necessary to delete them at all! In fact, they need not even be hidden; simply disable responses to the comment (placing an appropriate flag or moderator note on it; and, ideally, an explanation). I believe LW2 already includes this capability.
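(As a concrete illustration of the “lock replies, leave the comment visible” idea, here is a minimal sketch in TypeScript. All type and function names below are hypothetical placeholders, not the actual LW2 schema or API.)

```typescript
// Hypothetical sketch of "disable responses instead of deleting" (not LW2's real schema).
interface ForumComment {
  id: string;
  author: string;
  body: string;
  repliesLocked: boolean;
  moderatorNote?: string; // shown alongside the comment, e.g. an explanation of the lock
}

// The comment remains visible and readable; only the ability to respond is removed.
function lockReplies(comment: ForumComment, note: string): ForumComment {
  return { ...comment, repliesLocked: true, moderatorNote: note };
}

function canReply(comment: ForumComment): boolean {
  return !comment.repliesLocked;
}
```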
Comments in C2a (those which are undesirable because you do not want people to read them, and readers also have no desire to read them) clearly need to be hidden, at the least; by construction, anything less fails to solve the problem. Should they be entirely deleted, however? Well, read on.
Comments in C2b (those which are undesirable to you because you prefer that people not see them, but which may be quite desirable indeed to your readers)… well, this is the crux of the matter. It’s a very dubious proposition, to say that such comments are a problem in the first place. Indeed, I’d claim the opposite is true. Of course, you (the OP) might very much like to delete them without a trace—if you are dishonest, and lacking in integrity! But your readers don’t want you to be able to delete them tracelessly; and it seems obvious to me that the admins of any forum which aims to foster honest seeking after truth, should be on the readers’ side in such cases.
Now let’s go back to comments of type C2a. Should they be entirely deleted, without a trace? No, and here’s why: if you delete a comment, then that is evidence for that comment having been in C2b; certainly it casts a shadow of suspicion on whoever deleted it. Is that really what you want? Is an atmosphere of mistrust, of uncertainty, really what we want to foster? It seems a wholly undesirable side effect of merely wanting to protect your readers from things that they themselves don’t wish to see! Much better simply to hide the comments (in some suitably unobtrusive way—I won’t enumerate possible implementations here, but they are legion). That way, anyone who wishes to assure themselves of your integrity can easily do so, while readers are still spared from having to view spam and other junk text.
My primary reason for wanting to remove the trace is that there are characters on the internet so undesirable that I don’t want to be reminded of their existence every time I scroll through my comments section, and I certainly don’t want their names to be associated with my content. Thankfully, I have yet to receive any comments anywhere close to this level on LW, but a quick browse through the bans section of SlateStarCodex will show you that such people exist.
I would be in favor of a trace if it were in a moderation log that does not show up in the comment thread itself.
Wouldn’t someone just make a client or mirror like greaterwrong that uses the moderation log to unhide the moderation?
This is a valid concern, one I would definitely like to respond to. I obviously can’t speak for anyone else who might develop another third-party client for LW2, but as far as GreaterWrong goes—saturn and I have discussed this issue. We don’t feel that it would be our place to do what you describe, as it would violate the LW2 team’s prerogative to make decisions on how to set up and run the community. We’re not trying to undermine them; we’re providing something that (hopefully) helps them, and everyone who uses LW2, by giving members of the community more options for how to interact with it. So you shouldn’t expect to see GW add features like what you describe (i.e. those that would effectively undo the moderation actions of the LW2 team, for any users of GW).
They might. But that would unhide it only for them. For most undesirable comments, the point of deleting them is to keep them out of everyone’s face, and that’s perfectly compatible with there being other ways of viewing the content on LW that reinstate the comments.
What fraction of users who want the ability to delete comments without trace would be satisfied with that, I don’t know.
(A moderation log wouldn’t necessarily contain the full text of deleted comments, anyway, so restoring them might not be possible.)
Yeah, I wasn’t thinking of showing the full text of deleted comments, but just a log of its deletion. This is also how lobste.rs does it.
You’re right about lobste.rs, but in this case I would strongly suggest that you do show the full text of deleted comments in the moderation log. Hide them behind a disclosure widget if you like. But it is tremendously valuable, for transparency purposes, to have the data be available. It is a technically insignificant change, and it serves all the same purposes (the offending comment need not appear in the thread; it need not even appear by default in the log—hence the disclosure widget); but what you gain is very nearly absolute immunity to accusations of malfeasance, to suspicion-mongering, and to all the related sorts of things that can be so corrosive to an internet community.
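(To make the suggestion concrete, here is a rough sketch of a moderation-log entry that keeps the deleted text available but collapsed by default, using an HTML `<details>` element as the disclosure widget. The names and structure are hypothetical, not how LW2 or lobste.rs actually store their logs.)

```typescript
// Hypothetical moderation-log entry that retains the deleted text but keeps it collapsed.
interface ModLogEntry {
  commentId: string;
  deletedBy: string;
  deletedAt: string;           // ISO timestamp
  reason: string;
  originalBody: string | null; // null only if the text itself must be withheld
}

// Render the entry behind a disclosure widget, so the text does not appear by default.
// NOTE: a real implementation would HTML-escape these values.
function renderModLogEntry(entry: ModLogEntry): string {
  const body = entry.originalBody ?? "[text withheld]";
  return [
    "<details>",
    `  <summary>Comment ${entry.commentId} deleted by ${entry.deletedBy}: ${entry.reason}</summary>`,
    `  <blockquote>${body}</blockquote>`,
    "</details>",
  ].join("\n");
}
```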
Hmm, so the big thing I am worried about is the Streisand effect, with deleted content ending up getting more attention than normal content (which I expect is the primary reason why lobste.rs does not show the original content).
Sometimes you also delete things because they reveal information that should not be public (such as doxing and similar things) and in those situations we obviously still want the option of deleting it without showing the original content.
This might be solvable by making the content of the deleted comments available only to people who have an account, or who are above a certain level of karma, or by making it hard to link to individual entries in the moderation log (though that seems like it destroys a bunch of the purpose of the moderation log).
Currently, I would feel uncomfortable having the content of the old comments be easily available, simply because I expect that people will inevitably start paying more attention to the deleted content section than the average comment with 0 karma, completely defeating the purpose of reducing the amount of attention and influence bad content has.
The world where everyone can see the moderation log, but only people above a certain karma threshold can see the content, seems most reasonable to me, though I still need to think about it. If the karma threshold is something like 100, then this would drastically increase the number of people who could provide information about the type of content that was deleted, while avoiding the problem of deleted content getting tons of attention.
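(A minimal sketch of how such karma-gated visibility might work. The names are hypothetical, and the threshold of 100 is just the number floated above, not an actual LW2 setting.)

```typescript
// Hypothetical sketch: everyone can see that a deletion happened; only viewers above
// a karma threshold can see what was deleted.
const KARMA_THRESHOLD = 100;

interface Viewer {
  loggedIn: boolean;
  karma: number;
}

interface DeletionLogEntry {
  commentId: string;
  deletedAt: string;
  reason: string;
  originalBody: string | null;
}

function viewOfEntry(entry: DeletionLogEntry, viewer: Viewer): DeletionLogEntry {
  const canSeeContent = viewer.loggedIn && viewer.karma >= KARMA_THRESHOLD;
  // The metadata stays public; the content is gated behind the threshold.
  return canSeeContent ? entry : { ...entry, originalBody: null };
}
```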
Hmm, so the big thing I am worried about is the Streisand effect, with deleted content ending up getting more attention than normal content (which I expect is the primary reason why lobste.rs does not show the original content).
This view seems to imply some deeply worrying things about what comments you expect to see deleted—and that you endorse being deleted! Consider again my taxonomy of comments that someone might want gone. What you say applies, it seems to me, either to comments of type C1 (comments whose chief vice is that they provoke responses, but have little or no intrinsic value), or to comments of type C2b (criticism of the OP, disagreement, relevant but embarrassing-to-the-author observations, etc.).
The former sort of comment is unlikely to provoke a response if it is in the moderation log and not in the thread. No one will go and dig a piece of pedantry or nitpickery out of the mod-log just to respond to it. Clearly, such comments will not be problematic.
But the latter sort of comment… the latter sort of comment is exactly the type of comment which it should be shameful to delete; the deletion of which reflects poorly on an author; and to whose deletion, attention absolutely should be paid! It is right and proper that such comments, if removed, should attract even more attention than if they remain unmolested. Indeed, if the Streisand effect occurs in such a case, then the moderation log is doing precisely that which it is meant to do.
Sometimes you also delete things because they reveal information that should not be public (such as doxing and similar things) and in those situations we obviously still want the option of deleting it without showing the original content.
This category of comment ought not meaningfully inform your overall design of the moderation log feature, as there is a simple way to deal with such cases that doesn’t affect anything else:
Treat it like any other deleted comment, but instead of showing the text of the comment in the mod-log, display a message (styled and labeled so as to clearly indicate its nature—perhaps in bold red, etc.) to the effect of “The text of this comment has been removed, as it contained non-public information / doxxing / etc.”. (If you were inclined to go above and beyond in your dedication to transparency, you might even censor only part of the offending comment—after all, this approach is good enough for our government’s intelligence organizations… surely it’s good enough for a public discussion forum? ;)
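(Sketched in similarly hypothetical terms: the log entry remains, but its text is replaced by a clearly labeled notice. The type names and the wording of the notice are placeholders, not an actual implementation.)

```typescript
// Hypothetical sketch: deleting genuinely non-public material still leaves a log entry,
// but the entry's text is replaced by a labeled notice rather than shown.
interface RedactableEntry {
  commentId: string;
  reason: string;
  originalBody: string | null;
}

function redact(entry: RedactableEntry, category: string): RedactableEntry {
  return {
    ...entry,
    originalBody: null,
    reason: `The text of this comment has been removed, as it contained ${category}.`,
  };
}

// Example: redact(entry, "non-public personal information")
```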
The world where everyone can see the moderation log, but only people above a certain karma threshold can see the content, seems most reasonable to me, though I still need to think about it. If the karma threshold is something like 100, then this would drastically increase the number of people who could provide information about the type of content that was deleted, while avoiding the problem of deleted content getting tons of attention.
This is certainly not the worst solution in the world. If this is the price to be paid for having the text of comments be visible, then I endorse this approach (though of course it is still an unfortunate barrier, for the reasons I outline above).
Whoever provides a mirror would only need the cooperation of some user with 100 karma to circumvent that restriction. Unless you log which users viewed which deleted posts, and track which deleted posts have been published. Then the mirror might become a trading hub where you provide content from deleted posts in exchange for finding out content from other deleted posts. And at some point money might enter into it, incentivizing karma farms.
Others could, if they are unwise. But they should not. There is no shame in deleting low-effort comments and so no reason to hide the traces of doing so. There is shame in deleting comments for less prosocial reasons, and therefore a reason to hide the traces.
The fact that you desire to hide the traces is evidence that the traces being hidden are of the type it is shameful to create.
I agree that the desire to hide traces is evidence that they’re of the shameful type, but that’s simply not my motivation:
The primary reasons I want comments at all are (a) to get valuable corrective feedback and discussion, and (b) to have motivation and positive reinforcement to continue writing frequently. There are comments that provide negligible-to-negative amounts of (a), and even leaving a trace of them stands a serious chance of fucking with (b) when I scroll past in the future. These I would like to delete without trace.
Now I would like to have a discussion about whether a negative reaction to seeing even traces of the comments of trolls is a rational aversion to have, but I know I currently have it and would guess that most other writers do as well.
Can’t you just use AdBlock to hide such comments from your browser?
I agree that the desire to hide traces is evidence that they’re of the shameful type, but that’s simply not my motivation
Irrelevant. Stated motivation is cheap talk; it is not reliable even introspectively, let alone coming from someone else.
Or, in more detail:
1) Unchecked, this capability being misused will create echo chambers.
2) There is a social incentive to misuse it; lack of dissent increases perceived legitimacy and thus status.
3) Where social incentives to do a thing for personal benefit exist, basic social instincts push people to do that thing for personal benefit.
4) These instincts operate at a level below and before conscious verbalization.
5) The mind’s justifier will, if feasible, throw up more palatable reasons why you are taking the action.
6) So even if you believe yourself to be using an action for good reasons, if there is a social incentive to be misusing it, you are very likely misusing it a significant fraction of the time.
7) Even doing this a fraction of the time will create an echo chamber.
8) For good group epistemics, preventing the descent into echo chambers is of utmost importance.
9) Therefore no given reason can be an acceptable reason.
10) Therefore this capability should not exist.
I think you are seriously missing the point of the concerns that PDV is (and that I am) raising, if you respond by saying “but I don’t plan to use traceless deletion for the bad reason you fear!”.
Do I really need to enumerate the reasons why this is so? I mean, I will if asked, but every time I see this sort of really very frustrating naïveté, I get a bit more pessimistic…
This seems to be missing the point of Alkjash’s comment, though. I don’t think Alkjash is missing the concerns you and PDV have.
PDV said “others can only assume that we wouldn’t like what we saw if the traces were public.” This sounded to me like PDV could only imagine one reason why someone might delete a comment with no trace. Alkjash provided another possible reason. (FYI, I can list more).
(If PDV was saying “it’s strategically advisable to assume the worst reason”, that’s… plausible, and would lead me to respond differently.)
FYI, I agree with most of your suggested solutions, but think you’re only looking at one set of costs and ignoring others.
Making it easier to get away with bad behavior is bad in itself, because it reduces trust and increases the bad behavior’s payoff, even if no bad behavior was occurring before. It’s also corrosive to any norm that exists against the bad behavior, because “everyone’s getting away with this except me” becomes a plausible hypothesis whether or not anyone actually is.
I interpret PDV’s comments as an attempt to implicitly call attention to these problems, but I think explicitly spelling them out would be more likely to be well-received on this particular forum.
It is strategically necessary to assume that social incentives are the true reason, because social incentives disguise themselves as any acceptable reason, and the corrosive effect of social incentives is the Hamming Problem for group epistemics. (I went into more detail here.)
I don’t think Alkjash is missing the concerns you and PDV have.
Then his comments are simply non-responsive to what I and PDV have said, and make little to no sense as replies to either of our comments. I assumed (as I usually do) compliance with the maxim of relation.
FYI, I agree with most of your suggested solutions, but think you’re only looking at one set of costs and ignoring others.
Indeed I am, and for good reason: the cost I speak of is one which utterly dwarfs all others.
PDV said “others can only assume that we wouldn’t like what we saw if the traces were public.” This sounded to me like PDV could only imagine one reason why someone might delete a comment with no trace. Alkjash provided another possible reason. (FYI, I can list more).
I think here I’m going to say “plausible deniability” and “appearance of impropriety” and hope that those keywords get my point across. If not, then I’m afraid I’ll have to bow out of this for now.
Indeed I am, and for good reason: the cost I speak of is one which utterly dwarfs all others.
This is a claim that requires justification, not bald assertion—especially in this kind of thread, where you are essentially implying that anyone who disagrees with you must be either stupid or malicious. Needless to say, this implication is not likely to make the conversation go anywhere positive. (In fact, this is a prime example of a comment that I might delete were it to show up on my personal blog—not because of its content, but because of the way in which that content is presented.)
Issues with tone aside, the quoted statement strongly suggests to me that you have not made a genuine effort to consider the other side of the argument. Not to sound rude, but I suspect that if you were to attempt an Ideological Turing Test of alkjash’s position, you would not in fact succeed at producing a response indistinguishable from the genuine article. In all charity, this is likely due to differences of internal experience; I’m given to understand that some people are extremely sensitive to status-y language, while others seem blind to it entirely, and it seems likely to me (based on what I’ve seen of your posts) that you fall into the latter category. In no way does this obviate the existence or the needs of the former category, however, and I find your claim that said needs are “dwarfed” by the concerns most salient to you extremely irritating.
Footnote: Since feeling irritation is obviously not a good sign, I debated with myself for a while about whether to post this comment. I decided ultimately to do so, but I probably won’t be engaging further in this thread, so as to minimize the likelihood of it devolving into a demon thread. (It’s possible that it’s already too late, however.)