This seems like a fairly extreme statement, so I was about to upvote due to the courage required to post it publicly and stand behind it. But then I stopped and thought about the long-term effects and decided it's probably best not to encourage this.
Ideally, you, along with the vast majority of potential readers, should become less emotionally reactive over time to any real or perceived insults, slights, etc.
If it’s the heat of the moment talking, that’s fine, but letting thoughts of payback, revenge, etc., linger on for days afterwards likely will not lead to any positive outcome.
> Ideally, you, along with the vast majority of potential readers, should become less emotionally reactive over time to any real or perceived insults, slights, etc.
> If it's the heat of the moment talking, that's fine, but letting thoughts of payback, revenge, etc., linger on for days afterwards likely will not lead to any positive outcome.
I have had these thoughts many times. I would berate myself for letting it get on my nerves so much. It was just an hour-and-a-half chat. But I don't think it's a matter of "letting" thoughts occur, or not. Certain situations are damaging to certain people, and this situation isn't a matter of whether people are encouraged to be damaged or not. (I certainly had no expectation of writing about this, back in July–October 2022.)

EDIT: Moving another part elsewhere.
I upvoted for effort, since it's clear you put quite a bit of effort into writing this comment, but skipped expressing agreement or disagreement.
I had thought of several possible responses, and it is worthy of a substantial response, but since it’s not my role to be the adjudicator or corrector of LW users, I’ll pose you this question:
Consider: is it possible for him to take offence in return and then retaliate via some means? If that does occur, what's the range of likely outcomes?
(Note that I moved most of the original comment and plan to put it elsewhere in the thread.)
> Consider: is it possible for him to take offence in return and then retaliate via some means? If that does occur, what's the range of likely outcomes?
I don’t follow. I’m not going to behave differently in the face of any possible retaliation, nor do I in fact expect Nate to retaliate in an inappropriate manner. So I’m not worried about this?
> [...] I was about to upvote due to the courage required to post it publicly and stand behind it. But then I stopped and thought about the long-term effects and decided it's probably best not to encourage this. [...] Ideally, you, along with the vast majority of potential readers, should become less emotionally reactive over time to any real or perceived insults, slights, etc.
It seems weird to single out this specific type of human limitation (compared to perfect-robot instrumental rationality) over the hundreds of others. If someone isn't in top physical shape, or cannot drive cars under difficult circumstances, or didn't renew their glasses and therefore doesn't see optimally, would you also be reluctant to upvote comments you were otherwise tempted to upvote (where they bravely disclose some limitation) because of this worry about poor incentives?

"Ideally," in a world where there's infinite time so there are no tradeoffs for spending self-improvement energy, rationalists would all be in shape, have brushed up their driving skills, have their glasses updated, etc. In reality, it's perfectly fine/rational to deprioritize many things that are "good to have" because other issues are more pressing, more immediately deserving of self-improvement energy. (Not to mention that rationality for its own sake is lame anyway, and many of us actually want to do object-level work towards a better future.) What to best focus on with self-improvement energy will differ a lot from person to person, not only because people have different strengths and weaknesses, but also because they operate in different environments. (E.g., in some environments, one has to deal with rude people all the time, whereas in others, this may be a rare occurrence.) For all these reasons, it seems weirdly patronizing to try to shape other people's prioritization for investing self-improvement energy.

This isn't to say that this site/community shouldn't have norms and corresponding virtues and vices. Since LW is about truth-seeking, it makes sense to promote virtues directly related to truth-seeking, e.g., by downvoting comments that exhibit poor epistemic practices. However, my point is that even though it might be tempting to discourage not just poor epistemic rationality but also poor instrumental rationality, these two work very differently, especially as far as optimal incentive-setting is concerned. Epistemic rationality is an ideal we can more easily enforce and get closer towards. Instrumental rationality, by contrast, is a giant jungle that people are coming into from all kinds of different directions. "Having unusually distracting emotional reactions to situations xyz" is one example of suboptimal instrumental rationality, but so is "being in poor physical shape," or "not being able to drive a car," or "not having your glasses updated," etc.

I don't think it makes sense for the community to create a hierarchy of "most important facets of instrumental rationality" that's supposed to apply equally to all kinds of people. (Instead, I think it makes more sense to reward meta-skills of instrumental rationality, such as "try to figure out what your biggest problems are and really prioritize working on them.") (If we want to pass direct judgment on someone's prioritization of self-improvement energy, we need to know their exact situation and goals, the limitations they have, how good they are at learning various things, etc.)

Not to mention the unwelcoming effects when people get judged for limitations of instrumental rationality that the community for some reason perceives to be particularly bad. Such things are always more personal (and therefore more unfair) than judging someone for having made a clear error of reasoning (epistemic rationality).
(I say all of this as though it’s indeed “very uncommon” to feel strongly hurt and lastingly affected by particularly harsh criticism. I don’t even necessarily think that this is the case: If the criticism comes from a person with high standing in a community one cares about, it seems like a potentially quite common reaction?)
> I say all of this as though it's indeed "very uncommon" to feel strongly hurt and lastingly affected by particularly harsh criticism. I don't even necessarily think that this is the case: If the criticism comes from a person with high standing in a community one cares about, it seems like a potentially quite common reaction?
This is relevant context for my strong reaction. I used to admire Nate, and so I was particularly upset when he treated me disrespectfully. (The experience wasn’t so much “criticism” as “aggression and meanness”, though.)
FWIW, I also reject the framing that this situation is reasonably understood as an issue with my own instrumental rationality.
Going back to the broader point about incentives, it’s not very rewarding to publicly share a distressing experience and thereby allow thousands of internet strangers to judge my fortitude, and complain if they think it lacking. I’m not walking away from this experience feeling lavished and reinforced for having experienced an emotional reaction.
Furthermore, the reason I spoke up was mostly not to litigate my own experience. It’s because I’ve spent months witnessing my friends take unexpected damage from a powerful individual who appears to have faced basically no consequences for his behavior.
> It seems weird to single out this specific type of human limitation (compared to perfect-robot instrumental rationality) over the hundreds of others.
This is a minor error, but I feel the need to correct it for future readers, as it's in the first sentence. There are infinitely many 'specific types' of human limitations, or at least an uncountable quantity, depending on the reader's preferred epistemology.
The rest of your thesis is interesting though a bit difficult to parse. Could you isolate a few of the key points and present them in a list?
I wasn’t the one who downvoted your reply (seems fair to ask for clarifications), but I don’t want to spend much more time on this and writing summaries isn’t my strength. Here’s a crude attempt at saying the same thing in fewer and different words:
- IMO, there's nothing particularly "antithetical to LW aims/LW culture" (edit: "antithetical to LW aims/LW culture" is not a direct quote by anyone; it's my summary interpretation of why you might be concerned about bad incentives in this case) about neuroticism-related "shortcomings." "Shortcomings" compared to a robotic ideal of perfect instrumental rationality. By "neuroticism-related 'shortcomings'", I mean things like having triggers or being unusually affected by harsh criticism.
- It's therefore weird and a bit unfair to single out such neuroticism-related "shortcomings" over things like "being in bad shape" or "not being good at common life skills like driving a car." (I'm guessing that you wouldn't be similarly concerned about setting bad incentives if someone admitted that they were bad at driving cars or weren't in the best shape.)
- I'm only guessing here, but I wonder about rationalist signalling cascades about the virtues of rationality, where it gets rewarded to be particularly critical about things that least correspond to the image of what an ideally rational robot would be like. However, in reality, applied rationality isn't about getting close to some ideal image. Instead, it's about making the best out of what you have, taking the best next move step-by-step for your specific situation, always prioritizing what actually gets you to your goals rather than prioritizing "how do I look as though I'm very rational."
Not to mention that high emotionality confers advantages in many situations and isn’t just an all-out negative. (See also TurnTrout’s comment about rejecting the framing that this is an issue of his instrumental rationality being at fault.)
I don't mind the occasional downvote or negative karma; it even has some positive benefits, such as serving a useful signalling function, as it's decent evidence I haven't tailored my comments for popularity or platitudes.
In regards to your points, I’ll only try to respond to them one at a time, since this is already pretty far down the comment chain.
> IMO, there's nothing particularly "antithetical to LW aims/LW culture" about neuroticism-related "shortcomings" (compared to a robotic ideal of perfect instrumental rationality) like having triggers or being unusually affected by harsh criticism.
Who suggested that there was a relation between being "antithetical to LW aims/LW culture" and neuroticism-related "shortcomings"?
I.e., is it supposed to be my idea, TurnTrout's, yours, a general sentiment, something from the collective unconscious, etc.?
I made an edit to my above comment to address your question; it's probably confusing that I used quotation marks for something that wasn't a direct quote by anyone.

I appreciate the edit, though can you clarify why you put so many quotes in when they are your own thoughts? Is it just an idiosyncratic writing style, or is it also meant to convey some emotion, context, direction, etc.?

Not Lukas, but I also sometimes use quotes:
- as a kind of semantic brackets; I think the official way to do this is to write-the-words-connected-by-hyphens, but that just seems hard to read;
- to remove a possible connotation, i.e. to signal that I am using the word not exactly as most people would probably use it in a similar situation;
- or as a combination of both, something like: I am using these words to express an idea, but these are probably not the right words, but I can't find any better, so please do not take this part literally and don't start nitpicking (don't assume that I used a specific word because I wanted to hint at something specific).
For example, as I understand it,
> "Shortcomings" compared to a robotic ideal of perfect instrumental rationality.
means: things that technically are shortcomings (because they deviate from some ideal), but that a reasonable person wouldn't call shortcomings (because they are normal human behavior, and I would actually be very suspicious of anyone who claimed not to have any of them), so the word is kinda correct but also kinda incorrect. But it is a way to express what I mean.
But to clarify, this is not the reason why I ‘might be concerned about bad incentives in this case’, if you were wondering.
Sounds like I misinterpreted the motivation behind your original comment!
I ran out of energy to continue this thread/conversation, but feel free to clarify what you meant for others (if you think it isn’t already clear enough for most readers).