Here is the problem with people saying that something you do is complete garbage. Even when I consciously know that what I’m doing is good, and can think about all the reasons why it is good, there is some algorithm in my brain that sends a reinforcement signal, not directly controllable by me, when somebody says that what I am doing is just completely terrible.
I think sending these kinds of reinforcement signals is very bad, because when you send them often enough, they make you not want to work on something anymore, even when you consciously think it is the best thing you could be doing, simply because these signals have such a strong impact on your mind. An impact that cannot be overwritten or removed (or at least not easily). It is very hard to ignore these signals initially, which is exactly when you need to ignore them in order to blunt their strong negative impact. You basically need to be in a mindset of “I don’t give a fuck what you think” in order for them not to affect you. At least that is the most effective way that I have discovered so far. But this has other negative side effects, like making you more likely to ignore the good things the other person says.
It also seems like, to a significant extent, it matters how you say something. You can say something in a demeaning way that puts yourself above the other person, which is not what you want to do. Better to say it with a very cold, philosopher-like demeanor. Really, I think one of the superpowers philosophers have is that they usually get trained to talk in a way where you’re not talking about any person anymore: whatever is being said reflects not on any person, but only on the content under discussion.
I would like my mind to be such that anybody could just say whatever they think is best for maximizing information flow and I could just handle that information appropriately, but it seems like I’m not able to do this. I think I’m pretty good at it, but I think I’m not so good that it makes sense for me to request you to just optimize the information flow. I would like you to optimize for information flow, but also for saying things in a way that doesn’t trigger this reinforcement circuitry, which I think is very bad.
I think in future conversations I’d like people to say “P9” instead of “you” or “Johannes”, where P9 means the computer that is Johannes’ brain and all the algorithms/processes that run on it and have run on it. Now we have removed the ‘I’ from the equation, and it seems that, in principle, no matter what you say with regard to P9, it should not make me feel bad. I have used this technique to some limited extent in the past, and there it worked pretty well.
Another thing that might be useful to try is to use the meditation technique of dissolving the self, and then see how the feedback is taken, and whether the qualia of negativity still arises.
I have talked to many people who said they subscribe to Crocker’s rules, and I think I noticed this exact phenomenon, possibly every time. Sometimes it was so strong that they literally wanted to stop talking about the particular topic being discussed, when I was just being really straightforward, in a way that was too harsh, about what I think. I strongly recommend that these people stop saying they subscribe to the standard version of Crocker’s rules, because it clearly has a negative impact on them, giving them reinforcement signals that make them not want to think anymore about particular topics, which seems extremely bad.
I would like my mind to be such that anybody could just say whatever they think is best for maximizing information flow and I could just handle that information appropriately, but it seems like I’m not able to do this.
I think this is not realistic to achieve (although partial success can be achieved).
What I would recommend instead is to separate “honest feedback” from “emotional support”—and to have a nonzero amount of the latter. Not sure what the proper social ritual to achieve this would be.
Fwiw, you’re on my shortlist of researchers whose potential I’m most excited about. I don’t expect my judgment to matter to you (or maybe up to one jot), but I mention it just in case it helps defend against the self-doubt you experience as a result of doing things differently. : )
I don’t know many researchers that well, but I try to find the ones that are sufficiently unusual-in-a-specific-way to make me feel hopefwl about them. And the stuff you write here reflects exactly the unusualness that makes me hopefwl: you actually think inside your own head.
Also, wrt defending against negative social reinforcement signals, it may be sort of epistemically-irrational, but I reinterpret [people disagreeing with me] as positive evidence that I’m just far ahead of them (something I actually believe). Notice how, when a lot of people tell you you’re wrong, that is evidence for both [you are wrong] and [you are so much righter than them that they are unable to recognise how you are right (eg they lack the precursor concepts)].
Also, if you expect [competence at world-saving] to be lognormally distributed (or otherwise heavy-tailed), you should expect to find large gaps between the competence of the most competent people, simply because the tail flattens out in relative terms the further out you go. In other words, P(you’re Δ more competent than avg) gets closer to P(you’re Δ+1 more competent than avg) as you increase Δ. (For a thin-tailed normal distribution this ratio actually shrinks as Δ grows, so the argument leans on the heavy-tailed case.) This is one way to justify treating [other people not paying attention to you] as evidence for [you’re in a more advanced realm of conversation], but it’s far from the main consideration.
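This tail behaviour is easy to check numerically with just the standard library (a sketch; the standard normal and standard lognormal are arbitrary illustrative choices). The ratio P(X > Δ+1)/P(X > Δ) shrinks toward 0 for the normal but climbs toward 1 for the lognormal, which is the "flattening tail" in question:

```python
import math

def normal_sf(x):
    # Survival function P(X > x) for a standard normal, via the
    # complementary error function: P(X > x) = erfc(x / sqrt(2)) / 2.
    return 0.5 * math.erfc(x / math.sqrt(2))

def lognormal_sf(x):
    # Survival function for a standard lognormal (ln X ~ N(0, 1)):
    # P(X > x) = P(ln X > ln x).
    return normal_sf(math.log(x))

# Normal tail: the ratio P(X > d+1)/P(X > d) keeps shrinking.
for d in (2, 5, 8):
    print("normal   ", d, normal_sf(d + 1) / normal_sf(d))

# Lognormal tail: the same ratio creeps up toward 1.
for d in (10, 100, 1000):
    print("lognormal", d, lognormal_sf(d + 1) / lognormal_sf(d))
```

So under a lognormal model, being at Δ really is nearly as probable as being at Δ+1 once Δ is large.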
I invite you to meditate on this Mathematical Diagram I made! I believe that your behaviour (wrt the dimension of consequentialist world-saving) is so far to the right on this curve that most of your peers will think your competence is far below theirs, unless they patiently have multiple conversations with you. That is, most people’s deference limit is far to the left of your true competence.
I’m now going to further destroy the vibes of this comment by saying “poop!” If someone notices themselves, in their head, devaluing the wisdom of what I previously wrote merely based on the silly vibes, their cognition is out of whack and they need to see a mechanic. This seems to be a decent litmus test for whether ppl have actual sensors for evidence/gears, or whether they’re just doing (advanced) vibes-based pattern-matching. :P
Can you explain why you use “hopefwl” instead of “hopeful”? I’ve seen this multiple times, in multiple places, by multiple people, but I do not understand the reasoning behind it. This is not a typo; it is a deliberate design decision by some people in the rationality community. Can you please help me understand?
You have permission to steal my work & clone my generating function. Liberate my vision from its original prison. Obsolescence is victory. I yearn to be surpassed. Don’t credit me if it’s more efficient or better aesthetics to not. Forget my name before letting it be dead weight.
This seems to be a decent litmus test for whether ppl have actual sensors for evidence/gears, or whether they’re just doing (advanced) vibes-based pattern-matching.
If only. Advanced vibes-based pattern-matching is useful when your pattern-matching algorithm is optimized for the distribution you are acting in.
but u don’t know which distribution(s) u are acting in. u only have access to a sample dist, so u are going to underestimate the variance unless u ~Bessel-correct[1] ur intuitions. and it matters which parts of the dists u tune ur sensors for: do u care more abt sensitivity/specificity wrt the median cluster, or sensitivity/specificity wrt the outliers?
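The underestimation point can be illustrated with a small stdlib simulation (a sketch; the sample size n=5, true variance 1.0, and trial count are arbitrary choices). The raw sample variance (dividing by n) is biased low by a factor of (n−1)/n, while Bessel's correction (dividing by n−1) removes that bias:

```python
import random
import statistics

random.seed(0)
true_var = 1.0   # samples drawn from N(0, 1)
n, trials = 5, 20000

biased, corrected = [], []
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    biased.append(statistics.pvariance(xs))    # divides by n (biased low)
    corrected.append(statistics.variance(xs))  # divides by n - 1 (Bessel)

mean_biased = statistics.fmean(biased)         # ≈ true_var * (n-1)/n = 0.8
mean_corrected = statistics.fmean(corrected)   # ≈ true_var = 1.0
print(mean_biased, mean_corrected)
```

The biased estimator averages near 0.8 here; the corrected one averages near the true 1.0, which is the sense in which sample dists understate the spread of the true dist.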
ig sufficiently advanced vibes-based pattern-matching collapses to doing causal modelling, so my real complaint is abt ppl whose vibe-sensors are under-dimensional.
So you seem to be doing top-down reasoning here, going from math to a model of the human brain. I didn’t actually have something like that in mind; I was instead doing bottom-up reasoning, where I had a bunch of experiences involving people that gave me a sense for (1) what it means to do vibes-based pattern-matching, and (2) when you should and shouldn’t trust your intuitions. I really don’t think it is that hard, actually!
Also, your RemNote link is broken, and I think it is pretty cool that you use RemNote.
Initially, I thought that your comment did not apply to me at all. I thought that most of the negative feedback I get is of the form where the feedback is correct, but it was delivered badly. But now that I think about it, it seems that most of the negative feedback I get stems from somebody not understanding what I am saying sufficiently well. This might be in large part because I fail to explain it properly.
There are definitely instances, though, where people did point out big, important holes in my reasoning. All of the people who did that were really competent, I think. And they pointed things out in such a way that I was like “Oh damn, this seems really important! I should have thought about this myself.” But I did not really get negative reinforcement from them at all. They usually pointed it out in a neutral philosopher style, where you talk about the content, not the person. Most of the negative feedback I am talking about comes when people don’t differentiate between the content and the person. You want to say “This idea does not work for reason X.” You don’t want to say “Your idea is terrible because you did not write it up well, and even if you had written it up well, it really doesn’t seem to talk about anything important.”
Interestingly, I get less and less negative feedback on the same things I do. This is probably partly a selection effect, where people who like what I do stick around. However, another major factor seems to be that because I have worked on what I do for so long, it gets easier and easier to explain. In the beginning it is very illegible, because it is mostly intuitions. And then, as you cash out the intuitions, things become more and more legible.
This is an interesting concept. I wish it became a post.
u’r encouraged to write it!
idk what the right math tricks to use are, i just wanted to mk the point that sample dists underestimate the variance of the true dists
also, oops, fixed link. upvoted ur comment bc u complimented me for using RemNote, which shows good taste.