Worried that I might already be a post-rationalist. I’m very interested in minimizing miscommunication, and helping people through the uncanny valley of rationality. Feel free to pm me about either of those things.
Hazard
Excellent post! I’ve noticed in my own life that a lot of the progress I make (in rationality or otherwise) comes when I get in the habit of asking better questions when something goes wrong. Your debugging chart offers a great diving board for doing just that. I like the flexibility it offers as well. It would be easy enough to start with a blank OODA chart and add one’s own tools as one learns them, since the placement on the chart is more a pragmatic memory-tagging aid than a deep epistemic claim.
I’d second the active listening and check for clarification idea. If I’m at the point where I’m fairly certain that this person doesn’t know what they are talking about, I stop putting effort into arguing and just see if I can learn anything about how they came to this point of view.
I have a way of talking with people where, when someone says something I disagree with and I sense they have an ego, I don’t outright disagree with them. Instead I start asking them a lot of questions about their stance, getting them to flesh out what it is they actually believe. This is much easier to do when you are still “on the same side”, since you haven’t clashed with their ego by telling them they’re wrong. In my experience, fewer biases are triggered when one is explaining something to someone they think is curious, versus someone they think is “out to get them”.
A takeaway could be to get in the habit of checking in with all parties involved to see if they have the time and mental bandwidth to really discuss the issue, and if not, to defer the conversation to a later date. This would be easier in a community like LW, but I think it could still be fluidly pulled off in various other social contexts.
I think the key to pulling it off would be to make it clear that the issue is only being deferred because you want to give it your attention and consideration, not because you are trying to avoid confrontation.
I’m very interested in that meta point you brought up. Do you know of any books or articles that attempt to comprehensively describe the “before and after” picture of people’s daily lives?
I think I followed you 75% to 80% of the way with the math. Would it be fair to say that your main point is that, because certain combinations of rewards and mappings will always produce the same set of actions, you can’t exactly know how an agent values things?
One thing I couldn’t tell if you addressed was how many compatible pairs of mappings and reward functions can exist for a given agent. In your third-to-last paragraph, you say that “it seems we can’t say anything about the human reward function,” yet if there is a finite number of compatible pairs, it seems we’ve gained at least some knowledge about what the agent might value.
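For what it’s worth, here’s a toy sketch of the point as I understand it (my own construction, not from your post; the names and setup are hypothetical): two different (planner, reward) pairs that produce identical behavior, so observing actions alone can’t distinguish them.

```python
# Toy illustration: a rational planner maximizing a reward, and an
# "anti-rational" planner minimizing the negated reward, choose the
# exact same action. Behavior alone can't tell the pairs apart.

actions = ["left", "right", "stay"]
reward = {"left": 1.0, "right": 3.0, "stay": 2.0}

def rational_planner(r):
    # picks the action with the highest reward
    return max(actions, key=lambda a: r[a])

def anti_rational_planner(r):
    # picks the action with the lowest reward
    return min(actions, key=lambda a: r[a])

neg_reward = {a: -v for a, v in reward.items()}

# Both (rational_planner, reward) and (anti_rational_planner, neg_reward)
# yield the same observable choice.
assert rational_planner(reward) == anti_rational_planner(neg_reward)
```

So even in this tiny case there are at least two compatible pairs, and the question of whether the full set of compatible pairs is finite (or usefully constrained) seems like the interesting one.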
But even more interestingly, this raises questions of which genres are valid. Or, more precisely, I’ve interrogated my gut sense of what’s lotus eating and what isn’t to draw out information about my deep implicit beliefs surrounding the meaning of life.
This bit really stuck out to me as an awesome example of how to strategically gain insights into your own beliefs and values. When I first got into rationality I did a Halt Melt and Catch Fire, and proceeded to try and reason out and choose what my values “should” be, which resulted in a jumbled mess of contradictions and dissonance.
Your question of asking “What genre do I feel like I’m living in?” leans into the idea that you already have implicit values and beliefs, and encourages you to discover what they are as opposed to telling yourself what they should be.
I really like the “even number of attempts” idea. Especially when doing practice that is a bit on the tedious side, it can be easy to just stop without realizing it. Forcing yourself to do an even number is a good hack around that.
What particular skills are you specifically working on?
Critiques:
The 7–10 hour time commitment can be really intimidating, or not even possible, for a lot of people. While I do think that long chunks of deep work and deliberate practice are the best way to get results, I’m dubious about whether such a deep dive is an approachable way to develop a new skill. Were you thinking of this as more of a technique for learning a new skill, or for doubling down on an existing one?
Also, I don’t think that your implementation directly tackles any of the roadblocks that you mentioned at the beginning of your post. It seems like an aggressively protected 10-hour practice block is effective because it forces you to keep confronting whatever roadblock you are currently at. If you’re two hours in and you feel like you’re experiencing some decision paralysis, as long as you commit to the block, there’s only so much time you can stare at the wall before your mind rage-quits and makes a choice (or at least that’s how I often experience it). Do you agree or disagree with that?
When they’ve gotten good at taking (admittedly arbitrary) intentions to return to a task seriously, and can spend a couple of hours on a hard thing knowing they can trust themselves to spend another couple hours if that’s what it takes to master it.
I think that right there is the core benefit of a several-hour committed work block: a level of moment-to-moment commitment that keeps you from just stopping when things get hard. The other things you mention are also useful tips, but they aren’t inherently baked into the practice you suggested (through any sort of checklist or workflow).
I think that a large part of this problem stems from how people think changing your opinion of someone works. An implicit belief that seems to exist in a lot of people’s minds is that when you break a commitment with someone, they can either decide to “hold it against you” or “let it go”. While there is a conscious part of your friend that is deciding whether or not your transgression was worth making a fuss over, I think that the more important change is that their mental model of you has been ever so slightly adjusted.
If you frequently show up late to meetings, even if your friends say it’s “okay”, they are still unconsciously updating their model of you to someone who isn’t reliably on time. This happens bit by bit, and is adjusted slightly each time you’re late or on time.
If your friend has slowly started blowing you off more often, and you keep saying it’s fine, you’re going to be slowly adjusting your model. At some point, the model of your friend that you use to control your anticipations will be at odds with your belief in the belief that you and your friend are “totally chill”. Then there will be one blow-off too many, snapping your belief in belief, and it will appear to your friend like it all came out of nowhere.
It seems the best way to avoid these sorts of problems would be to create common knowledge about how we actually update our opinions of each other. I’m not sure what would be a smooth way to bring that up in conversation.
I dig the area-under-the-curve analogy. I’d bet that one of the reasons it often feels so tempting to aim for that momentary Maximum Effort is because that is the time that feels satisfying and rewarding. Even when I’m making significant progress in a part of my life, unless there are very blatant indications that I’m “doing a ton of work”, it’s hard for me to really feel like progress is being made. I agree wholeheartedly that maximizing your sustainable average is the way to go, but it can be harder to milk satisfaction out of that.
I’m not sure how universal that experience is, but I’m guessing it could be behind a lot of one’s drive to max out. I’ve been working on creating some systems that help me clearly see the progress I’m making, in order to keep up morale.
From your experience, is the most damaging part of a cause based burnout the emotional fallout of putting so much importance in The Cause, and then beating yourself up about not meeting the crazy high standards you’ve made?
To me, the most effective way I’ve found to avoid getting sucked into “you aren’t doing enough!” has been to ask myself as many specific questions as possible about what exactly I “should” be doing. How much is “enough”? Am I actually capable of doing “enough”? Is “enough” just serving as an unreachable point of emotional satisfaction? In my own mind, I’ve found a few times that thoughts like “You are bad because you aren’t doing X!” didn’t really care about the X at all; they were just an excuse for me to tell myself I wasn’t good enough.
Something that’s helped me a lot has been getting a better feel for how long certain work takes me, and for how long I can reliably do focused deep work, both of which have made me a lot better at answering, “Can I also take on tasks X and Y this week? What will or won’t have to be sacrificed?” Then, if you feel like you want to be able to do more for your cause, it becomes a matter of finding ways to train and expand your capacity.
E.g. “I notice what feels like an undertone of aggression in your comment, and I notice some defensiveness in myself.”
I am really behind making this sort of communication more of a norm. There have been many times when I’ve wanted to tell someone how I felt, yet also let them know that I didn’t necessarily agree with or identify with how I was feeling, and I ended up staying silent because it all seemed like too much to explain.
Establishing common knowledge that what you feel doesn’t automatically inform what you think would go a long way to making it easier to clearly communicate.
I really like taking the “making beliefs pay rent” analogy further. Thinking of your score as “how much rent you get paid” means that even if you have a bunch of beliefs that are six months behind on their payments, having a single tenant that pays you triple the cost of rent, on time, every month, can still make your score high.
That line of thinking also opens up the idea that a belief can do worse than not paying rent: it can vandalise your apartment complex and take up a lot of your time and attention with the problems it causes.
LessWrong lionizes empiricism and science, but then never seems to produce any.
I think you hit on something really important. More things in the vein of “So, I had this idea on how to do better at X, and I tried it out and (it didn’t work at all)/(it seems promising)/(I’m still uncertain about...)” could make this a place where ideas are grown and forged.
I agree with you. Rereading my post, I do see a bit of a “rationalist apologist” vibe that I didn’t want.
I could have been clearer by emphasizing, “Here’s one particular reason why some of us were/are failing. Given that this could be your problem, here’s how to overcome that problem and continue on your path to being able to wipe the floor with the competition.”
It feels like you switch around how you use snobbery. At the beginning you give the definition “with this belief, I eliminate a class of problems other people have.”
To me, that usage doesn’t seem like it has much to do with the standard notion of snobbery, but that’s not a huge deal. Though with your tipping and Hacker News examples, it seems like you go back to using snobbery to mean holding “elitist beliefs”.
Also, it didn’t seem like you actually made a case for the general utility of holding snobbish beliefs (if that’s what you were trying to argue for). The tipping example seemed to be making a particular case for the act of tipping that wasn’t related to the fact that tipping well could be considered a classic snobbish behaviour. In the Hacker News example, it seems you were pointing out that if you need to win over people who hold free-floating snobbish beliefs, it can sometimes benefit you to shout the same slogans as them.
I think the script for that one needs two parts to work. The first is specific to this problem: conveying the belief that “people don’t automatically have access to their motives, and it’s super easy to confabulate them.” I’ve got a feeling that really getting someone to understand that point would require at least some reading on the topic. Actually, you might need to pair this one with a tangent explaining this idea.
The second ingredient seems to be a more generic one: establishing the rule that “us disagreeing with each other doesn’t mean we have to be on opposite teams.”
That second one is probably the more important part when interacting with a semi-stranger.
This is why I think one of the more useful scripts to have is the one for communicating, “I think that we should pause this argument and talk about this tangent idea for a bit, really focus on it, discuss it, and then come back to this one and see what changes.”
Even though you still bear the burden of trying to explain, you’ve at least created a space where they are giving your new idea thought, as opposed to being focused on the original issue and mostly ignoring your tangential, rationality-related idea.
I think we agree on the reality of the situation, but I’d rephrase your conclusion as, “Things that don’t seem like strong/fantastic evidence can often be so, due to how evidence relates and interacts with our background knowledge of reality.”
The way you currently phrase your last bullet point could be confusing, because you use the understanding you’ve developed in your post to say that you would be convinced by the previously mentioned evidence, yet you still refer to said evidence as “decent-but-not-fantastic”, which you would only do if you held the naive perspective that you proposed at the beginning of your post.
Mixing the two in one sentence makes things fuzzier and easier to misinterpret.