Case study: A simple algorithm for fixing motivation
So here I was, trying to read through an online course to learn about cloud computing, but I wasn’t really absorbing any of it. No motivation.
Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.
So I traversed the chain to see which link was broken.
Why was I doing the online course? Because I want to become better at my job.
Do I still think doing the online course will make me better at my job? Yes I do.
Do I want to get better at my job? Nah, doesn’t spark joy.
Why do I want to get better at my job? Because I want to get promoted.
Do I still think doing better will make me get promoted? Yes I do.
Do I want to get promoted? Nah, doesn’t spark joy.
Why do I want to get promoted? Because (among other things) I want more influence on my environment, for example by having more money.
Do I still think promotion will give me more influence? Yes I do.
Do I want more influence? Nah.
Why do I want more influence (via money)? Because (among other things) I want to buy a house, host meetups, and live with close friends at the center of a vibrant community that helps people.
Do I think more money will get me this house? Yes I do.
Do I want to live with close friends at the center of a vibrant community that helps people? Well, usually yes, but today I kind of just want to go to the beach with my gf, and decompress.
Well okay, but most days you do want this thing.
Shit, you’re right. I do want to do this online course.
And motivation was restored. Suddenly I felt invigorated: to do the course, and to write this post.
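The traversal above can be sketched in code. This is a minimal sketch, not anything rigorous: it assumes the motive chain is a list of goals, and the two questions asked at each link are supplied as hypothetical predicates `believe` and `want`.

```python
def repair_motivation(chain, believe, want):
    """Walk a chain of motives from the immediate task toward the
    terminal goal, and report the first link that explains the result.

    chain      -- list of goals, ordered from immediate action to terminal goal
    believe(a, b) -- do I still believe doing a leads to b?
    want(goal)    -- does this goal spark motivation on its own?
    """
    for lower, higher in zip(chain, chain[1:]):
        if not believe(lower, higher):
            # The instrumental link itself is broken.
            return f"broken belief: {lower!r} no longer seems to lead to {higher!r}"
        if want(higher):
            # A goal that sparks joy on its own re-energizes everything below it.
            return f"chain intact up to {higher!r}; motivation restored"
    return "reached the terminal goal without finding a live link"
```

Run against the chain from this post, with stub predicates standing in for the real System 1 checks:

```python
chain = [
    "do the course",
    "get better at my job",
    "get promoted",
    "more influence",
    "live in a vibrant community",
]
print(repair_motivation(
    chain,
    believe=lambda a, b: True,
    want=lambda g: g == "live in a vibrant community",
))
```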
Question for the Kegan levels folks: I’ve noticed that I tend to regress to level 3 if I enter new environments that I don’t fully understand yet, and that this tends to cause mental issues because I don’t always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?
As someone who never came across religion before adulthood, I’ve been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there’s gotta be some value in there.
The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims: claims about the mind. When Buddhists talk about the six worlds, they are talking about six states of mood. When Christians talk about a covenant with God, they are talking about sticking to some kind of mindset no matter what.
Back when this stuff was written, people didn’t seem to distinguish between objective reality and subjective experience. The former is a modern invention. Either that, or this nuance has been lost in translation over the centuries.
As for being on ibogaine: a high dose isn’t fun for sure, but microdoses are close to neutral, and their therapeutic value makes them net positive.
Have you tried opiates? You don’t need to be in pain for them to make you feel great.
Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions who were cured within one session.
If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.
Of course, that’s only if there are no unforeseen caveats. Still, why isn’t everybody talking about this?
Did Dominic Cummings in fact try a “Less Wrong approach” to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)
I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I’ve found useful extracts everywhere.
And now I’m alone. I don’t fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I’ve lost motivation for deep friendships; they just don’t seem compatible with learning new things about the world. That sense of belonging I got from LessWrong is gone too. There are a few things that LW/EA just don’t understand well enough, and I haven’t been able to get them across.
I don’t think I can bridge this gap. Even if I can put things to words, they’re too provisional and complicated to be worth delving into. Most of it isn’t directly actionable. I can’t really prove things yet.
I’ve considered going back. Is lonely dissent worth it? Is there an end to this tunnel?
I don’t recall; this is one of those concepts that you kind of assemble out of a bunch of conversations with people who already presuppose it.
Here’s another: probing into their argument structure a bit and checking if they can keep it from collapsing under its own weight.
Probably the skill of discerning skill would be easier to learn than… every single skill you’re trying to discern.
The outgroup is evil, not negotiating in good faith, and it’s an error to give them an inch. Conflict theory is the correct one for this decision.
Which outgroup? Which decision? Are you saying this is universally true?
Forgive me for stating things more strongly than I mean them. It’s a bad habit of mine.
I’m coming from the assumption that people are much more like Vulcans than we give them credit for. Feelings are optimizers. People who do things that aren’t in line with their stated goals aren’t always biased. In many cases they misstate their goals but don’t actually fail to achieve them.
See my last shortform for more on this.
So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true: we have subagents that are more like the latter, and subsystems more like the former.
But the emphasis on “we’re just a bunch of hardcoded heuristics” is making us stop looking for agency where there is in fact agency. Take romantic feelings, for example. People tend to regard them as completely unpredictable, but it is actually possible to predict, to some extent, whether you’ll fall in or out of love with someone based on criteria like whether they’re compatible with your self-narrative and whether their opinions and interests align with yours. The same is true for many intuitions that we tend to dismiss as just “my brain” or “neurotransmitter xyz” or “some knee-jerk reaction”.
There tends to be a layer of agency in these things. A set of conditions that makes these things fire off, or not fire off. If we want to influence them, we should be looking for the levers, instead of just accepting these things as a given.
So sure, we’re godshatter, but the shards are larger than we give them credit for.