What would it look like to strive for perfection in your process of choosing how much effort to put into each process?
to address the fundamental unhappiness that comes from wanting something at all.
I think your whole comment, and this clause in particular, comes from what I refer to as a very “enlightenment-oriented” frame.
That is, the thing that matters is feeling good (or not feeling bad), and the goal is to get to that.
There’s another perspective, that I like to call the “heaven-oriented” perspective, in which the thing that matters is achieving a world where all needs are met and nourished, and the goal is to get to that.
I have heard people coming from a more heaven-oriented perspective say that people who think they just want to be happy are making a fundamental category error, and not in touch with what they actually care about.
I have heard people coming from a more enlightenment-oriented perspective say that people who think they want to achieve a state of the world are making a fundamental category error, and not in touch with what they actually care about.
My take, having worked with dozens of people, guiding their introspection into fundamental motivations, is that both of these are true for different people. My current frame is that these perspectives are more like fundamental dispositions: people lean more towards enlightenment or heaven (with some at the extremes and some at different places along the spectrum), although it gets a bit more complicated because they may lean in different directions with respect to different needs.
In general, the type of advice I’ll be giving in this sequence will tend to be more useful to heaven-oriented individuals, although I encourage people who are more enlightenment-oriented to follow along and take what they’d like from it.
The original book on the Transtheoretical Model is still my go-to resource for this. It's called "Changing for Good" by James Prochaska and Carlo DiClemente. However, it's quite a commonly used model, especially in the treatment of addiction, and there's plenty of info online, including Wikipedia, probably WebMD, etc.
Forgiveness and procrastination: This study from Wohl et al:
That plan bot is cool, but the one-week time frame seems like an odd choice. For many habits, like New Year's resolutions, I find it takes longer than a week for them to fail, so I'd recommend mentally replacing that with something like 6 months.
A few thoughts on this:
In general, I like to use the stages of change model when trying to make a change. The research basically says that if people try to change when they’re ready to change, they’ll do it the first time, but if they try to change before they’re ready, it will take multiple attempts.
For this reason, I try not to set action-based New Year's resolutions (it'd be really suspicious if all the changes I wanted to make suddenly moved into the "Action" stage on the 1st). Instead, I'll do something like a "Theme" for the year (this year it's "Full contact with reality") and then take stage-appropriate actions for that theme (thinking and reading during contemplation, planning during preparation, creating habits during action, etc.)
MurphyJitsu is a great tool to use here. There are a bunch of good explanations on LW, but the basic tool is to imagine you failed, ask yourself why, then patch your approach until it's very surprising that you failed (see the sketch after this list).
Learning to forgive yourself is HUGE here. Research says that people who forgive themselves for procrastinating are less likely to procrastinate in the future, and I’m pretty sure this generalizes. Expect adjustments and forgive yourself for needing to make them.
If you’re continually finding yourself with systems that don’t stick, IME it’s likely that you’re fundamentally motivating yourself in a coercive way. You may want to read this post and sequence to begin to reorient your motivation system to a more sustainable strategy: https://www.lesswrong.com/posts/ga8g4RbKc6DmqEBwD/why-productivity-systems-don-t-stick
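Since MurphyJitsu is basically a loop, here's a minimal sketch of it in Python; the function decomposition and names (imagine_failure, patch, is_surprising) are my own illustrative stand-ins, not a canonical implementation.

```python
def murphyjitsu(plan, imagine_failure, patch, is_surprising):
    """Iteratively strengthen a plan via premortems.

    imagine_failure(plan) -> a concrete way the plan failed, or None if nothing comes to mind
    patch(plan, failure)  -> a revised plan that blocks that failure
    is_surprising(plan)   -> True once failure would genuinely surprise you
    """
    while not is_surprising(plan):
        failure = imagine_failure(plan)  # "Imagine it's next month and this failed. Why?"
        if failure is None:
            break                        # no failure mode comes to mind: good enough
        plan = patch(plan, failure)      # patch the approach against that specific failure
    return plan
```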
Did you think the comment above missed something about that dynamic? I meant it to apply to interactions as well.
When we say “new technology” in Wardley Mapping we’re referring to a fundamentally new idea upon which new things can be built.
Only if AGI springs forth as soon as that new idea is created would it be in the custom built stage. It's equally possible that AGI could arise from iterating on the new idea, or from making it repeatable/practical/cost-effective.
An analogy: imagine we were talking about FHT (Faster than Horse Technology). The exact moment we crossed the barrier of being faster than a horse might have been when a new technology was created, but it's equally possible that it would have come between one model of car and another, with no fundamentally new technology, just iteration on the existing technology making the speed go up through experimentation, better understanding, or being able to manufacture at higher scale.
It seems like not-ok mode is the mode of 'get others to see my panic and create a plan', whereas ok mode is the mode of 'create my own plan'. Interestingly, this seems almost the opposite of your model.
One could imagine a 3-by-2 grid:
Sees reality clearly
Sees a problem in reality
Is in ok mode
It seems like the best place to be is yes in all 3, but it’s probably better to be yes yes no than no no yes.
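Purely as an illustration of that grid, here's a tiny sketch enumerating the eight combinations; the weights are my own assumption, chosen only to reproduce the ordering above (all three beats yes/yes/no, which beats no/no/yes).

```python
from itertools import product

dimensions = ["sees reality clearly", "sees a problem in reality", "is in ok mode"]

# Illustrative weights (my assumption): clear seeing matters more than being in ok mode,
# which is what makes yes/yes/no rank above no/no/yes.
weights = [3, 2, 1]

combos = sorted(
    product([True, False], repeat=3),
    key=lambda combo: sum(w for w, on in zip(weights, combo) if on),
    reverse=True,
)

for combo in combos:
    print(", ".join(f"{dim}: {'yes' if on else 'no'}" for dim, on in zip(dimensions, combo)))
```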
It seems quite important to separate OK mode from the state of the world (see also Nate's article on detaching the grimometer).
There is a problem, and that’s ok.
I’m freaking out about it, and that’s ok.
I have a response-ability to fix it, and that’s ok.
Everything is ok, because what else could it be?
Reality is the way it is, and that’s ok.
Ok, what do I do now?
I guess the worry would be that this increases variance of outcomes away from popular opinion.
Related is the concept of “Sortition”, in which a candidate is just picked at random, without any voting at all.
There are some good arguments for it.
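Mechanically, sortition is about as simple as selection methods get; here's a minimal sketch (the candidate pool is made up for illustration):

```python
import random

# Hypothetical pool of eligible candidates (names are made up for illustration).
eligible = ["Alice", "Bob", "Carol", "Dan", "Eve"]

# Sortition: no ballots, no campaign; the officeholder is simply drawn by lot.
officeholder = random.choice(eligible)
print(f"Selected by lot: {officeholder}")
```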
To what extent is AGI bottlenecked on data, compute, insight, or other factors?
Is there some tradeoff such that e.g. 10xing compute or 20xing data is worth 1 massive insight (scaling hypothesis)?
Will these tradeoffs reach diminishing returns before or after we get to AGI?
This will not only affect timelines, but also affect in what Wardley stage we'd expect to see it happen, whether we expect it to come from a small or large firm, etc.
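To make the compute/data/insight tradeoff question above concrete, here's a deliberately toy sketch; the log-additive form and every coefficient are invented purely for illustration, not a claim about real scaling behavior.

```python
import math

def toy_capability(compute, data, insights, a=1.0, b=1.0, c=2.0):
    """A made-up capability score: log returns on compute and data, linear credit per insight."""
    return a * math.log10(compute) + b * math.log10(data) + c * insights

baseline = toy_capability(compute=1e24, data=1e12, insights=0)

# Under these invented coefficients, 10x compute adds about a * 1 = 1.0 to the score,
# while one "massive insight" adds c = 2.0, so the insight wins in this toy world.
print(toy_capability(1e25, 1e12, 0) - baseline)  # ~1.0
print(toy_capability(1e24, 1e12, 1) - baseline)  # 2.0
```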
Conditional on reaching a constraint and the current brand of machine learning becoming commoditized before we reach AGI, how safety-conscious will the major commodity owners be?
As mentioned in another comment, they have a huge influence on the market.
At which stage of Wardley Evolution will we reach AGI?
Right now we are in the “Custom Built” stage. During this stage, building something competitive takes an incredible investment of money and talent, so the playing field is small and it’s easier to coordinate.
As we move into the “Product” stage, things get the most dangerous. There are no longer huge R&D costs, so many people can enter the game. This is also when the landscape is the most competitive and players are willing to do the most to get ahead, so they’re more likely to “move fast and break things”.
Then, as we move into the “Commodity” stage, things get a bit safer again. The market usually thins out as winners emerge, and since everyone basically has the same features, we wouldn’t expect drastic shifts that create AGI. At this stage, a further question becomes:
Are the companies that win the commodity game safety conscious? Because they have a huge leg up in both influencing and monitoring the further developments of AI.
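As a compact restatement of the stage dynamics described above, here's a small sketch; the stage names follow the comment, but the field names and characterizations are just my paraphrase, not formal Wardley mapping.

```python
# Rough summary of the stage dynamics described above (my paraphrase, not canonical Wardley mapping).
stages = {
    "Custom Built": {
        "r_and_d_cost": "very high",
        "players": "few",
        "coordination": "relatively easy",
        "risk": "lower",
    },
    "Product": {
        "r_and_d_cost": "falling",
        "players": "many new entrants",
        "coordination": "hard, highly competitive",
        "risk": "highest ('move fast and break things')",
    },
    "Commodity": {
        "r_and_d_cost": "low",
        "players": "a few winners",
        "coordination": "easier again",
        "risk": "lower, barring a further breakthrough",
    },
}

for stage, props in stages.items():
    print(stage, props)
```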
Can we get to AGI without running into constraints?
Do we need radically new concepts/tools to get to AGI?
There will be radically different market dynamics if the thing that eventually becomes AGI is built on commoditized components (which will happen if we need another breakthrough to get AGI). The players who build these commodities (possibly safety-conscious companies like OpenAI and DeepMind) will have lots of influence on the market.
It does seem like there’s a Western strain of wuwei in the form of the Western Pragmatists, but they tend to be left out of the discussion.
It seems like you still need some criterion through which to criticize your beliefs. Popper offers the criteria that “your past observations don’t falsify your theory” and “your theory minimizes adhocness”, but by which criterion can you accept those criteria as true or useful?
There’s one sense in which self-coercion is impossible because you cannot make yourself do something that at least some part of yourself doesn’t endorse. There’s another sense in which self-coercion is an inescapable inevitability because some particular part of you will always dis-endorse any given action.
Yeah, I think aiming for 100% endorsement in every situation is impossible. But endorsement is different from “acceptance”. I think it’s definitely possible to get either endorsement or acceptance from every part for many actions and goals.
I think feeling this for EVERY action in every area of your life is fairly hard (is this what enlightenment is?) but certainly for a given task or goal it’s achievable.
In my framing, the effective approach isn’t to find a non-coercive plan, but rather a minimally-coercive plan that still achieves the goal.
That seems like a decent framing! One of the things I mentioned in my last post is that coercion can BUILD over time, so if that’s your goal you’ll want to check that you’re not consistently ignoring the same part or value.
Plus, the only way you can really learn where plans sit on the coerciveness landscape is to attempt to execute them.
There are definitely introspective strategies here that can help without doing the action. I especially recommend these for bigger goals or visions, because you may not run into the resistance for a bit. For smaller actions I agree it often makes sense to just start and deal with resistance if and when it occurs.
Hmm, it wasn’t clear to me from that section what you were suggesting. I tend to skim these posts and find areas that are relevant because they’re very long, so if you talk about it in a different part I suppose I have egg on my face here.