Slack matters more than any outcome

About a month ago Ben Pace invited me to expand on a point I’d made in a comment.

The gist is this:

  • Addictions can cause people to accomplish things they wouldn’t accomplish otherwise.

  • But if the accomplishment were worthwhile, why would the addiction be needed? Why wouldn’t the clarity that it’s worthwhile be enough?

  • I postulate that the reason is a kind of metaphorical heaviness in culture. A particular structural tendency to eat slack.

  • So I think it’d be net better to let some worthwhile things go unaccomplished in favor of lifting the metaphoric burden. Creating more slack.

  • And I’d even say that this is the main plausible pathway I see for creating a great future for humanity. I don’t think we can get there by focusing on making worthwhile things happen.

I felt inspired to write up an answer. Then I spent a month working on it. I clarified my thinking a lot but slid into a slog of writing. Kind of a perfectionist thing.

So I’m scrapping all that. I’m going to write a worse version, since the options are (a) a quickly hacked-together version or (b) nothing at all.

Addictions

My main point isn’t really about addictions, but I need to clarify something about this topic anyway. They’re also a great example cluster.

When I say “addiction”, I’m not gesturing at a vague intuition. I mean a very particular structure:

  • There’s some unwelcome experience that repeatedly arises.

  • There’s a behavior pattern that can temporarily distract the person in question from the unwelcome experience.

  • But the behavior pattern doesn’t address the cause of the unwelcome experience arising in the first place.

So when someone engages in the distraction, it provides temporary relief, but the unwelcome experience arises again — and now the distraction is a little more tempting. A little more habit-forming. When that becomes automatic, it can feel like you’re trapped inside it, like you’re powerless against some behavior momentum.

Which is to say, this structure eats slack.
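
To make the slack-eating concrete, here’s a toy sketch of the loop in code. Everything in it is invented for illustration (the names, the numbers, the update rules), so treat it as a cartoon of the structure, not a model of real psychology:

```python
# A toy model of the addiction structure described above.
# The unwelcome experience keeps arising because its cause is never addressed.
# Each use of the distraction makes the distraction a little more automatic,
# and eats a little of the slack that addressing the cause would require.

cause, habit, slack = 1.0, 0.3, 0.8   # slack starts below what the cause needs

for t in range(15):
    if slack >= cause:
        cause = 0.0                      # enough slack: address the cause itself
    else:
        habit = min(1.0, habit + 0.05)   # the distraction fires, a bit more automatic
        slack = max(0.0, slack - 0.05)   # ...and a little more slack gets eaten
    print(f"t={t:2d}  cause={cause:.1f}  habit={habit:.2f}  slack={slack:.2f}")
```

The point of the toy isn’t the numbers. It’s that nothing inside the loop ever touches the cause, so the loop only ever ratchets one way.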

Some rapid-fire examples:

  • Caffeine dependency becomes an addiction when you react on autopilot to the withdrawal symptoms by reaching for another cup of coffee.

  • Alcoholism as an addiction is often (usually? always?) about avoiding emotional experiences. Since the causes of the emotions don’t go away, sobriety can result in the unwelcome experience arising, which the alcoholic knows how to numb away.

  • I have a long-standing habit of feeling kind of listless, lonely, like I should be doing something more or different with my life but I’m not quite sure what it is or that I can do it. If I don’t pay attention when that sensation/emotion/thought cluster arises, I find myself on my computer scrolling social media or watching YouTube or Netflix. Putting up blockers to these sites both (a) makes me good at disabling the blockers and (b) makes things like porn or Minesweeper more tempting.

I’m not saying that all addictions are like this. I can’t think of any exceptions off the top of my head, but that might just be a matter of my lack of creativity or a poor filing system in my mind.

I’m saying that there’s this very particular structure, that it’s quite common, and that I’m going to use the word “addiction” to refer to it.

And yeah, I do think it’s the right word, which is why I’m picking it. Please notice the framing effect, and adjust yourself as needed.

Imposing an idea

The main thing I want to talk about is a generalization of rationalization, in the sense of writing the bottom line.

Caffeine dependency

When I grab a cup of coffee for a pick-me-up, I’m basically asserting that I should have more energy than I do right now.

This is kind of odd if you think about it. If I found out my house were on fire, I wouldn’t feel too tired to deal with it. So my body can mobilize the energy even from a mental perception.

I mean, the caffeine wouldn’t work if the energy weren’t in some sense available in my body already.

So if I really do need to do that bit of writing, or give a killer presentation, or show up alert to that date… why isn’t that fact enough for me to have the right amount of energy?

But instead of asking that question, I grab some coffee.

This induces a kind of split. My idea of how energized I should be is in defiance of… something. Obviously some kind of process in my body disagrees with my mental idea of how I should be.

In the case of caffeine, that process shows up as adaptation. My brain grows more adenosine receptors to get around the action of the caffeine — precisely because the caffeine is messing with my body’s idea of how much energy should be present.

This argument between the mind and the body is what eventually creates caffeine addiction. The conscious mental process that results in reaching for more coffee basically doesn’t dialogue at all with the body-level systems it’s overriding. So they end up in a kind of internal arms race.

I think it’s pretty common for people to land on something like an equilibrium. Something like “Don’t talk to me before I’ve had my first cup of coffee” plus a kind of ritual around when and how to have the second and maybe third cup. This equilibrium is a kind of compromise: the person can have normal functional levels of alertness at predetermined times, but at the cost of needing coffee to be functional — and sometimes the coffee won’t work or won’t be enough anyway.
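
Here’s the same kind of toy sketch for the caffeine case specifically, again with invented numbers and an invented adaptation rule, purely to show the ratchet:

```python
# A toy caffeine arms race (a sketch, not pharmacology).
# The mind doses coffee to force alertness up to its idea of "enough";
# the body adapts (more adenosine receptors), dragging unaided alertness
# back down, so the dose needed to close the gap creeps upward.

target = 1.0       # the mind's idea of how alert "I should be"
baseline = 0.7     # the alertness the body is offering on its own
adaptation = 1.0   # grows with chronic dosing (receptor upregulation)

for day in range(10):
    unaided = baseline / adaptation      # adaptation suppresses the baseline
    dose = max(0.0, target - unaided)    # coffee closes the gap by force
    adaptation += 0.1 * dose             # the body pushes back against the dose
    print(f"day {day}: unaided={unaided:.2f}  dose needed={dose:.2f}")
```

Unaided alertness drifts down and the needed dose drifts up, settling into exactly that equilibrium: functional, but only with coffee.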

Getting out of this equilibrium is usually pretty sticky. It’s a kind of heavy. The inner war has eaten some slack. Ending the war requires slogging through caffeine withdrawal, which means not only being extra tired and dysfunctional and possibly dealing with headaches, but also fighting the tempting habit of ending the discomfort with a bit of caffeine.

And lots of people can’t do it. They just don’t have the inner resources at a given time to face all of that.

…which is another way of saying that they don’t have enough slack.

And the caffeine addiction is one of the things eating up that slack!

Oops.

Not listening

The caffeine thing is an example of a general pattern. It’s about embedding internal wars in how a system works.

In practice, this embedding happens because the mind disagrees with the world, and doubles down on its disagreement instead of listening.

To the extent that the thing the mind is forcing can adapt, you end up with an arms race.

Here are a few more examples, again in rapid-fire:

  • Pursuer/avoider dynamics in relationships. Here the system is the pair. Each person has an idea of how the other should or needs to behave — but rather than asking why they’re not already behaving that way, there’s an attempt to force or pressure. Then that pressure gets embedded in the relationship itself.

  • Using TAPs (trigger-action plans) to modify behavior. If you’ve done the goal factoring all the way, and you clearly see what actions need to happen, then something like a TAP will self-install. If it doesn’t, there’s a reason. Trying to make a TAP happen anyway tends to act like a hack, and the reason why the TAP didn’t self-install will push back if it can. I used to see this at CFAR a lot: people would never get around to installing a TAP, or they’d try and it wouldn’t work, or it’d work for a while but then fade, or they’d double down extra hard on their idea (!) of how they should behave and then run into injury or burnout problems later on. Goal factoring is the “listen” step, and if done right it makes the “TAP installation” part totally unnecessary because it bypasses the need for an inner arms race.

  • Doing chores I don’t wanna do. If I need to clean my bathroom but I keep avoiding it, I could just make myself do it — which is to say, I can impose the mental idea of my actions on my behavior. But then how does the part that was resisting respond? The details depend on the part — what it wants and what it can access. But for example, I might find myself being extra irritable, which both makes my experience more miserable and can create problems when interacting with others. There’s always a price to pay when forcing instead of listening.

  • Culture wars. I want to tread carefully here. I’ll try to point at a timeless pattern. One facet of culture wars is a demand that each side kind of… stop existing. Often with an insistence that it’s the other side that’s making this existential demand. There’s an attempt to apply force to make the other side comply with an idea, instead of sincerely listening to why they aren’t spontaneously complying on their own.

There are basically four problems with forcing instead of listening:

  1. The need to fight becomes embedded in the dynamic. This is what eats slack.

  2. The fighting itself is often expensive and has externalities — roughly the same way that any war tends to be devastating to the land it’s fought on.

  3. If the side being imposed upon has a point, that point goes unheard. Which is to say, if you’re doing the forcing, then you’re violating something you would care about if you were to become aware of it, but you’re doubling down on your unawareness.

  4. Even if the imposed-upon side is just missing information, it doesn’t know that. So it’s gonna fight back as hard as it can unless and until it learns what you know — or until you obliterate it, which is usually extremely difficult (and is a terrible policy due to point 3).

Adaptive entropy

I find it helpful to reify the tendency-to-eat-slack-and-keep-adding-force-to-deal-with-the-lack-of-slack as a kind of stuff. It’s like a sticky tar that gets in the way of a system being able to use all its otherwise available intelligence[1].

All the analogies are a little terrible, but the one I find least terrible is entropy. Entropy as (roughly) the energy that isn’t available to do work, and that grows when you try to force work out of the system anyway.
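
(For anyone who wants the physics that gloss leans on — this is just the standard thermodynamic bookkeeping, nothing specific to my argument:)

```latex
% Helmholtz free energy: the energy available to do work at temperature T.
%   U = total internal energy, T = temperature, S = entropy.
F = U - TS
% At constant temperature, the work extractable from a system is bounded
% by the drop in its free energy:
W \le -\Delta F
% So at fixed U and T: more entropy, less free energy,
% i.e. less of the energy is available to do work.
```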

I use the term adaptive entropy to talk about this stuff. It’s the problem-ness of a situation that fights against being solved. The way the pursuer/avoider dynamic actually gets stronger when the people in it try to orient to the dynamic itself and solve it. The way that bringing more intensity or insight to a battlefront of the culture wars makes the war more severe.

You can think of adaptive entropy as sticky problem-ness. Sure, maybe we apply a ton of political force to pass legislation that deals with something minor about climate change, but the cost is massively more political divisiveness and tremendously greater resistance to everything else having to do with orienting to climate change. For example.

For another example, what’s the cost you incur by forcing yourself to follow a good diet-and-exercise plan? Or at least one you think is good. Imposing this mental idea on your system means you trigger an inner arms race. As that gets embedded in how you work… well, usually the plan fails, often completely, and now there’s a bit of extra gridlock in your system as whatever you were trying to override correctly trusts you less. But this gets even worse if your thoughts are wrong about what’s actually healthy for you but you succeed in imposing your mental plan anyway.

Which is to say, the act of trying to force yourself has incurred adaptive entropy — as a loss of willpower and/or health.

Possessing minds and behavior

Adaptive entropy is anti-slack. It’s not just a lack of slack. It’s a structural devouring of slack that just keeps growing, that defies your attempts to orient to it, that eats your mind and energy in service to its growing abyssal presence.

It’s important to notice that Moloch thinks with your mind. To the extent you’re engaged in a race to the bottom, the creativity you bring to bear to race at least as well as everyone else is precisely the ingenuity Moloch is using to screw the whole system over. Moloch is something like a distributed computation here.

The same happens with foolish arguments. Around these parts I think I can pick on religion a bit. Arguments about how God exists because of “prime mover” reasoning (for instance) have to come from somewhere. Someone had to write the bottom line and then direct their intelligence at the project of justifying that bottom line. Then the argument becomes a kind of memetic shield that others who like that bottom line can use to bolster their position.

The whole problem with adaptive entropy is that some bottom line has been written, and the fixation on that bottom line has become embedded in the system.

In practice this means that thinking directed at getting rid of adaptive entropy tends to strengthen it.

Like in the pursuer/avoider dynamic. Often the pursuer is vividly aware of the problem. It bothers them. It causes them to fret about how their fretting might be making things worse. And they’re aware that that meta-fretting is anti-helpful! But all they can think (!) to do about it is try to talk about the dynamic with their partner. And since the pursuer is trying to have the conversation in order to alleviate their own stress, this enacts the very pressured dynamic that fuels the adaptive entropy in the relationship. Thus the very act of trying to orient to the problem makes it worse.

This “it gets worse if you try to deal with it” isn’t necessarily true in every case. In this way adaptive entropy is actually unlike thermodynamic entropy: it’s possible to reduce adaptive entropy within a closed system.

But the default very much is like this. Most people who are within a heavily adaptive-entropic system cannot help but increase the adaptive entropy when they orient to the problem-ness of the situation.

Like, how much dialogue going on in public about the culture wars is actually helping resolve it? What’s nearly all of that dialogue actually doing?

Entropic burden crushes everything else

Basically, if you want to solve the problem-ness of a situation, and the problem-ness has the structure of adaptive entropy (i.e., it’s because of embedded forcing of an idea of how things should be onto responsive subsystems that are resisting), then any attempt to address the problem-ness that doesn’t prioritize lifting the entropic burden is at best useless.

This is usually counterintuitive. I’m saying, for instance, that solving AI risk can’t be done by trying to solve AI risk directly. Same for climate change. Same for war in the Middle East. Same for the USA obesity epidemic, or the homelessness problem, or basically any tension point of the culture wars.

It’s not that the actions don’t matter. It’s not that you can’t move the problem-ness around. It’s that there’s something functionally intelligent fighting to preserve the problem-ness itself, and said intelligent opponent can and often does hijack people’s thinking and efforts.

Anything that does not orient to this reality is basically irrelevant in terms of actually addressing the problem-ness. Regardless of how clever the plan is, and definitely regardless of how important the object-level issues are.

This is a kind of semi-mythopoeic way of saying “The problem is anti-slack.” That anti-slack — what I’m calling “adaptive entropy” — is the crack in ability-to-coordinate through which Moloch crawls.

But you don’t have to think of it mythopoeically at all. I’m naming mechanisms here, whatever aesthetic I’m wrapping it in.

The inclination to insist that we just have to try harder[2] is literally doubling down on force. It’s fueling the adaptive entropy even more. It’s an example of the mental hijacking I’m talking about. Demanding even harder that we absolutely must achieve a certain outcome, and if we’re encountering resistance to moving in that direction then we just need more force. More intelligence, more energy, more funding, more taking in what’s at stake.

Based on what I’m looking at — and what I’m looking at is extremely mechanical and sure seems quite overdetermined to me — this general thrust is utterly, completely, and entirely doomed.

Let go of outcomes

The basic deal with adaptive entropy is that we fixate on an outcome to the exclusion of context awareness.

In practice it’s possible to lift the entropic burden by reversing that process: Become willing to land in any outcome, and prioritize listening.

This is actually baked into goal factoring for instance. For goal factoring to work at its best, you have to hold that hitting all your true goals is a more important requirement than having any particular outcome. Any outcome you can’t let go of (keeping a relationship, staying in a job, not eating liver, etc.) is a limitation on your possible solution space, which makes finding an omni-win solution harder.

When I was teaching goal factoring at CFAR, I used to emphasize a sort of emotional equanimity move:

Name the paths forward you most fear. For each path, notice, and really take in, that the only way you would choose such a path is if it in fact hits all your goals best, as far as you can tell. Breathe into that and let the clarity of it sink in. Notice how, in the case where you actually choose that path, you not only survive but thrive as best you possibly could, to the best of your knowledge.

(Today I’d add an element of, “Notice specifically what about it you fear. This is something to account for in your goal factoring.” (It’s possible I taught that at the time too. I just don’t remember talking about it.))

The point is, you can’t actually listen to all the parts if they believe you’re only listening to get them to shut up and do the plan you had in mind from the beginning. You have to erase the bottom line, listen deeply, and be willing to change your intentions completely based on what you become aware of.

Earning trust

Of course, what you’re finding is the best possible outcome given your constraints and desires. So why wouldn’t you do that?

Well, because we often have layers upon layers of adaptive entropy in our system. Subagents don’t trust the parts of us we normally identify with — and they often correctly don’t trust us. We might try to listen, but our skill with listening and taking in needs work. We still have deeply embedded habits of forcing, many of which we’re not yet in a position to consciously stop.

(Chronic physical tension, for instance, is usually an example of adaptive entropy. Most people can’t relax their trapezius muscles to a point of ease. Peeling off the layers of adaptive entropy involved there can take a while, and often people can’t do it directly despite the traps being “voluntary muscles”. Turns out, some subsystem is using the tension. So we’re not ready to stop adding that bit of force.)

The best way I know how to navigate this is to become trustworthy and transparent to these parts.

Trustworthiness requires that I cultivate a sincere desire to care for what each of these subagents cares about. Even if I initially think it’s silly or stupid or irrelevant. Those judgments are an example of outcome fixation. I have to let that go and be willing to change who I am and how I prioritize things (without excluding the things I was already caring for — that’d just be switching sides in the inner war). I have to sincerely prefer inner harmony over any outcome, which means already caring about the things my subagents care about. So I want to learn about them so as to better care for them.

In particular, I’m not trying to get these parts to trust me. I’m trying to become worthy of their trust. If I try to get them to trust me, then that effort on my part can actually increase adaptive entropy as the part catches on and gets suspicious.

(…so to speak. I’m anthropomorphizing these parts a lot for the sake of pumping intuition. The actual mechanism is immensely mechanical and doesn’t depend on thinking of these things as agents.)

If I do my best to become worthy of trust, then all I have to do is transparently demonstrate — in the subagent’s terms — whether or not I am trustworthy. And again, without fixation on outcome. I in fact do not want that part to trust me if it would be incorrect for it to do so! I’m relying on that subagent to care for something I don’t yet know how to care for myself.

There’s a whole art here. Part of what bogged down earlier drafts of this post was trying to find ways of summarizing this whole art.

But just as one example:

Notice your breathing. When you do so, do you modify your breathing? Do you make yourself breathe more deeply, or take bigger breaths, or squeeze your belly, or anything like that?

Can you instead just watch your breath without modifying it whatsoever?

The chances are very good that the answer is “no”. Most people can’t. Even meditators who have been working on this for a long time can find it tricky.

But you might be able to peel off one layer of habitual effort here. One layer of adaptive entropy.

Find some element of trying or forcing or squeezing you do have conscious control over. Maybe not perfectly, but enough that you can kind of… let go a little.

The goal here isn’t to let go and keep letting go forever. It’s instead to notice what the trying is for in the first place.

Normally the trying will kind of try to re-assert itself. Maybe you stop making yourself take deeper breaths, but then your belly tenses a little. And in relaxing that, after a few moments you find yourself feeling out of breath and needing to take in a deep breath to get enough air.

Just watch that process. You didn’t need to do that before you noticed your breath (I assume). So what’s different now? What part is “speaking” here? What’s being cared for?

Listen to the thoughts, but don’t believe them too much. The purpose of the thoughts is to maintain the entropic equilibrium. But they might contain hints about what’s really going on.

Mostly just focus on the body sensations.

If you find the seed that relies on the tension, you can orient to that and really listen. How might you care for what that part cares about? Can you feel the deep truth that yes, in fact, you would want to prioritize caring for it if you could and knew how? Can you recognize your gratitude for what this tension-user is doing even if you don’t yet know why?

If you stay with this long enough — which might be minutes, or it might take days or weeks, depending on the part and your internal skill — you’ll feel a layer of the inner arms race end. The tension will leave — not just relax, but it’ll let go in a way that is final.

And normally there’s a sense of freedom, space, and energy that becomes more present as a result. Like putting down a heavy pack after forgetting you were wearing it.

But that step isn’t up to you. It just happens, after you earn the trust of all parts involved. It’s a result but not the goal. The goal is deeply listening to and honoring every part of yourself.

An aside on technical debt

The fact that adaptive entropy is reversible makes “entropy” a kind of terrible analogy.

Like I said earlier, all the analogies are a little terrible.

One could go with something akin to technical debt. That has a lot of merit. You can pay off technical debt. It clogs up systems. Having technical debt in a system makes incurring more technical debt more likely.

I noticed when trying to use this analogy in an early draft that it clogs up my thinking. Technical debt presupposes a goal, and adaptive entropy comes about via goal fixation. That loop turns out to make the whole idea pretty messy to think about.

Also, many times technical debt is literally an example of adaptive entropy. It’s not just an analogy. You can see this more clearly if you zoom out from the debt to the system the debt is embedded in: Becoming determined to pay off technical debt incurs other costs due to the outcome fixation, so even if you get your code cleaned up the problem-ness moves around and is quite often on net worse.

The way you’d pay off technical debt without incurring more adaptive entropy is by attending to why the debt is there in the first place and hasn’t spontaneously been paid off. If you really truly listen to that, then either you can address it (in which case the debt will get paid off without having to add effort to the system) or you’ll recognize why it’s actually fine for the debt to be there. It stops looking like “debt” at all. You come to full acceptance.

But in practice most coding contexts have too much adaptive entropy to do this. Too much anti-slack. So instead people grit their teeth and deal with it and sometimes plot to marshal enough force to banish that evil goo from the codebase.

Achieving through force

In the original exchange that inspired this post, Ben Pace mentions:

I think a point missing in the OP and the comments is that sometimes the addiction is useful. I find it hard to concisely make this point, but I think many people are addicted to things that they’re good at, be it competitions or artistic creations or mathematics.

I want to honor that some of Ben’s opinion here might have been due to the word “addiction”. I basically always mean the specific structure I named earlier: being driven away from an unwelcome experience rather than drawn to the behavior itself. Addicted from, not addicted to. Ben might have meant something more intuitive, roughly along the lines of obsession.

With that said, and continuing to run with my meaning of “addiction”, I want to quickly mention two things:

  • I think Ben’s point is correct, and that this application of addiction can be worthwhile.

  • I also think using addiction this way goes in the opposite direction of addressing real problems, both for individuals and for the world.

The only reason addictions look helpful is outcome fixation. We consider it worthwhile if someone can become an Olympic athlete (or whatever) and does so, pushing through resistance and difficulty to achieve something great.

But like with the caffeine example, why wasn’t the fact that it’s great enough to create the motivation? Why did there need to be a “run away” habit embedded too?

The reason is usually that we sort of inherit adaptive entropy from the culture. The culture has a ton of outcome fixation that gets imposed on people. To the extent that you haven’t learned how to unweave adaptive entropy in yourself and haven’t learned how to refuse it when it’s offered to you, culture’s demands that you be a certain way or achieve certain things in order to be worthwhile can eat at your psyche.

More concretely, it can feel like the demands of the world have to be met somehow. Like pragmatics burden us and constrain us. But in truth many of them are kind of illusory, made of adaptive entropy rather than real physics-induced constraints on the world.

The kind of dumb we saw in the global response to COVID-19 is what a world mind addled with adaptive entropy looks like. The fact that so many people thought the right move was to apply force to their friends & family via moral passion shows a kind of ignorance of how adaptive entropy moves and behaves.

The reason any of that persists is that it’s possible to meet some of the world’s demands via suppressing parts of yourself with adaptive-entropic structures.

Which is to say, if you can find a way to force yourself to achieve big things, you can sometimes achieve big things.

But on net, what tends to happen — at the individual scale and definitely at a cultural one courtesy of the Law of Large Numbers — is that the idea of achieving big things drives people forward into an entropic equilibrium, where they either stay stuck or get burned out.

Like, depression and anxiety are totally direct results of adaptive entropy. With some caveats I’m sure, but not as many as one might think. Every case of depression I can think of comes back to habitually embedded forcing of some conclusion. Not just that it’s involved, but that it’s critically involved, and the depression itself (true to how adaptive entropy works) can often interfere with the effort to orient to letting go of said conclusion.

But yes, there are outliers. Like successful Olympic athletes. And Elon Musk, which was a favorite example around CFAR toward the end of my tenure. And culture is happy to use outliers as exemplars, to tell you to strive harder.

Is this bad?

I don’t mean to make this sound moralistic. I’m aware that how I’ve written this so far might come across that way.

I really honestly mean this as a description of something like an engineering principle. I’m just pointing something out.

If force is how we achieve something, then it’s because we’re forcing against something.

If that something can adapt to our forcing, then it will, and we have an arms race.

It’s extremely mechanical. It’s borderline tautological. I expect something similar to happen in nearly any universe where evolution occurs and something analogous to intent arises.

If people want to ignore this, or try to use this in a Molochian trade to achieve predetermined goals, that’s totally fine. It just has an immensely predictable outcome: They absolutely will incur more adaptive entropy, and as a result any system they’re part of will become more entropically burdened, guaranteed.

It honestly feels like I’m saying something about as profound as “If a massive object goes unsupported in a gravitational field, it will accelerate along the lines of force of the field.” It really seems this inevitable and obvious to me.

So none of this is about judgment. It’s just fact.

And included in this fact is that, best as I can tell, anyone who really groks what I’m talking about will want to prioritize peeling off adaptive entropy over any specific outcome. That using addiction or any other entropy-inducing structure to achieve a goal is the opposite of what they truly want.

(Because adaptive entropy is experienced as stuck problem-ness. Who doesn’t want less problem-ness? Who doesn’t want more of their goals achieved? The only reason anyone wouldn’t accept that is because they don’t trust that it’s real.)

But to the extent that what I’m saying isn’t obvious to you, I don’t want you to believe it! I’d rather you continue to force outcomes than that you override your knowing to blindly believe me here. Because frankly those two are the same option, entropically speaking, just applied differently, and the latter would induce adaptive entropy around your very understanding of what adaptive entropy is.

So, I don’t know what’s best for any given person, including for you, my reader.

But I can say with a great deal of conviction that creating ease in the world as a whole is mostly a matter of orienting to adaptive entropy. That things that lift the entropic burden will help, and things that increase the burden will hurt, basically regardless of what material effects they have.

I mean, of course, if someone gets a brilliant flash of insight and builds FAI (or uFAI), then that overrides basically everything.

But in a way that’s analogous to the Law of Large Numbers, I expect which way AGI goes is actually downstream of our relationship to slack.

So, yeah.

To quote myself:

So on net, globally, I think it’s actually worthwhile to let some potential Olympic athletes fail to realize their potential if it means we collectively have more psychic breathing room.

And AFAICT, getting more shared breathing room is the main hope we have for addressing the real thing.

And thankfully, the game theory works out very nicely here:

It’s in fact in every individual’s benefit to lift their adaptive entropic burden.

And I mean in their benefit in their terms.

If the true thing I’m badly and roughly pointing at as “adaptive entropy” clicks for you, you’ll want to prioritize unweaving it. Even if your methods for doing so look way different from mine.

(It doesn’t require things that look like meditation. That’s just how I’ve approached it so far.)

And individuals unweaving their own encounters and embodiment of entropy is exactly what “pays off” the “debt” at the social and civilizational level.

At least, that’s how it looks to me.

But it doesn’t make a lick of sense to force any of that.

Hopefully by now it’s obvious why.

[1] By “intelligence” I mean something precise here. Basically adaptive capacity: What’s the system’s ability to modify itself and its interface with its environment such that the system continues to function? But giving all the details of this vision is part of what made all previous drafts of this post get bogged down in perfectionism, so I’ll leave this term a little handwavy.

[2] I want to honor something here. My understanding of Eliezer’s actual point in “shut up and do the impossible” totally accounts for what I’m saying about adaptive entropy. However, the energy I read behind his message, and the way his message usually seems to get interpreted, seems to have a “Push harder!” flavor to it. That inclination absolutely is entropy-inducing.