(I don’t follow it all, for instance I don’t recall why it’s important that the former view assumes that utility is computable.)
Partly because the “reductive utility” view is made a bit more extreme than it absolutely had to be. Partly because I think it’s extremely natural, in the “LessWrong circa 2014 view”, to say sentences like “I don’t even know what it would mean for humans to have uncomputable utility functions—unless you think the brain is uncomputable”. (I think there is, or at least was, a big overlap between the LW crowd and the set of people who like to assume things are computable.) Partly because the post was directly inspired by another alignment researcher saying words similar to those, around 2019.
Without this assumption, the core of the “reductive utility” view would be that it treats utility functions as actual functions from actual world-states to real numbers. These functions wouldn’t have to be computable, but since they’re a basic part of the ontology of agency, it’s natural to suppose they are—in exactly the same way it’s natural to suppose that an agent’s beliefs should be computable, and in a similar way to how it seems natural to suppose that physical laws should be computable.
Ah, I guess you could say that I shoved the computability assumption into the reductive view because I secretly wanted to make 3 different points:
1. We can define beliefs directly on events, rather than needing “worlds”, and this view seems more general and flexible (and closer to actual reasoning).
2. We can define utility directly on events, rather than “worlds”, too, and there seem to be similar advantages here.
3. In particular, uncomputable utility functions seem pretty strange if you think utility is a function on worlds; but if you think it’s defined as a coherent expectation on events, then it’s more natural to suppose that the underlying function on worlds (that would justify the event expectations) isn’t computable.
Rather than make these three points separately, I set up a false dichotomy for illustration.
Also worth highlighting that, like my post Radical Probabilism, this post is mostly communicating insights that it seems Richard Jeffrey had several decades ago.
But beware reality-masking skillsets.
It’s a good point! I failed to notice my confusion there.
I think we could get a GPT-like model to do this if we inserted other random sequences, in the same way, in the training data; it should learn a pattern like “non-word-like sequences that repeat at least twice tend to repeat a few more times” or something like that.
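A minimal sketch of what I mean by inserting such sequences into the training data (the function names and parameters here are my own illustrative choices, not anything from an actual GPT pipeline):

```python
import random
import string

def make_repeated_gibberish(min_len=5, max_len=12, min_reps=2, max_reps=5):
    """Generate a non-word-like token repeated a few times."""
    token = "".join(random.choices(string.ascii_lowercase,
                                   k=random.randint(min_len, max_len)))
    reps = random.randint(min_reps, max_reps)
    return " ".join([token] * reps)

def augment_corpus(documents, insertion_prob=0.05):
    """Splice repeated-gibberish spans into a fraction of training documents."""
    augmented = []
    for doc in documents:
        if random.random() < insertion_prob:
            doc = doc + " " + make_repeated_gibberish()
        augmented.append(doc)
    return augmented

print(augment_corpus(["the cat sat on the mat"], insertion_prob=1.0))
```

A model trained on a corpus augmented this way would have direct pressure to learn the “repeated non-word sequences tend to keep repeating” pattern.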
GPT-3 itself may or may not get the idea, since it does have some significant breadth of getting-the-idea-of-local-patterns-it’s-never-seen-before.
So I don’t currently see what your experiment has to do with the planning-ahead question.
I would say that the GPT training process has no “inherent” pressure toward Bellman-like behavior, but the data provides such pressure, because humans are doing something more Bellman-like when producing strings. A more obvious example would be if you trained a GPT-like system to predict the chess moves of a tree-search planning agent.
Yep. All sounds right.
I have an alternative hypothesis tho: we could say “big brains are for big problems”. As you stated, a blind person still has a similar computational problem to solve, namely navigating a complex 3D environment. In some cases, tons of sense-data will be very easily processed, due to the simplicity of what’s being looked for in that sense data. (Are there cases of animals with very large retinas, but comparatively small brains?)
The sad part of this hypothesis is that it’s difficult to test, as it doesn’t make specific predictions. You’d need to somehow know the computational complexity of surviving in a given environment. (Or, more precisely, the computational complexity where a bigger brain is too much of a cost...)
This is a potentially important question for AI timelines. How much processing power do we expect is needed to replicate human intelligence? “Approximately a brain’s worth” is the default answer, but if this post is correct, it should be a lot less (particularly for text-based AI like the GPTs).
Something seems a bit off about this. Are blind people more intelligent?
From a quick google search, it looks like blind people have many boosted properties (increased working memory, increased ability to differentiate frequencies), but somehow this does not translate to increased IQ? More research needed.
But, I guess the effect size should be huge if it’s really a simple function of total data inputs, right?
OK, I guess a confounding factor is that the brain might not be plastic enough for “blind humans” to be equivalent to “hominids who evolved a similar encephalization quotient but without the sense of sight”, which is what your theory specifically predicts would be quite intelligent.
Another thing that seems a bit off about this is that, in ML, we’ve seen that info from other sensory modalities can be very useful. So taking away one sense-organ and allocating the resources to another shouldn’t necessarily boost “overall intelligence”. But maybe this is like pointing out that a blind person won’t be able to describe a painting, so in that respect their verbal performance will be worse.
Anyway, obviously, if not sense-input, then what??
I think maybe our disagreement is about how good/useful of an overarching model ACT-R is? It’s definitely not like in physics, where some overarching theories are widely accepted (e.g. the standard model) even by people working on much more narrow topics—and many of the ones that aren’t (e.g. string theory) are still widely known about and commonly taught. The situation in cog sci (in my view, and I think in many people’s views?) is much more that we don’t have an overarching model of the mind in anywhere close to the level of detail/mechanistic specificity that ACT-R posits, and that any such attempt would be premature/foolish/not useful right now.
Makes some sense to me! This is part of why my post’s conclusion said stuff like this doesn’t mean you should believe in ACT-R. But yeah, I also think we have a disagreement somewhere around here.
I was trained in the cognitive architecture tradition, which tends to find this situation unfortunate. I have heard strong opinions, which I respect and generally believe, of the “we just don’t know enough” variety which you also espouse. However, I also buy Allen Newell’s famous argument in “you can’t play 20 questions with nature and win”, where he argues that we may never get there without focusing on that goal. From this perspective, it makes (some) sense to try to track a big picture anyway.
In some sense the grand goal of cognitive architecture is that it should eventually be seen as standard (almost required) for individual works of experimental psychology to contribute to a big picture in some way. Imagine for a moment if every paper had a section relating to ACT-R (or some other overarching model), either pointing out how it fits in (agreeing with and extending the overarching model) or pointing out how it doesn’t (revising the overarching model).
With the current state of things, it’s very unclear (as you highlighted in your original comment) what the status of overarching models like ACT-R even is. Is it an artifact from the 90s which is long-irrelevant? Is it the state of the art big-picture? Nobody knows and few care? Wouldn’t it be better if it were otherwise?
On the other hand, working with cognitive architectures like ACT-R can be frustrating and time-consuming. In theory, they could be a time-saving tool (you start with all the power of ACT-R and can move forward from that!). In practice, my personal observation at least is that they add time and reduce other kinds of progress you can make. To caricature: a cog arch PhD student spends their first two years learning the cognitive architecture they’ll work with, while a non-cog-arch cogsci student can hit the ground running instead. (This isn’t totally true of course; I’ve heard people say that most PhD students are not really productive for their first year or two of grad school.) So I do not want to gloss over the downsides to a cog arch focus.
One big problem is what I’ll call the “task integration problem”. Let’s say you have 100 research psychologists who each spend a chunk of time doing “X in ACT-R” for many different values of X. Now you have lots of ACT-R models of lots of different cognitive phenomena. Can you mash them all together into one big model which does all 100 things?
I’m not totally sure about ACT-R, but I’ve heard that for most cognitive architectures, the answer is “no”. Despite existing in one cognitive architecture, the individual “X” models are sorta like standalone programs which don’t know how to talk to each other.
This undermines the premise of cog arch as helping us fit everything into one coherent picture. So, this is a hurdle which cog arch would have to get past in order to play the kind of role it wants to play.
I think my post (at least the title!) is essentially wrong if there are other overarching theories of cognition out there which have similar track records of matching data. Are there?
By “overarching theory” I mean a theory which is roughly as comprehensive as ACT-R in terms of breadth of brain regions and breadth of cognitive phenomena.
As someone who has also done grad school in cog-sci research (but in a computer science department, not a psychology department, so my knowledge is more AI focused), my impression is that most psychology research isn’t about such overarching theories. To be more precise:
There are cognitive architecture people, who work on overarching theories of cognition. However, ACT-R stands out amongst these as having extensive experimental validation. The rest have relatively minimal direct comparisons to human data, or none.
There are “bayesian brain” and other sorta overarching theories, but (to my limited knowledge!) these ideas don’t have such a fleshed-out computational model of the brain. EG, you might apply bayesian-brain ideas to create a model of (say) emotional processing, but it isn’t really part of one big model in quite the way ACT-R allows.
There’s a lot of more isolated work on specific subsystems of the brain, some of which is obviously going to be highly experimentally validated, but which just isn’t trying to be an overarching model at all.
So my claim is that ACT-R occupies a unique position in terms of (a) taking an experimental-psych approach, while (b) trying to provide a model of everything and how it fits together. Do you think I’m wrong about that?
I think it’s a bit like physics: outsiders hear about these big overarching theories (GUTs, TOEs, strings, …), and to an extent it makes sense for outsiders to focus on the big picture in that way. Working physicists, on the other hand, can work on all sorts of specialized things (the physics of crystal growth, say) without necessarily worrying about how it fits into the big picture. Not everyone works on the big-picture questions.
OTOH, I also feel like it’s unfortunate that more work isn’t integrated into overarching models.
This paper gives what I think is a much more contemporary overview of overarching theories of human cognition.
I’ve only skimmed it, but it seems to me more like a prospectus which speculates about building a totally new architecture (combining the strengths of deep learning with several handpicked ideas from psychology), naming specific challenges and possible routes forward for such a thing.
(Also, this is a small thing, but “fitting human reaction times” is not impressive—that’s a basic feature of many, many models.)
I said “down to reaction times” mostly because I think this gives readers a good sense of the level of detail, and because I know reaction times are something ACT-R puts effort into, rather than because I think reaction times are the big advantage ACT-R has over other models; but, in retrospect, this may have been misleading.
I guess it comes down to my AI-centric background. For example, GPT-3 is in some sense a very impressive model of human linguistic behavior; but, it makes absolutely no attempt to match human reaction times. It’s very rare for ML people to be interested in that sort of thing. This also relates to the internal design of ACT-R. An AI/ML programmer isn’t usually interested in purposefully slowing down operations to match human performance. So this would be one of the most alien things about the ACT-R codebase for a lot of people.
I would guess similarly. Personally, I’m not especially fond of PP, although that is a bigger discussion.
Hope it turns out to be interesting to you!
This lines up fairly well with how I’ve seen psychology people geek out over ACT-R. That is: I had a psychology professor who was enamored with the ability to line up programming stuff with neuroanatomy. (She didn’t use it in class or anything, she just talked about it like it was the most mind blowing stuff she ever saw as a research psychologist, since normally you just get these isolated little theories about specific things.)
And, yeah, important to view it as a programming language which can model a bunch of stuff, but requires fairly extensive user input to do so. One way I’ve seen this framed is that ACT-R lacks domain knowledge (since it is not in fact an adult human), so you can think of the programming as mostly being about hypothesizing what domain knowledge people invoke to solve a task.
The first of your two images looks broken in my browser.
I think that’s not quite fair. ACT-R has a lot to say about what kinds of processing are happening, as well. Although, for example, it does not have a theory of vision (to my limited understanding anyway), or of how the full motor control stack works, etc. So in that sense I think you are right.
What it does have more to say about is how the working memory associated with each modality works: how you process information in the various working memories, including various important cognitive mechanisms that you might not otherwise think about. In this sense, it’s not just about interconnection like you said.
We also know how to implement it today.
I would argue that inner alignment problems mean we do not know how to do this today. We know how to limit the planning horizon for parts of a system which are doing explicit planning, but this doesn’t bar other parts of the system from doing planning. For example, GPT-3 has a time horizon of effectively one token (it is only trying to predict one token at a time). However, it probably learns to internally plan ahead anyway, just because thinking about the rest of the current sentence (at least) is useful for thinking about the next token.
So, a big part of the challenge of creating myopic systems is making darn sure they’re as myopic as you think they are.
Imagine a spectrum of time horizons (and/or discounting rates), from very long to very short.
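To make the spectrum concrete (my own illustrative formalization, not something from the original discussion): think of the agent as maximizing a truncated, discounted return

$$\sum_{t=0}^{H} \gamma^{t} r_t, \qquad 0 < \gamma \le 1,$$

where shrinking the horizon $H$ (or the discount factor $\gamma$) moves the agent toward the myopic end of the spectrum.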
Now, if the agent is aligned, things are best with an infinite time horizon (or, really, the convergently-endorsed human discounting function; or if that’s not a well-defined thing, whatever theoretical object replaces it in a better alignment theory). As you reduce the time horizon, things get worse and worse: the AGI willingly destroys lots of resources for short-term prosperity.
At some point, this trend starts to turn itself around: the AGI becomes so shortsighted that it can’t be too destructive, and becomes relatively easy to control.
But where is the turnaround point? It depends hugely on the AGI’s capabilities. An uber-capable AI might be capable of doing a lot of damage within hours. Even setting the time horizon to seconds seems basically risky; do you want to bet everything on the assumption that such a shortsighted AI will do minimal damage and be easy to control?
This is why some people, such as Evan H, have been thinking about extreme forms of myopia, where the system is supposed to think only of doing the specific thing it was asked to do, with no thoughts of future consequences at all.
Now, there are (as I see it) two basic questions about this.
1. How do we make sure that the system is actually as limited as we think it is?
2. How do we use such a limited system to do anything useful?
Question #1 is incredibly difficult and I won’t try to address it here.
Question #2 is also challenging, but I’ll say some words.
As you scale down the time horizon (or scale up the temporal discounting, or do other similar things), you can also change the reward function. (Or utility function, or whatever the equivalent object is in your formalism.) We don’t want something that spasmodically tries to maximize the human fulfillment experienced in the next three seconds. We actually want something that approximates the behavior of a fully-aligned long-horizon AGI. We just want to decrease the time horizon to make it easier to trust, easier to control, etc.
The strawman version of this is: choose the reward function for the totally myopic system to approximate the value function which the long-time-horizon aligned AGI would have.
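In symbols (my own gloss, using standard RL notation that isn’t in the original post): set the myopic system’s reward to

$$r_{\text{myopic}}(s, a) = Q^{*}_{\text{aligned}}(s, a),$$

so that greedily maximizing $r_{\text{myopic}}$ one step at a time reproduces exactly what the long-horizon aligned agent would do.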
If you do this perfectly right, you get 100% outer-aligned AI. But that’s only because you get a system that’s 100% equivalent to the not-at-all-myopic aligned AI system we started with. This certainly doesn’t help us build safe systems; it’s only aligned by hypothesis.
Where things get interesting is if we approximate that value function in a way we trust. An AGI RL system with a supposedly aligned reward function calculates its value function by looking far into the future and coming up with plans to maximize reward. But we might not trust all the steps in this process enough to trust the result. For example, we think small mistakes in the reward function tend to be amplified into large errors in the value function.
In contrast, we might approximate the value function by having humans look at possible actions and assign values to them. You can think of this as deontological: kicking puppies looks bad, curing cancer looks good. You can try to use machine learning to fit these human judgement patterns. This is the basic idea of approval-directed agents. Hopefully, this creates a myopic system which is incapable of treacherous turns, because it just tries to do what is “good” in the moment rather than doing any planning ahead. (One complication with this is inner alignment problems. It’s very plausible that to imitate human judgements, a system has to learn to plan ahead internally. But then you’re back to trying to outsmart a system that can possibly plan ahead of you; IE, you’ve lost the myopia.)
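As a minimal sketch of that myopic, approval-directed policy (the names `approval_fn` and `candidate_actions` are hypothetical stand-ins for a model fit to human judgements and an action enumerator; this is not a reference implementation):

```python
def myopic_act(state, candidate_actions, approval_fn):
    """Pick the single action the (learned) approval function rates highest right now,
    with no lookahead over future consequences."""
    return max(candidate_actions(state), key=lambda action: approval_fn(state, action))
```

The point of the design is that nothing in this outer loop ever evaluates a multi-step plan; any planning that sneaks in would have to come from inside the learned `approval_fn`, which is exactly the inner alignment worry mentioned above.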
There may also be many other ways to try to approximate the value function in more trustable ways.
I’m not actually sure what to call the practice of attributing rational agency to things for the sake of modeling convenience. I’ve called it “rational choice theory” in my edit. Zach Davis classifies it as a generalized anti-zombie principle, or “algorithmic intent”. But this isn’t quite right either.
Clearly it’s a form of the “intentional stance”, but I think mistake theory also uses an intentional stance; just one where agents are allowed to make mistakes. I can certainly see an argument for viewing mistake-theory as taking less of an intentional stance, ie, viewing everything more based on cause-and-effect rather than agency. But I don’t think we want “intentional stance” to imply a theory of mind where no one ever makes mistakes.
But the anti-mistake theory is clearly of use in many domains. Evolution is going to produce near-optimal solutions for a lot of problems. The economy is going to produce near-optimal solutions for a lot of problems. Many psychological phenomena are going to be well-predicted by assuming humans are solving specific problems near-optimally. Many biological phenomena are probably predictable in this way as well. Plus, assuming rationality like this can really simplify and clarify the modeling in many cases—particularly if you’re happy with a toy model.
So I think we want a name for it. And “rational choice theory” is not very good, because it sounds like it might be describing the theory of rational agents (ie, decision theory), rather than the practice of modeling a lot of things as rational agents.
Anyway, clearly rational choice theory (or whatever we call it) is absolutely against mistake theory, on the face of it. But the thing is, many mistake theorists also use it. In the SSC post about conflict vs mistake, mistake theorists are supposedly the people interested in mechanism design, economics, and nuanced arguments about the consequences of actions. I see this as a big contradiction in the conflict theory vs mistake theory dichotomy as described there.
I liked this article. It presents a novel view on mistake theory vs conflict theory, and a novel view on bargaining.
However, I found the definitions and arguments a bit confusing/inadequate.
“Let’s agree to maximize surplus. Once we agree to that, we can talk about allocation.”
“Let’s agree on an allocation. Once we do that, we can talk about maximizing surplus.”
The wording of the options was quite confusing to me, because it’s not immediately clear what “doing something first” and “doing some other thing second” really means.
For example, the original Nash bargaining game works like this:
First, everyone simultaneously names their threats. This determines the BATNA (best alternative to negotiated agreement), usually drawn as the origin of the diagram. (Your story assumes a fixed origin is given, in order to make “allocation” and “surplus” well-defined. So you are not addressing this step in the bargaining process. This is a common simplification; EG, the Nash bargaining solution also does not address how the BATNA is chosen.)
Second, everyone simultaneously makes demands, IE they state what minimal utility they want in order to accept a deal.
If everyone’s demands are mutually compatible, everyone gets the utility they demanded (and no more). Otherwise, negotiations break down and everyone plays their BATNA instead.
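A toy sketch of that demand stage for two players splitting a fixed surplus (the numbers and function names are just for illustration):

```python
def demand_game(demand_1, demand_2, surplus=1.0, batna=(0.0, 0.0)):
    """If the demands are jointly feasible, each player gets exactly what they demanded;
    otherwise negotiations break down and both players get their BATNA payoff."""
    if demand_1 + demand_2 <= surplus:
        return (demand_1, demand_2)
    return batna

print(demand_game(0.6, 0.4))  # compatible demands: (0.6, 0.4)
print(demand_game(0.7, 0.5))  # incompatible demands: fall back to the BATNA (0.0, 0.0)
```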
In the sequential game, threats are “first” and demands are “second”. However, because of backward-induction, this means people usually solve the game by solving the demand strategies first and then selecting threats. Once you know how people are going to make demands (once threats are visible), then you know how to strategize about the first step of play.
And, in fact, analysis of the strategy for the Nash bargaining game has focused on the demands step, almost to the exclusion of the threats step.
So, if we represent bargaining as any sequential game (be it the Nash game above, or some other), then order of play is always the opposite of the order in which we think about things.
So when you say:
I came up with two very different interpretations:
1. Let’s arrange our bargaining rules so that we select surplus quantity first, and then select allocation after that. This way of setting up the rules actually focuses our attention first on how we would choose allocation, given different surplus choices (if we reason about the game by backwards induction), therefore focusing our decision-making on allocation, and making our choice of surplus a more trivial consequence of how we reason about allocation strategies.
2. Let’s arrange our bargaining rules so that we select allocation first, and only after that, decide on surplus. This way of setting up the rules focuses on maximizing surplus, because hopefully no matter which allocation we choose, we will then be able to agree to maximize surplus. (This is true so long as everyone reasons by backward-induction rather than using UDT.)
The text following these definitions seemed to assume the definitions were already clear, so, didn’t provide any immediate help clearing up the intended definitions. I had to get all the way to the end of the article to see the overall argument and then think about which you meant.
Your argument seems mostly consistent with “mistake theory = allocation first”, focusing negotiations on good surplus, possibly at the expense of allocation. However, you also say the following, which suggests the exact opposite:
It also makes mistake theory seem unsavory: Apparently mistake theory is about postponing the allocation negotiation until you’re in a comfortable negotiating position. (Or, somewhat better: It’s about tricking the other players into cooperating before they can extract concessions from you.)
In the end, I settled on a yet-different interpretation of your definition. A mistake theorist believes: maximizing surplus is the more important of the two concerns. Determining allocation is of secondary importance. And a conflict theorist believes the opposite.
This makes you most straightforwardly correct about what mistake theorists and conflict theorists want. Mistake theorists focus on the common good. Conflict theorists focus on the relative size of their cut of the pie.
A quick implication of my definition is that you’ll tend to be a conflict theorist if you think the space of possible outcomes is all relatively close to the Pareto frontier. (IE, if you think the game is close to a zero-sum one.) You’ll be a mistake theorist if you think there is a wide variation in how close to the Pareto frontier different solutions are. (EG, if you think there’s a lot to be gained, or a lot to lose, for everyone.)
On my theory, mistake theorists will be happy to discuss allocations first, because this virtually guarantees that afterward everyone will agree on the maximum surplus for the chosen allocation. The unsavory mistake theorists you describe are either making a mistake, or being devious (and therefore, sound like secret conflict theorists, tho really it’s not a black and white thing).
On the other hand, your housing example is one where there’s first a precommitment about allocation, but the picture for agreeing on a high surplus afterward doesn’t seem so good.
I think this is partly because the backward-induction assumption isn’t a very good one for humans, who use UDT-like obstinance at times. It’s also worth mentioning that choosing between “surplus first” and “allocation first” bargaining isn’t a very rich set of choices. Realistically there can be a lot more going on, such that I guess mistake theorists can end up preferring to try to agree on pareto-efficiency first or trying to sort out allocations first, depending on the complexities of the situation.
These ordering issues seem very confusing to think about, and it seems better to focus on perceived relative importance of allocation vs surplus, instead.
If we use correlated equilibria as our solution concept rather than Nash, convexity is always guaranteed. Also, this is usually the more realistic assumption for modeling purposes. Nash equilibria oddly assume certainty about which equilibrium a game will be in, even as players are trying to reason about how to approach the game. So the concept is really only applicable to cases where players know what equilibrium they are in, EG because there’s a long history and the situation has equilibrated.
But even in such situations, there is greater reason to expect things to equilibrate to a correlated equilibrium than to a Nash equilibrium. This is partly because there are usually a lot of signals from the environment that can potentially be used as correlated randomness—for example, the weather. Also, convergence theorems for learning correlated equilibria are just better than those for Nash.
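For illustration (a toy example of my own, not from the post): a public random signal like the weather can implement a correlated equilibrium that mixes between pure Nash outcomes, which is what guarantees the convexity mentioned above.

```python
# A 2x2 coordination game ("Battle of the Sexes"-style payoffs) where both players
# condition on a shared, publicly observable random signal.
import random

PAYOFFS = {            # (row_action, col_action) -> (row_payoff, col_payoff)
    ("A", "A"): (2, 1),
    ("B", "B"): (1, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}

def play_with_signal():
    signal = random.choice(["sunny", "rainy"])   # shared correlated randomness
    action = "A" if signal == "sunny" else "B"   # both players follow the same rule
    return PAYOFFS[(action, action)]

# Given the signal, neither player gains by deviating, so following it is an
# equilibrium; average payoffs come out near (1.5, 1.5), a convex combination
# of the two pure Nash outcomes.
print(sum(play_with_signal()[0] for _ in range(10000)) / 10000)
```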
Still, your comment about mistake theorists believing in a convex boundary is interesting. It might also be that conflict theorists tend to believe that most feasible solutions are in fact close to Pareto-efficient (for example, they believe that any apparent “mistake” is actually benefiting someone). Mistake theorists won’t believe this, obviously, because they believe there is room for improvement (mistakes to be avoided). However, mistake theorists may additionally believe in large downsides to conflict (ie, some very, very not-Pareto-efficient solutions, which it is important to avoid). This would further motivate the importance of agreeing to stick to the Pareto frontier, rather than worrying about allocation.
It’s just a little bit stunning to look back and think we’re experiencing the beginning of a century—the sort of century historians talk about, punctuated by discrete events (such as covid). This should not be surprising, of course, but I think there’s a little part of me that was still thinking of things from the perspective of the early 00s, wherein it’s only possible to say anything about the shape of this century in the form of predictions. But now we’ve been through the oughts, and the teens, and we’re already partway through the twenties.
I’ve been thinking lately about different sub-selves at different time-scales. This includes experimenting with taking on very different perspectives. Your description of the quest for enlightenment as specifically the quest to slay Moloch resonates with many of the experiences I’ve been having.
A long-latent self will wake up (meaning, specifically, a pattern of activations which I rarely use, and which therefore hasn’t been updated recently). It will look around and typically find its surroundings pretty alien. The old patterns will often lash out with some kind of big all-encompassing criticism of my current life, for example, that I’ve gotten unacceptably old and adult-like (but this varies widely; I’m giving a flavor). Another self might wake up to defend things, EG an adult persona. I might identify with one or the other more strongly, but in any case, some mental strife might ensue.
I have learned to enjoy this, because it gives me an opportunity to “fold in” more of myself. I have found that most of my mental states, however harshly they condemn each other, can agree to let explicit reason be the judge. For example, I explicitly think that it makes sense for an adult to be more adult-like than at earlier points in life. This isn’t a deep judgement about whether specific things are OK or not OK, but it is a workable provisional judgement. Meanwhile, it also makes sense to examine specific self-criticisms in more detail, to see whether they hold the seeds of improvement.
Examining my own thought-processes in this way, I have come to believe that lots and lots of mental habits use adversarial strategies. For example, when a person doesn’t want to hear about bad consequences of their plan (eg, doesn’t want to think about how the junk food is bad for them), I think there’s usually an adversarial plan-protection strategy being employed.
Why would we be so adversarial toward ourselves and others?
I think the answer is basically “your thoughts grew up in a bad neighborhood” metaphorically (and also literally in many cases, since your family and friends and teachers all use adversarial thinking strategies too, which are often out-to-get-you).
For example, plan-protecting strategies are not employed out of some kind of self-malice. It’s very reasonable: you have a plan which you currently think is good; you know that you could abandon that plan if you lose your belief in it; therefore, protect that belief.
This reasoning only makes sense if other thoughts might adversarially cut down your plans, but hey, “who doesn’t have doubts? Everyone struggles to avoid these negative cycles sometimes.”
In other words, your thoughtscape is a low-trust environment, or has been at times in the past. This leads to low-trust strategies. But low-trust strategies help to propagate the low-trust environment.
And of course, all of this happens between people as well, and the two levels interact with each other a lot.
So the approach I am taking is to try and set up a halfway reasonable internal conflict resolution system. Any feeling of mental struggle (hopefully) catches the attention of explicit reason, which then attempts to resolve it. I don’t think Explicit Reason is the only possible choice; different versions of this kind of practice could select different reasoning modes. What is important is that the chosen reasoning mode be (1) relatively well-trusted across your possible mental states, so that you’ll still happily turn to it when you’re in a pretty weird state; (2) readily accessible from a wide variety of mental states, so that you can turn to it; (3) reliable/stable in the sense of usually arriving at the same answers when asking the same question; (4) finally, actually decent at coming up with strategies to solve problems.
Obviously similar to the buddhist idea of using suffering and unsatisfactoriness as the springboard for progress. Strife practically means there is a better way which you are close to learning. This gives me some confidence in my own mental stability, as well, since I have a planned response to feelings of spiraling out of control, which has actual positive associations (and positive feelings can themselves help). Although I can’t really say whether this protects me from panic attacks or other extreme states in practice (I have only ever had one real panic attack, and it was relatively mild).
In sociopolitical terms, a court system is better than a feuding-family system; and every trial adds to the existing body of precedent, saving computational work on future decisions.
From my limited knowledge, this approach seems a bit outside of buddhist practice. I was wondering if you’d have any comments about it. For example, I’ve accused buddhists of noticing that adversarial plan-protecting strategies are rampant, and as a response, blaming the planning faculties themselves (hence the idea that you should have no goals in meditation, the idea that desire is the root of suffering, the goal of extinguishing desire, etc). I would instead blame their “poor upbringing” and try to “teach my planning mechanisms some manners” (ie, get them to stop holding the knife out front all the time, then get them to stop holding the knife behind their backs all the time, then get them to set down the knife entirely on occasion...)
I also would have expected you to agree w/ my above comment when you originally wrote the post; I just happened to see tristanm’s old comment and replied.
However, now I’m interested in hearing about what ideas from this post you don’t endorse!