That does seem likely.
There’s a complication where sometimes it’s very difficult to get people not to interpret things as an instruction. “Confuse them” seems to work, I guess, but it does have drawbacks too.
I don’t really have a good idea of the principles, here. Personally, whenever I’ve made a big difference in a person’s life (and it’s been obvious to me that I’ve done so), I try to take care of them as much as I can and make sure they’re okay.
...However, I have run into a couple of issues with this. Sometimes someone or something takes too much energy, and some distance is healthier. I don’t know how to judge this other than by intuition, but I think I’ve gone too far before?
And I have no idea how much this can scale. I think I’ve had far bigger impacts than I’ve intended, in some cases. One time I had a friend who was really in trouble and I had to go to pretty substantial lengths to get them to a better place, and I’m not sure all versions of them would’ve endorsed that, even if they do now.
...But, broadly, “do what you can to empower other people to make their own decisions, when you can, instead of trying to tell them what to do” does seem like a good principle, especially for the people who have more power in a given situation? I definitely haven’t treated this as an absolute rule, but in most cases I’m pretty careful not to stray from it.
I don’t really think money is the only plausible explanation, here?
I think the game is sufficiently difficult.
I read this post several years ago, but I was… basically just trapped in a “finishing high school and then college” narrative at the time, it didn’t really seem like I could use this idea to actually make any changes in my life… And then a few months ago, as I was finishing up my last semester of college, I sort of fell head first into Mythic Mode without understanding what I was doing very much at all.
And I’d say it made a lot of things better, definitely—the old narrative was a terrible one for me—but it was rocky in some ways, and… like, obviously thoughts like “confirmation bias” etc. were occurring to me, but “there are biases involved here” doesn’t really, in and of itself, tell you what to do?
It would make sense if there’s some extent to which everyone who spent the first part of their life following along with a simple “go to school and then get a job i guess” script is going to have a substantial adjustment period once they start having some more interesting life experiences, but… also seems plausible that if I’d read a lot more about this sort of thing I’d’ve been better equipped.
To have a go at it:
Some people try to implement a decision-making strategy that’s like, “I should focus mostly on System 1” or “I should focus mostly on System 2.” But this isn’t really the point. The goal is to develop an ability to judge which scenarios call for which types of mental activities, and to be able to combine System 1 and System 2 together fluidly as needed.
Thank you.
I, similarly, am pretty sure I had a lot of conformist-ish biases that prevented me from seriously considering lines of argument like this one.
Like, I’m certainly not entirely sure how strong this (and related) reasoning is, but it’s definitely something one ought to seriously think about.
This post definitely resolved some confusions for me. There are still a whole lot of philosophical issues, but it’s very nice to have a clearer model of what’s going on with the initial naïve conception of value.
I do actually think my practice of rationality benefited from spending some time seriously grappling with the possibility that everything I knew was wrong. Like, yeah, I did quickly reaccept many things, but it was still a helpful exercise.
This feels more like an argument that Wentworth’s model is low-resolution than that he’s actually misidentified where the disagreement is?
Huh. I… think I kind of do care terminally? Or maybe I’m just having a really hard time imagining what it would be like to be terrible at predicting sensory input without this having a bunch of negative consequences.
you totally care about predicting sensory inputs accurately! maybe mostly instrumentally, but you definitely do? like, what, would it just not bother you at all if you started hallucinating all the time?
Probably many people who are into Eastern spiritual woo would make that claim. Mostly, I expect such woo-folk would be confused about what “pointing to a concept” normally is and how it’s supposed to work: the fact that the internal concept of a dog consists of mostly nonlinguistic stuff does not mean that the word “dog” fails to point at it.
On my model, koans and the like are trying to encourage a particular type of realization or insight. I’m not sure whether the act of grokking an insight counts as a “concept”, but it can be hard to clearly describe an insight in a way that actually causes it? But that’s mostly a deficiency in vocab plus the fact that you’re trying to explain a (particular instance of a) thing to someone who has never witnessed it.
Robin Hanson has written about organizational rot: the breakdown of modularity within an organization, in a way which makes it increasingly dysfunctional. But this is exactly what coalitional agency induces, by getting many different subagents to weigh in on each decision.
I speculate (loosely based on introspective techniques and models of human subagents) that the issue isn’t exactly the lack of modularity: when modularity breaks down over time, this leads to subagents competing to find better ways to work around the modularity, and creates more zero-sum-ish dynamics. (Or maybe it’s more that techniques for working around modularity can produce an inaction bias?) But if you intentionally allow subagents to weigh in, they may be more able to negotiate and come up with productive compromises.
I think I have a much easier time imagining a 3D volume if I’m imagining, like, a structure I can walk through? Like I’m still not getting the inside of any objects per se, but… like, a complicated structure made out of thin surfaces that have holes in them or something is doable?
Basically, I can handle 3D, but I won’t by default get all the 3D-ish details right unless I meaningfully interact with the full volume of the object.
This does necessitate that the experts actually have the ability to tell when an argument is bad.
All the smart trans girls I know were also smart prior to HRT.
I think there is rather a lot of soap to be found… but it’s very much not something you can find by taking official doctrine as an actual authority.