Yep, that seems like a correct nuance to add. I meant “predict” in a functional sense, rather than in a thought-based one, but that wasn’t at all clear. I appreciate you adding this correction.
You might have gone too far with speculation—your theory can be tested.
I think that’s good, isn’t it? :-D
If your model were true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems.
Maybe…? I think it’s more complicated than I read this as implying. But yes, I expect the abilities to learn to be somewhat correlated, even if the actualized skills aren’t.
Part of the challenge is that math reasoning seems to co-opt parts of the mind that normally get used for other things. So instead of mentally rehearsing a physical movement in a way that’s connected to how your body can actually move and feel, the mind mentally rehearses the behavior (!) of some abstract mathematical object in ways that don’t necessarily map onto anything your physical body can do.
I suspect that closeness to physical doability is one of the main differences between “pure” mathematical thinking and engineering-style thinking, especially engineering that’s involved with physical materials (e.g., mechanical, electrical — as opposed to software). And yes, this is testable, because it suggests that engineers will tend to have developed more physical coordination than mathematicians relative to their starting points. (This is still tricky to test, because people aren’t randomly sorted into mathematicians vs. engineers, so their starting abilities with learning physical coordination might be different. But if we can figure out a way to test this claim, I’d be delighted to look at what the truth has to say about this!)
I mostly agree. I had, like, four major topics like this that I was tempted to cram into this essay. I decided to keep it to one message and leave things like this for later.
But yes, totally, nearly everything we actually care about comes from the social mind doing its thing.
I disagree about curiosity though. I think that cuts across the two minds. “Oh, huh, I wonder what would happen if I connected this wire to that glowing thing….”
Yes, most pleasures grab your wanting. I’m suggesting that you actually enjoy collecting arbitrary achievements; there is no “hijacking” about it. And I don’t understand why collecting arbitrary achievements needs to be meaningful, while delicious food is allowed to be meaningless.
Okay, seriously? You want to play this game?
I get that status here comes in part from good arguments. It’s a fine metric for truth-seeking. But it isn’t the same as truth-seeking, and it Goodharts into disagreement-hunting even where the disagreements don’t matter.
I’m trying to point at a simple observation: some things grab your wanting directly and yank you off-course. Seems like a good idea to notice when that happens. That’s all.
I’m not saying that one shouldn’t ever let those want-grabbers do their thing. Maybe you can’t tell I wasn’t saying that; communication is hard. But if you think I am saying that… then can’t you just notice that that’s stupid, mention it, and highlight the point I should have made instead?
So… I mean, really, you seriously think you’re meaningfully refuting my points by saying I enjoy achievements and therefore there’s no hijacking? Seriously? Seriously?
I mean, I think your next norm-driven move is to say “Yes, seriously.” And then do some kind of weird philosophical thing that, I don’t know, makes it sound like I’m arguing that some wants are good and others are bad, and then knock down that strawman. Or something.
But… come on! Really?
Can we just… not fence for status?
I don’t understand why collecting arbitrary achievements needs to be meaningful, while delicious food is allowed to be meaningless.
I never said anything about food. Or about what needs to be meaningful. Just that there are want-grabbers that are meaningfulness-symmetric: they grab whether or not the thing is meaningful to you.
I don’t usually think of good food as lotus-like. Like, here are some pleasurable non-lotuses (for me):
Walks in nature.
Kissing someone I’m dating.
Breaking a fast with good food.
Doing an acrobatic flip.
I basically never find these yanking me away from what I’m doing. I just like them. Sometimes I want to do some of them more than I do, and it’s hard to make myself. Very not lotus-ish.
Sometimes I don’t do these things because I’m busy, I don’t know, getting sucked into collecting achievements in some game that leaves Tetris effects in my brain.
I mean, if I want to do that, then that seems cool.
Seems bad not to even notice that’s happening though. Then Facebook gets to program my wants however it chooses to.
I worry that the more important distinction between collecting achievements and eating food is that the former is a low-status activity.
I don’t think of it as low-status. FWIW.
I don’t think there is any objective measure to tell what desire is ok and what is a compulsion. I think, similarly to the word “disease”, a desire is “compulsive” only if you think it causes problems for you.
Uh… then I’m not sure what your point is. You said:
“My point is that there is nothing inherently wrong with arbitrary pleasures that don’t improve your life. The problem is when you develop compulsions. There seems to be a difference between simple desire and compulsive desire.”
So… if I take you literally, I think you just said that the only problem is when you develop a desire that causes you a problem.
Like, I don’t think that’s actually what you mean. I’m strawmanning your words to point out that I think I haven’t understood your real message.
Help me understand?
Valentine apparently enjoys collecting arbitrary achievements.
“Enjoy” is too simple to describe what’s true here. I find myself motivated to collect them. When I get another one, I get an “I’m getting closer!” feeling. Getting them all gives me a few moments of satisfaction, sort of like having carefully organized a silverware drawer might.
And I agree, there’s nothing wrong with that per se.
I just don’t want that process to hijack my effort to learn French.
[…] it seems that [Valentine is] feeling some guilt about it.
Uh, no. I don’t know where you got that impression. I don’t feel guilty about eating lotus. I just want to notice when I am, because apparently I can be fed lotus without my asking for it. If I don’t notice, then others get to decide what my goals are, even accidentally. I don’t like that.
Yeah. I think giving up on things that are appealing doesn’t work. That’s why the title is about noticing the taste of lotus, rather than about noticing lotuses. We have to use proxy goals. The trick is noticing when we’re getting Goodharted.
Oh. You’re asking how noticing lotus flavor plays out in domains that make people addicted to insights?
I don’t know. Seems like it’d work the same as in any other domain. Either you notice and have some choice about whether to get sucked in, or you don’t.
Is delicious food also a lotus?
I think it sort of misses the point to worry about what is or isn’t a lotus. The point is to notice what grabs your wanting, and how that affects you later.
Clearly, it doesn’t make your life better after you’ve eaten it, and that seems to be the criterion you use.
Not what I meant to convey. A lotus is something that grabs your wanting directly. When it’s designed by someone else, it usually doesn’t quite fit what’s meaningful to you. Then it’s pretty common to find yourself doing whatever it is a lot, and not benefitting much from it, and not caring about that fact.
My point is that there is nothing inherently wrong with arbitrary pleasures that don’t improve your life.
The problem is when you develop compulsions. There seems to be a difference between simple desire and compulsive desire.
I don’t know what a “compulsion” is. I mean, I know the word. But I don’t really know what it is.
The problem I care about here is that things can hijack what you care about, and the method they use doesn’t correlate much with the value delivered. Seems like something worth noticing when it’s happening.
Maybe you mean the same thing. I just don’t know what I’d use to sort out “simple desire” from “compulsive desire”, so to me right now they’re just words.
Um… what? Can you say more words?
Yep. It varies by lotus too. What counts as a lotus, and how strongly, seems to depend on whom we’re talking about.
And clearly there are trends. Otherwise Facebook wouldn’t have its business model.
There’s an awesome fictional metaphor for this that’s really off-color. The online sex humor comic Oglaf has a two-page bit where the poor teased apprentice ends up so very much wanting a pinecone that he does some NSFW things he clearly would rather not have to do. I’ll make you do a bit of work to find it, though, so you can only blame yourself if you don’t like what you see there: oglaf dot com slash pinecone
I’d be happy to.
…though after reflecting on it and starting a few drafts of a comment here, I’m starting to wonder if I should instead spell it out in more detail in its own post.
The gist of it is that every framework thinks every other framework is seriously missing the point in some way. If you can nail down X’s critique of Y and Y’s critique of X, and both critiques are made of Gears, you can use those critiques to emphasize a boundary between them and to intentionally switch between them.
In practice, we usually want to switch between a kind of science-based frame and a new hypothetical one we want to test out. When the science frame and the new to-be-sandboxed frame both have allergic reactions to each other, they’re never going to mix, and there’s no risk of the “Aha, consciousness collapses quantum probability waves!” type error. You can then leverage each frame’s critique of the other to switch between them, or to verify which one you’re in.
After that you can set up some TAPs (trigger-action plans) to create mental warning bells whenever you enter one, or to remember to verify which one you’re in if you want to double-check before doing a given kind of reasoning or making a given kind of decision.
In practice I find this makes each mode clearer and more internally consistent, in part by exposing and removing internal inconsistencies. E.g., in the “consciousness collapses quantum probability waves” thing, you can actually find the logical point where “consciousness first” and quantum mechanics slam into one another, at which point you need to separate them more fully. Then it becomes more obvious that the “consciousness first” paradigm doesn’t allow us to start from the frame of an objective reality of which there is subjective experience. This lets you keep your sanity in quantum mechanics even when sometimes trying on the “consciousness first” paradigm, because the two basically can’t coexist in the same effort to explain a given phenomenon.
The only thing I know of that breaks these sandboxes is if you find a Gears-based link between the two. But if you actually find a Gears-based link between the science frame and a new frame, then what you have is a scientific hypothesis. At that point you can test it empirically.
Unless and until you find such a Gears-based link, though, the science frame will find it correct to view those other frames as possibly or definitely wrong or misguided in some way. Hence the preemptive naming of such frameworks as “fake”: it acts as a reminder to come back to your home ontology and to keep it from being corrupted by these other ones you’re playing with.
Alright, I think I now understand much better what you mean, thank you.
[…] these immune responses are there for a reason.
Of course. As with all other systems.
Specifically in the case of Looking, what rings my alarm bells is not so much the “this-ness” etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).
This has been said many times already, but I’ll reiterate it here: I was not trying to claim that Looking is beyond rational explanation.
My impression from the “phone” allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don’t want to acknowledge it because it would disrupt some self-deception.
People don’t need to already know it in order for this dynamic to play out. All that’s required is that the person have some kind of idea of what type of impact it’ll have on their mental architecture — and that “some kind of idea” needn’t be accurate.
This gets badly exacerbated if the concept is hard to understand. See, e.g., “consciousness collapses quantum uncertainty” type beliefs. Such a belief does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.
But to illustrate in a little more detail how this might make Looking more difficult to understand, here’s a slightly fictionalized exchange I’ve had with many, many people:
Them: “Give me an example of Looking.”
Me: “Okay. If you Look at your hand, you can separate the interpretation of ‘hand’ and ‘blood flow’ and all that, and just directly experience the this-ness of what’s there…”
Them: “That sounds like woo.”
Me: “I’m not sure what you mean by ‘woo’ here. I’m inviting you to pay attention to something that’s already present in your experience.”
Them: “Nope, I don’t believe you. You’re trying to sell me snake oil.”
After a few months of exploring this, I gathered that the problem was that Looking didn’t have a conceptual place to land in their framework that didn’t set off “mystical woo” alarm bells. Suddenly I’m talking to their epistemic immunization maximizer, which has some sense that whatever “Looking” is might affect its epistemic methods and therefore is Bad™. Everything from that point forward in the conversation just plays out that subsystem’s need to justify its predetermined rejection of attempts to understand what I’m saying.
Certainly not everyone does this particular one. I’m just offering one specific example of a type.
Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning.
I’m not disagreeing with that. I’m saying that:
It’s pretty normal to miss the confusion in this case.
Looking isn’t reasoning.
The reason the paperclip maximizer won’t listen is that it doesn’t care, not that it doesn’t understand what you’re saying. So this allegory would only make sense if some parts of our mind don’t care about the benefits of Looking while other parts do care. It still shouldn’t be an impediment to understanding what Looking is.
…unless it suspects that understanding what Looking is might make it less effective at maximizing paperclips.
But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/… that it doesn’t (and maybe can’t) cover.
Doesn’t: yes, for sure.
Can’t: mmm, maybe? I expect that by the end of the sequence I’m writing, we’ll return to Kaj’s interpretation of Looking and basically just use it as a given — but it’ll mean something slightly different. Right now, I expect that if we just assume Kaj’s interpretation, we’re going to encounter a logjam when we apply Looking to the favored LW ontology, and the social web will have a kind of allergic reaction to the logjam that prevents collective understanding of where it came from. Once we collectively understand the structure of that whole process, we can smash face-first into the logjam, notice the confusion that results, and then make some meaningful progress on bringing our epistemic methods up to the task of tackling serious meta-ontological challenges. At that point I think it’ll be just fine to say “Yep, we can think of Looking as compatible with the standard LW ontology.” Just not before.
Meta: Okay, I’m super confused what just happened. The webpage refreshed before I submitted my reply and from what I could tell just erased it. Then I wrote this one, submitted it, and the one I had thought was erased appeared as though I’d posted it.
(And also, I can’t erase either one…?)
I have largely lost hope, though, that any of the Enlightened will seriously attempt to explain how, rather than just continuing to tell us Unenlightened folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding.
I really am trying. When I talk about paperclip-maximizer-like subagents or ontological self-reference, it’s not my intent to say “You can’t understand because of XYZ.” I’m trying to say something more like, “I’d like you to notice the structure of XYZ and how it interferes with understanding, so that you notice and understand XYZ’s influence while we talk about the thing.”
Right now there’s too large an inferential gap for me to answer the “how” question directly, and I can see specific ways in which my trying will just generate confusion, because of XYZs. But I really am trying to get there. It’s just going to take me a little while.
One specific possibility relevant to those footnotes is worth being explicit about: it could be that the Enlightened have genuine insights that they have gained through their Enlightenment — but that some of the Unenlightened have some of the same insights too, and it’s difficult to recognize that one insight is the same as the other.