This all sounds true (and I meant it to be implied by the post, although I didn't delve into every possible "improved algorithm", and perhaps could have picked a better example).
What it seemed like to me was that gears/model-based thinking is still implemented on babble, not just for the lower-level steps but also for the higher-level systematic strategy. (I do think this involves first building some middle-order thought processes on top of the babble, and then building the high-level strategy out of those pieces.)
I.e., when I use gears-based systematic planning, the pieces of the plan still feel like they're connected via the same underlying associative babbling. It's just that I'd have a lot of tight associations between collections of strategies, like:
Notice I’m dealing with a complex problem
Complex problem associates into “use the appropriate high level strategy for this problem” (which might involve first checking possible strategies, or might involve leaping directly to the correct strategy)
Once I have a gears-oriented strategy, it'll usually have a step one, then a step two, etc. (maybe looping around recursively, or with branching paths), and each step is closely associated with the previous step.
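Purely as a toy analogy (the trigger and step names here are invented), that chain of tight associations could be sketched as a lookup that either fires a stored strategy or falls back to free babbling:

```python
# Toy analogy of the association chain above (all names invented for illustration):
# a trigger ("complex problem") associates into a high-level strategy, and the
# steps of that strategy chain one to the next in order.

strategy_for = {
    "complex problem": ["pick high-level strategy", "step one", "step two"],
}

def follow_associations(trigger):
    # When no tight association fires, fall back to free-form babble.
    return strategy_for.get(trigger, ["babble freely"])

print(follow_associations("complex problem"))
print(follow_associations("unfamiliar situation"))
```

The point of the sketch is just the shape: the high-level "systematic" behavior is still one association firing after another, not a separate mechanism.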
Does it feel different to you?
When you do the various techniques you describe above, what do the qualia and low-level execution of them feel like?
I do think it’s all ultimately implemented on top of something babbly, yeah. The babbly part seems like the machine code of the brain—ultimately everything has to be implemented in that.
I think what I mean by “gearsy reasoning” is something different than how you’re using the phrase. It sounds like you’re using it as a synonym for systematic or system-2 reasoning, whereas I see gears as more specifically about decomposing systems into their parts. Gearsy reasoning doesn’t need to look very systematic, and systematic reasoning doesn’t need to be gearsy—e.g. simply breaking things into steps is not gearsy reasoning in itself. So the specific “tight associations” you list do not sound like the things I associate with gearsy thinking specifically.
As an example, let’s say I’m playing a complicated board game and figuring out how to get maximum value out of my resources. The thought process would be something like:
Ok, main things I want are X, Y, Z → what resources do I need for all that?
(add it up)
I have excess A and B but not enough C → can I get more C?
I have like half a dozen ways of getting more C, it’s basically interchangeable with B at a rate of 2-to-1 → do I have enough B for that?
...
So that does look like associative babbling; the “associations” it’s following are mainly the relationships between objects given by the game actions, plus the general habit of checking what’s needed (i.e. the constraints) and what’s available.
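That checking loop could be sketched as a toy constraint check. The goals, their costs, the starting resources, and the 2-to-1 exchange rate below are all invented for illustration:

```python
# Toy sketch of the resource-checking loop above (all numbers hypothetical).
# Goals X, Y, Z each cost some mix of resources A, B, C; B converts to C at 2:1.

costs = {"X": {"A": 2, "C": 1}, "Y": {"B": 1, "C": 2}, "Z": {"A": 1, "B": 1}}
have = {"A": 5, "B": 6, "C": 1}

# Step 1: add up what the goals need.
need = {}
for goal, cost in costs.items():
    for resource, amount in cost.items():
        need[resource] = need.get(resource, 0) + amount

# Step 2: find the shortfall in C, then check whether converting excess B covers it.
short_c = max(0, need.get("C", 0) - have["C"])   # not enough C?
spare_b = have["B"] - need.get("B", 0)           # excess B after B's own uses
convertible = spare_b // 2                       # B -> C at a rate of 2-to-1
print(short_c, convertible, convertible >= short_c)
```

Each arrow in the list above corresponds to one of these checks; the "associations" being followed are the game's own cost and conversion relationships.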
I guess one insight from this: when engaged in gears-thinking, it feels like the associations are more a feature of the territory than of my brain. It’s not about taste, it’s about following the structure of reality (or at least that’s how it feels).
I think what I mean by “gearsy reasoning” is something different than how you’re using the phrase. It sounds like you’re using it as a synonym for systematic or system-2 reasoning, whereas I see gears as more specifically about decomposing systems into their parts.
Yeah. My reply was somewhat general and would work for non-gearsy strategies as well. I do get that gearsiness and systematicness are different axes and strategies can employ them independently. I was referring offhandedly to “systematic gearsiness” because it’s what you had just mentioned and I just meant to convey that the babble-process worked for it.
I.e., I think your list that begins "Ok, main things I want are X, Y, Z..." follows naturally from my list that ends "Once I have a gears-oriented strategy, it'll usually have a step one..."
I guess one insight from this: when engaged in gears-thinking, it feels like the associations are more a feature of the territory than of my brain. It’s not about taste, it’s about following the structure of reality.
The way I’d parse it is that I have some internalized taste that “when figuring out a novel, complex problem, it’s important to look for associations that are entangled with reality”. And then as I start exploring possible strategies to use, or facts that might be relevant, “does this taste gearsy?” and “does this taste ‘entangled with reality’?” are useful things to be able to check. (Having an aesthetic taste oriented around gearsy-entangledness lets you quickly search or rule out directions of thought at the sub-second level, which might then turn into deliberate, conscious thought.)
Alternately: I’m developing a distaste for “babbling that isn’t trying to be methodical” when working on certain types of problems, which helps remind me to move in a more methodical direction (which is often but not always gearsy).
[Edit: I think you can employ gearsy strategies without taste; I just think taste is a useful thing to acquire.]