Computer scientist
Fairly deep experience with self-programming and modification of intuition/reflexes
Personal jargon/nomenclature was developed in isolation and seldom matches what other people use
SilverFlame
Parametrize Priority Evaluations
Programming an IFS for alternate uses
Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.
In my own self-experiments and tinkering, I have encountered Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). This meshes well with persistent Type 2 subagents that get re-used for their practicality, and which sometimes come to resemble Type 1 subagents as their decision process becomes reflexive through repetition.
Have you encountered anything similar?
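For concreteness, here is a minimal sketch of the chaining I have in mind. All of the names and the working-memory representation are my own illustration, not established IFS or dual-process machinery:

```python
from typing import Any, Callable

# A "subagent" reads shared working memory and returns one output.
Subagent = Callable[[dict[str, Any]], Any]

def type2_chain(steps: list[tuple[str, Subagent]],
                working_memory: dict[str, Any]) -> dict[str, Any]:
    """Run subagents in sequence, writing each output into working
    memory so later steps (Type 1 or Type 2) can build on them."""
    for key, subagent in steps:
        working_memory[key] = subagent(working_memory)
    return working_memory

# Type 1 subagents: fast, pattern-matching estimators (stubbed here).
estimate_hunger = lambda wm: 0.7
recall_options = lambda wm: ["ramen", "tacos", "leftovers"]

# A Type 2 process that consumes the Type 1 outputs above...
def shortlist(wm: dict[str, Any]) -> list[str]:
    return wm["options"][:2] if wm["hunger"] > 0.5 else wm["options"][:1]

# ...chained by a higher-level Type 2 process: Type 2s calling Type 2s.
result = type2_chain(
    [("hunger", estimate_hunger),
     ("options", recall_options),
     ("shortlist", shortlist)],
    working_memory={},
)
print(result["shortlist"])  # ['ramen', 'tacos']
```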
I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:
General cooperation models typically opt for vagueness over specificity to broaden the audience that can make use of them
Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more everyday problems
Complicated/technical problems also amplify the overhead costs of trying to harmonize thought and communication patterns across the team(s), due to reduced tolerance for failure
With these in mind, I would posit that a factor worth considering is that traditional models of collaboration simply don’t meet the quality and cost requirements in their unmodified form. It is quite easy to picture a rationalist concluding that forging new collaboration models isn’t worth the opportunity cost, especially if they aren’t actively on the front lines of some issue they consider Worth It.
The most notable example of a Type 2 process that chains other Type 2 processes (as well as Type 1 processes) is my “path to goal” generator, but as I sit down to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators equipped with triggers for when I leave their operating scope. What I thought began as Type 2s calling Type 2s now looks more like Type 2s that set triggers via Type 1s, causing other Type 2s to get a turn on the processor later. It’s something of an indirect system, but the intentionality is there.
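A rough sketch of that indirect trigger pattern, with every name and structure invented purely for illustration:

```python
from typing import Any, Callable

class TriggerQueue:
    """Cheap Type 1 'watchers' that, when their condition fires,
    hand a Type 2 process its turn on the processor."""
    def __init__(self) -> None:
        self._watchers: list[tuple[Callable[[Any], bool],
                                   Callable[[Any], None]]] = []

    def set_trigger(self, condition: Callable[[Any], bool],
                    type2_process: Callable[[Any], None]) -> None:
        self._watchers.append((condition, type2_process))

    def observe(self, event: Any) -> None:
        # The Type 1 check is fast and runs on every observation;
        # fired triggers are one-shot and get disarmed.
        still_armed = []
        for condition, process in self._watchers:
            if condition(event):
                process(event)  # the deferred Type 2 runs now
            else:
                still_armed.append((condition, process))
        self._watchers = still_armed

queue = TriggerQueue()
# A Type 2 planner arms a trigger instead of polling continuously:
queue.set_trigger(lambda e: e == "left estimator's operating scope",
                  lambda e: print("re-running deliberate path-to-goal step"))
queue.observe("routine input")                     # nothing fires
queue.observe("left estimator's operating scope")  # Type 2 resumes
```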
My visibility into the intricacies of my pseudo-IFS is currently low due to the energy cost of maintaining that visibility, and circumstances make regaining it infeasible for a while. As a result, I am having some difficulty identifying specific Type 2 processes that aren’t highly implementation-specific and vague on the intricacies. I apologize for not having more helpful details on that front.
I have a somewhat clearer example of behavior that started as Type 2 and transitioned to Type 1. I noticed at one point that I was calculating gradients on a timescale that seemed automatic. Later investigation suggested that I had ended up with a Type 1 estimator that could handle a number of common data forms I might want gradients of (its workings seem to resemble Riemann sums), along with a felt sense for whether the data in front of me will mesh with the estimator’s scope.
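To gesture at what such an estimator might look like (purely my guess at the mechanism; Riemann sums proper estimate integrals, so this sketch uses their discrete cousin for slopes, finite differences):

```python
def in_scope(xs: list[float], ys: list[float]) -> bool:
    """Felt-sense analog: only claim competence on dense, roughly
    evenly spaced samples where a difference quotient is trustworthy."""
    if len(xs) != len(ys) or len(xs) < 3:
        return False
    steps = [b - a for a, b in zip(xs, xs[1:])]
    return min(steps) > 0 and max(steps) / min(steps) < 1.1

def gradient_estimate(xs: list[float], ys: list[float]) -> list[float]:
    """Central differences in the interior, one-sided at the ends."""
    n = len(xs)
    grad = [(ys[1] - ys[0]) / (xs[1] - xs[0])]
    for i in range(1, n - 1):
        grad.append((ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1]))
    grad.append((ys[-1] - ys[-2]) / (xs[-1] - xs[-2]))
    return grad

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]              # y = x^2, true gradient is 2x
if in_scope(xs, ys):
    print(gradient_estimate(xs, ys))  # [0.5, 1.0, 2.0, 3.0, 3.5]
```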
[Question] Looking for a post I read if anyone recognizes it
The goal of naturalism is to reach a point where you relate to a part of the world in such a way that perpetual learning is inevitable.
I use a stance that seems very similar in spirit and in many details to what is described here, and I would like to emphasize the value of frequent, small experiments for gathering knowledge and expanding awareness of options. I have found the practice valuable in reducing the complexity and investment requirements of experimentation, and it synchronizes well with the update speed of mental models and other “deep knowledge”.
I think naturalism can be directed even at things “contaminated by human design”, if you apply the framing correctly. In a way, that’s how I started out as something of a naturalist, so it is territory I’d consider a bit familiar.
The best starting point I can offer based on Raemon’s comment is to look at changes in a field of study or technology over time, preferably one you already have some interest in (perhaps AI-related?). The naturalist perspective focuses on small observations over time, so I recommend embarking on brief “nature walks” where you find some way to expose yourself to information about some innovation in the field, be it ancient or modern. An example would be reading up on a new training algorithm you are not already familiar with (since it will be easier to use Original Seeing upon), without spending too much concentration or energy on trying to calculate major insights.
Another idea if you want to push against the mental pressure that kills good ideas, from Paul Graham’s recent essay on how to do good work: “One way to do that is to ask what would be good ideas for someone else to explore. Then your subconscious won’t shoot them down to protect you.” I don’t know of anyone using this technique, but it might work.
This angle of attack sounds worth investigating for myself, especially because it can circumvent censorship driven by other factors, such as resource availability or personal interests. I’ve had ideas before that I immediately knew I wouldn’t be interested in pursuing myself, and it would be a waste to throw them out automatically without trying to think of someone more willing to take up the torch.
Circling back a few months later, I have some observations from trying out this idea:
I found myself tossing ideas to friends and acquaintances more often, which tended to improve my relationships with them somewhat
I noticed that some of the ideas I was preparing to hand off to someone else had glimmers of concepts I could use for other things, which had obvious benefits
I didn’t notice any impact to my normal ideation/processing bandwidth as a result of the change in operating method
Sometimes ideas I handed off to someone else would circle back later and benefit one of my own projects, although I suspect the success rates for such second-order results will vary wildly
Overall, it seems to have been worth trying, and I’ll probably keep it going.
I assign weights to terminal and instrumental value differently, weighting instrumental value higher for steps that are less removed from producing terminal value and/or for steps that won’t easily backslide/revert without maintenance.
As far as uncertainty goes, my general formula is to focus on keeping plans composed of “sure bet” steps when the risk of failure is high, but to allow less surefire steps when there is more wiggle room in play. This sometimes results in plans that are overly circuitous but resistant to common points of failure. The success rate of a step is estimated from my relevant experience and practice levels, as well as awareness of any relevant environmental factors. The actual weights were developed through iteration and are likely specific to my framework.
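As a toy rendering of that formula (the decay constant, durability bonus, and risk thresholds below are invented for illustration, not my actual calibration):

```python
from dataclasses import dataclass

@dataclass
class Step:
    success_rate: float         # from experience, practice, environment
    distance_to_terminal: int   # 0 = directly produces terminal value
    durable: bool               # won't backslide without maintenance

def step_weight(step: Step) -> float:
    value = 0.8 ** step.distance_to_terminal  # nearer steps weigh more
    if step.durable:
        value *= 1.25                         # no-backslide bonus
    return value

def acceptable(plan: list[Step], high_risk: bool) -> bool:
    """Under high risk, keep only 'sure bet' steps; with wiggle
    room, allow less surefire steps into the plan."""
    floor = 0.95 if high_risk else 0.7
    return all(s.success_rate >= floor for s in plan)

plan = [Step(0.98, 2, False), Step(0.97, 1, True), Step(0.99, 0, True)]
print(acceptable(plan, high_risk=True))            # True: all sure bets
print(sorted(step_weight(s) for s in plan))        # [0.64, 1.0, 1.25]
```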
Here’s a real example of a decision calculation, as requested:
Scenario: I’m driving home from work, and need to pick which restaurant to get dinner from.
Value Categories (a sampling):
Existing Desires: Is there anything I’m already in the mood for, or conversely something I’m not in the mood for?
Diminishing Returns: Have I chosen one or more of the options too recently, or has it been a while since I chose one of the options?
Travel Distance: Is it a short or long diversion from my route home to reach the restaurant(s)?
Price Tag: How pricey or cheap are the food options?
I don’t enjoy driving much, so Travel Distance is usually the highest-ranked Value Category, thoroughly eliminating food options that deviate too far from my route. Next come Existing Desires and then Diminishing Returns, which together let me pursue my desires while avoiding overexposure to any one option. My finances are generally in a state where Price Tag doesn’t affect location selection much, though it plays a more noticeable role when it comes time to figure out my order.
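The same calculation as a toy script (the restaurants, scores, and constants are all made up for illustration; my real weights were iterated, not hand-picked like these):

```python
restaurants = {
    "ramen":  {"detour_min": 5,  "desire": 0.8, "recency": 0.2, "price": 12},
    "tacos":  {"detour_min": 20, "desire": 0.9, "recency": 0.1, "price": 9},
    "burger": {"detour_min": 4,  "desire": 0.4, "recency": 0.7, "price": 10},
}

# Travel Distance acts as a hard filter rather than a soft weight.
candidates = {name: r for name, r in restaurants.items()
              if r["detour_min"] <= 10}

def score(r: dict) -> float:
    # Existing Desires dominates; Diminishing Returns penalizes recent
    # picks. Price Tag is omitted since it rarely moves location choice.
    return 2.0 * r["desire"] - 1.0 * r["recency"]

print(max(candidates, key=lambda n: score(candidates[n])))  # 'ramen'
```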