Yay model building and experimenting!
I like, and would love to see more of, people building simple models to experiment with the plausibility of ideas and build intuition. You also seem to have approached this with a good epistemic attitude; yes, this does not constitute strong evidence that humans are implemented with sub agents, but does demonstrate that familiar humany behaviours can arise from some form of sub agents.
“It feels good and right for me to have a life where I’m producing more than I’m consuming. Wait, if it were actually a good thing to produce more than I consume, wouldn’t that mean we should have a society where everyone is pumping out production that never gets used by anyone?”
The above is not something I’m very concerned with, but it did feel easy to jump to “this is now a question of the effects of this policy instantiated across all humans.”
I was going to just write a comment, but it turned into a post. Here, I outlined the models I was using to think about this, and what that said about my reaction to ignoring “under the hood” stuff.
A forming thought on post-rationality. I’ve been reading more samzdat lately and thinking about legibility and illegibility. Me paraphrasing one point from this post:
State driven rational planning (episteme) destroys local knowledge (metis), often resulting in metrics getting better, yet life getting worse, and it’s impossible to complain about this in a language the state understands.
The quip that most readily comes to mind is “well if rationality is about winning, it sounds like the state isn’t being very rational, and this isn’t a fair attack on rationality itself” (this comment quotes a similar argument).
Similarly, I was having a conversation with two friends once. Person A expressed that they were worried that if they started hanging around more EAs and rationalists, they might end up having a super boring optimized life and never do fun things like cook meals with friends (because Soylent) or go dancing. Friend B responded, “I dunno, that sounds pretty optimal to me.”
I don’t think friend A was legitimately worried about the general concept of optimization. I do think they were worried about what they expected their implementation (or their friends’ implementation) of “optimality” to look like in their own lives.
Current most charitable claim I have of the post-rationalist mindset: the best and most technical specifications that we have for what things like optimal/truth/rational might look like contain very little information about what to actually do. In your pursuit of “truth”/”rationality”/”the optimal” as it pertains to your life, you will be making up most of your art along the way, not deriving it from first principles. Furthermore, thinking in terms of the truth/rationality/optimality will [somehow] lead you to make important errors you wouldn’t have made otherwise.
A more blasé version of what I think the post-rationalist mindset is: you can’t handle the (concept of the) truth.
Short framing on one reason it’s often hard to resolve disagreements:
[With some frequency] disagreements don’t come from the same place where they are found. Your brain is always running inference on “what other people think”. From a statement like, “I really don’t think it’s a good idea to homeschool”, your mind might already be guessing at a disagreement you have 3 concepts away, yet only ping you with a “disagreement” alarm.
Combine that with a decent ability to confabulate. You ask yourself “Why do I disagree about homeschooling?” and you are given a plethora of possible reasons to disagree and start talking about those.
Highlighting the parts that felt important:
I think the frame in which it’s important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don’t know what’s good locally.
Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god.
“I don’t know what a single just soul looks like, so let’s figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that.”
I can see ways in which my own thinking has fallen into the frame you mention in the first quote. It’s an interesting and subtle transition, going from asking, “What is it best for me to do?” to “What is it best for a human to do?”/”What would it be best for everyone to be doing?”. I notice that I feel very compelled to make this transition when thinking.
Users don’t need to know what’s going on under the hood; the algorithms and proofs generally “just work” without the user needing to worry about the details. The user’s job is to understand the language of the framework, the interface, and translate their own problems into that language.
Interesting, my gut reaction to this approach as applied to math was “ugh, that sounds horrible, I don’t want to ignore the under-the-hood details, the whole point of math is understanding what’s going on”.
Yet when I consider the same approach to programming and computer sciency stuff my reaction is “well duh, of course we’re trying to find good abstractions to package away as much of the nitty gritty details as possible, otherwise you can’t make/build really big interesting stuff.”
I’ll think more about why these feel different.
I really appreciate “Here’s a collection of a lot of the work that has been done on this over the years, and important summaries” type posts. Thanks for writing this!
I appreciate that you outlined what predictions are made from the Solow model applied to software. Do you know of any other models that might be applied?
Yay intents for quiet! Hope you get something out of it.
Reasons why I currently track or have tracked various metrics in my life:
1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.
2. Have data to test a hypothesis about how some intervention would affect my life. (e.g. Did waking up earlier give me less energy during the day?)
3. Have data that enables me to make better predictions about the future (mostly related to time tracking, “how long does X amount of work take?”)
4. Understanding how [THE PAST] was different from [THE PRESENT] to help defeat the Deadly Demons of Doubt and Shitty Serpents of Should (à la Deliberate Once).
I have not always had these in mind when deciding to track a metric. Often I tracked because “that’s wut productive people do right?”. When I keep these in mind, tracking gets more useful.
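Reason 2 above can be made concrete with a simple permutation test: shuffle the before/after labels many times and see how often chance alone produces a mean difference as large as the one observed. A minimal sketch in Python (the energy scores and variable names here are made up for illustration, not real tracked data):

```python
import random

def permutation_test(before, after, trials=10_000, seed=0):
    """Estimate how often randomly relabeling the pooled data produces a
    mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(after) / len(after) - sum(before) / len(before))
    pooled = list(before) + list(after)
    count = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # each shuffle is one random relabeling
        a, b = pooled[:len(before)], pooled[len(before):]
        diff = abs(sum(b) / len(b) - sum(a) / len(a))
        if diff >= observed:
            count += 1
    return count / trials  # approximate two-sided p-value

# Hypothetical self-reported energy scores (1-10), one per day
energy_7am_alarm = [6, 5, 7, 6, 5, 6, 7]
energy_sleep_in = [7, 8, 7, 8, 6, 8, 7]

p = permutation_test(energy_7am_alarm, energy_sleep_in)
print(f"p \u2248 {p:.3f}")  # a small p suggests the difference is unlikely to be noise
```

With only a week of data per condition this is just a sanity check, but it beats eyeballing the numbers.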
The idea was less “individual humans are ontologically basic” and more: I’ve noticed that talking about broad groups of people has often been less useful than dropping down to talk about interactions I’ve had with individual people.
In writing the comment I was focusing more on the action I wanted to take (think about specific encounters with people when evaluating my impressions) and less on my ontological claims about what exists. I see how my lax opening sentence doesn’t make that clear :)
Me circa March 2018
“Should”s only make sense in a realm where you are divorced from yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.
Update: This past week I’ve had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I’ve also been meeting these thoughts with an “Oh interesting, I wonder why this made me feel a should?” as opposed to a standard “endorse or disavow” response.
Meta Thoughts: What do I know about “should”s that I didn’t know in March 2018?
I’m more aware of how incredibly pervasive “should”s are in my thinking. Last Saturday alone I counted over 30 moments of feeling the negative tug of some “should”.
I now see that even for things I consider cool, dope, and virtuous, I’ve been using “you should do this or else” to get myself to do them.
Since CFAR last fall I’ve gained a lot of metis on aligning myself, a task that I’ve previously trivialized or brought in “willpower” to conquer. Last year I was more inclined to go, “Well okay fine, I’m still saying I should do XYZ, but the part of me that is resisting that is actually just stupid and deserves to be coerced.”
I’ve missed seven days of journaling in the last month plus (non-consecutive, though).
Thoughts: I’ve gotten some good insight from this time. Towards the end, it became more, “What are the important things that happened recently?” journaling.
I’ve put much less ritual-intent into this habit than with meditation. In the past week I changed my sleep schedule (I now sleep in till whenever instead of getting up with an alarm at 7am), which makes it slightly harder to ensure the sanctity of morning journaling, but I’m currently okay with that because sleeping more and getting up at my own pace has had a wonderfully positive effect this past week (I’m keeping a keen eye on whether that trend continues).
It feels vaguely important not to go into this journaling with an agenda. Journaling is more rewarding when I wait until whatever catches my interest most works its way to the top of my mind.
The general does not exist, there are only specifics.
If I have a thought in my head, “Texans like their guns”, that thought got there from a finite number of specific interactions. Maybe I heard a joke about Texans. Maybe my family is from Texas. Maybe I hear a lot about it on the news.
“People don’t like it when you cut them off mid-sentence”. Which people?
At a local meetup we do a thing called encounter groups, and one rule of encounter groups is “there is no ‘the group’, just individual people”. Having conversations in that mode has been incredibly helpful to realize that, in fact, there is no “the group”.
(Less a reply and more just related)
I often think a sentence like, “I want to have a really big brain!”. What would that actually look like?
Not experiencing fear or worry when encountering new math.
Really quick to determine what I’m most curious about.
Not having my head hurt when I’m thinking hard, and generally not feeling much “cognitive strain”.
Be able to fill in the vague and general impressions with the concrete examples that originally created them.
Doing a hammers and nails scan when I encounter new ideas.
Having a clear, quickly accessible understanding of the “proof chains” of ideas, as well as the “motivation chains”.
I don’t need to know all the proofs or motivations, but I do have a clear sense of what I understand myself, and what I’ve outsourced.
Instead of feeling “generally confused” by things or just “not getting them”, I always have a concrete “This doesn’t make sense because BLANK” expression that allows me to move forward.
What are the barriers to having really high “knowledge work output”?
I’m not capable of “being productive on arbitrary tasks”. One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn’t able to be productive on a task that involved making up bullshit opinions about topics I didn’t care about.
Conviction is important. From experiments with TAPs and a recent bout of meditation, it seems like when I bail on an intention, on some level I am no longer convinced the intention is a good idea/what I actually want to do. Strong conviction feels like confidence all the way up in the fact that this task/project is the right thing to spend your time on.
There’s probably a lot in the vein of having good chemistry: sleep well, eat well, get exercise.
One of the more mysterious quantities seems to be “cognitive effort”. Sometimes thinking hard feels like it hurts my brain. This post has a lot of advice in that regard.
I’ve previously hypothesized that a huge chunk of painful brain fog is the experience of thinking at a problem, but not actually engaging with it (similar to how Mark Forster has posited that the resistance one feels toward a given task is proportional to how many times it has been rejected).
Having the rest of your life together and time-boxing your work is insanely important for reducing the frequency with which your brain promotes “unrelated” thoughts to your consciousness (if there’s important stuff that isn’t getting done, and you haven’t convinced yourself that it will be handled adequately, your mind’s tendency is to keep it in a loop).
I’ve got a feeling that there’s a large amount of gains in the 5-second level. I would be super interested in seeing anyone’s thoughts or writings on the 5-second level of doing better work and avoiding cognitive fatigue.
Yay self-study! Positive reinforcement!
I was just thinking about this earlier today while re-reading a similar point by Stuart Armstrong.