One other way in which regret-minimizing is not perfectly dual to value-maximizing: this model also suggests that people, insofar as they are regret-minimizers, will artificially restrict their choice set. This explains quite a bit of self-handicapping/sabotage and anxiety about trying new things. Might this be the main difference in practice between the two mindsets you describe?
alkjash
In Defense of Psycho-Conservatism
Thanks, I’ll do that in the future.
Babble
Thank you for highlighting that dichotomy; I pay a lot of attention to optimizing aesthetic experience over insight. A quote of Tolstoy’s really stuck with me, “If I were told that I could write a novel whereby I might irrefutably establish what seemed to me the correct point of view on all social problems, I would not even devote two hours to such a novel; but if I were to be told that what I should write would be read in about twenty years’ time by those who are now children and that they would laugh and cry over it and love life, I would devote all my own life and all my energies to it.” I’m not sure I quite agree with the first half of his sentiment, but I definitely agree with the second.
Fair enough, that’s definitely an oversimplification on my part. I think the broader point that plugging into a weird filter is a general method (perhaps THE method) of stretching your Babble generator still stands.
More Babble
I remember exactly the same exercise from elementary school, because I was the last one to catch on.
Cool, it seems like we’re independently circumambulating the same set of ideas. I’m curious how much your models agree with the more fleshed out version I described in the other post.
I think I mostly agree and tried to elaborate a lot more in the followup. Could you provide more detail about your hypothetico-deductive model and in what ways it's different?
I’m trying to parse this, and I think we’re saying the same thing and you’re just using the word Babble differently. I’ve roughly defined Babble as “pseudo-randomly generated proto-thoughts,” and good Babble as “insight-rich input in which Prune can find insight.” Help?
Prune
I think I was intentionally vague about the things you are emphasizing because I don’t have a higher-resolution picture of what’s going on. I mentioned that “random” means something like “random, biased by the weak, local filter,” but your picture of pattern-matching seems like a better description of the kind of bias that’s actually going on.
Similarly, it’s probably true that there are different levels of Babble going on: at some points you are pattern-matching with literal words, at others you are using phrases or concepts or entire cached arguments. I roughly defined the Babble graph to contain all of these things.
I really like the idea that Prune gradually pushes your skills down and makes them implicit in your Babble. It feels something like: if your Prune allows stuff through, your Babble goes back and retrains on that stuff, and eventually you start just Babbling what you wanted, no filter necessary. It seems retroactively obvious that this is exactly how adversarial training works.
I also definitely see what you’re saying about Rao, my experience of reading him is roughly similar to my experience reading Moldbug in that I end up Pruning some small subset that feels extraordinarily insightful without having the energy to understand the main arc of the argument.
Regarding rationalist training, I’m referring to the category of error containing Knowing About Biases Can Hurt People and the “Rationalist Uncanny Valley,” i.e. that an incomplete random sample of the Sequences will leave the reader with mostly just a toolkit of biases and fallacies to throw at people in debate team, and, worse, at themselves. This roughly translates to building more logic Gates in your own Prune. I think a substantial majority of rationalist training is this kind of Prune exercise, although there’s definitely confirmation bias going on (see what I mean? That thought almost made me delete the last sentence). Curious to hear the examples of rationalist training encouraging Babble you have in mind.
I actually didn’t know about Hanson’s usage and my definition of Babble allows for pieces that contain entire cached arguments and that can generate deep content. I wanted it to be sufficiently general to contain most patterns of unfiltered thoughts that appear in my head.
Excellent post! I think the positive direction of boiling the crab deserves more emphasis. A lot of habits have slow, accumulating returns that are below our sensory threshold but are still worth sticking to, and just as people are unreasonably hesitant to abandon slowly deteriorating things, we are unreasonably willing to give up slowly improving things. I sense that a primary difficulty with diets is that their effects take several months to appear and the changes are too gradual for people to notice.
Syllable
I’m very much interested in these mythological structures—thank you for adding some depth to the metaphors. One of the big projects the rationalist community is already working on (it seems to me) is the rebuilding from scratch of mythology for the modern era, and hopefully these posts can be a small part of that. It seems that this kind of rebirth and refreshing is necessary as our environment shifts and our understanding grows crisper, but perhaps it would benefit from more dialogue with classical ideas.
I think the problem isn’t exactly that nerds care about reality and normals care about status games, but rather that different data structures are called for in different applications, and the reality vs. status games dichotomy is just one dimension of “different,” and a secondary one at that. See my post Data Structures.