Oh I see the directions now. Yes, it would help if you included all of this into a detailed blogpost and explained what other meals you consume (and how often) to get a complete picture of how to adopt the diet oneself. I would like to experiment.
Unreal
I have encountered a person who could not do TAPs before. I may be misremembering exactly. But my impression was they had to consciously do everything. They’d consciously, deliberately move their hand in order to open a door or move their head to look at something. I think they basically couldn’t do things that required muscle memory? Like sports, dancing, juggling, etc.
Is it like this for you?
[Previous MTG blogger here; so I’ve had thoughts about this before]
I think it’s important to note that systems are almost always white.
As in, the individuals who partake in polyamory and BDSM are more likely to be B or R. But polyamory and the BDSM community itself are about using white frameworks in order to foster B/R needs in a safe, predictable way.
White makes things more scalable. So if you have any system at all that includes lots of agents in it, there’s probably white involved. (It’s easier to make copies of something rigid than something moving/chaotic.)
So the institution of marriage is white, but so is polyamory, and both have lots of norms and rules.
I’d be interested in hearing examples of non-white systems.
I feel confused by this comment.
The value of the framework has been demonstrated (at least to the author) by the use cases given in the post. The fact that it might be ‘humans imposing order on chaos’ does not affect the question of its usefulness. Usefulness is a separate question from whether someone ‘just made it up’. They’re orthogonal. Which I feel was one of the points of Val’s post on Fake Frameworks.
I can’t shake the feeling that you’re making the type error as described here:
I suspect it’s a type error to think of an ontology as correct or wrong. Ontologies are toolkits for building maps. It makes sense to ask whether it carves reality at its joints, but that’s different. That’s looking at fit. Something weird happens to your epistemology when you start asking whether quarks are real independent of ontology.
In particular, I’m confused by the phrase ‘start believing in horoscopes’. Which implies that I’d take horoscopes as some kind of truth about the universe. Which is not the stance of someone using a fake framework the way Val describes. I want to avoid ‘believing’ in a fake framework—I want to hold it very lightly. This is how one gets the benefits of ‘ki’ while being able to understand it as conflicting with physics. So I’ll use physics to predict most observed physical phenomena, while I’ll use ki to learn martial arts, and the two don’t have to sit badly with each other.
I hear your worry about confirmation bias. It’s good to watch out for it. But my preference is not to avoid all systems that might lead to confirmation bias; rather, it’s to use the system and ‘hold it lightly’ so that evidence of its ‘bad fit’ can be raised to my attention as it comes up. I do not want to live in fear of ‘confirmation bias’.
In general, my stance is to try to wear any sufficiently interesting fake frameworks as they come into my view. And then see what happens. If they seem good/useful/insightful, I keep using them. If not, I discard them, probably in favor of other better frameworks. I could see myself starting off using horoscopes, but I imagine I’d quickly find better things.
I am pretty curious about why you have some Trigger Action Patterns and not others.
I would classify, “Notice stomach hurts from hunger” → “Think about eating” to be a TAP
As well as “Think about eating” → “Get up to get a yogurt”
Including all the steps involved between this and actually getting and eating a yogurt
Maybe you’re “holding the intention” to go get something to eat basically the entire time, as you walk toward the yogurt. If this were happening, I’d expect that (pretty frequently) you’d get up to grab a yogurt and then forget what you were doing and end up doing something else. Does this happen to you?
This seems like a VERY important point to Double Crux on! I’m excited to see it come up.
Hmm… if you genuinely meant to say “Have you stopped to consider to what extent my opinion counts as evidence or not, including possibly deciding that it’s neutral or anti-evidence?” then I just want to say “No,” and I claim this is the correct thing to do. I genuinely think that social bayes/aumanning is a bad idea. To capture what I expect is a 4,000-word post in a catchy sentence: if I don’t understand something, the mere fact that Conor believes it’s true doesn’t cause me to understand it any better.
Would love to read about a Double Crux on this point. (Perhaps you two could email back and forth and then compile the resulting text, with some minor edits, and then publish on LW2?)
Personally, I agree with Ben Pace, and the fact that it ‘might be able to be done right’ is not a crux. But I could see changing my mind.
I’m still into the idea of reading a transcript after-the-fact. Or at least a summary.
Do you believe the situation above RE: the MTG Color Wheel is an example of a time “when you have to take action and can’t figure it out yourself”?
Typical Minding Guilt/Shame
Being Correct as Attire
I think this gets confusing if you don’t clarify what level of abstraction you’re on.
The concept of Slack can be classified from a number of angles:
a) The type of person who most likely values Slack
b) Institutions that have come to value Slack and then institutionally protect it (which are likely white orgs)
c) Slack as an abstract entity—if it had some kind of agency, like Moloch, what color would it be
d) Which color’s list of properties most closely seems to match up with Slack’s properties
e) Which environment does Slack most thrive in, one where Green is cultivated, one where White is cultivated, etc. ?
f) Which color thinks Slack is winning
These have conflicting answers.
For instance, polyamory as an institution / practice is white—because almost all institutions / norms / communities are at least somewhat white. But polyamory as an abstract entity / which color thinks polyamory is winning—that is red. The type of person who values poly is probably red.
The abstract concept of a ‘system’ is white. The abstract concept of ‘culture’ is green. But specific systems or cultures do not have to be white or green. So this gets confusing to talk about without first defining your layers.
Slack for your belief system
Attempt at definition.
If I have less slack in my belief system, that means I have more constraints on what counts as ‘evidence’ for a given statement, or more preconceptions about what can count as ‘true’ or ‘real’.
Either, I can be looking for specific signs/evidence/proofs/data (“I will only consider X if you can prove Y.” “I will only consider X if you show me a person who flosses with their shoelace.”).
Or, I can be looking for certain categories or classes of evidence (“I will only consider X if there are studies showing X.” “I will only consider X if your argument takes a certain form.” “I will only consider X if 5 experts agree.” Etc.)
Sometimes, it’s better to have less slack. It makes sense for certain fields of mathematics to have very little slack.
Other times, it hinders progress.
Someone mentioned Paul Feyerabend in response to this post. He was in favor of having slack in science, and I resonate strongly with some of these descriptions:
Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts. He thought that previous theory might influence natural interpretations of observed phenomena. Scientists necessarily make implicit assumptions when comparing scientific theories to facts that they observe. Such assumptions need to be changed in order to make the new theory compatible with observations.

The main example of the influence of natural interpretations that Feyerabend provided was the tower argument. The tower argument was one of the main objections against the theory of a moving earth. Aristotelians assumed that the fact that a stone which is dropped from a tower lands directly beneath it shows that the earth is stationary. They thought that, if the earth moved while the stone was falling, the stone would have been “left behind”. Objects would fall diagonally instead of vertically. Since this does not happen, Aristotelians thought that it was evident that the earth did not move.

If one uses ancient theories of impulse and relative motion, the Copernican theory indeed appears to be falsified by the fact that objects fall vertically on earth. This observation required a new interpretation to make it compatible with Copernican theory. Galileo was able to make such a change about the nature of impulse and relative motion. Before such theories were articulated, Galileo had to make use of ad hoc methods and proceed counterinductively. So, “ad hoc” hypotheses actually have a positive function: they temporarily make a new theory compatible with facts until the theory to be defended can be supported by other theories.
Feyerabend commented on the Galileo affair as follows:
The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo’s doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism.
The following is also a nice thing to keep in mind. Although less about slack and more about the natural pull to use tools like science to further political/moral aims.
According to Feyerabend, new theories came to be accepted not because of their accord with scientific method, but because their supporters made use of any trick – rational, rhetorical or ribald – in order to advance their cause. Without a fixed ideology, or the introduction of religious tendencies, the only approach which does not inhibit progress (using whichever definition one sees fit) is “anything goes”: “‘anything goes’ is not a ‘principle’ I hold… but the terrified exclamation of a rationalist who takes a closer look at history.” (Feyerabend, 1975).
The following is more controversial, and I don’t fully agree with it. But it contains some interesting thought nuggets.
Feyerabend described science as being essentially anarchistic, obsessed with its own mythology, and as making claims to truth well beyond its actual capacity. He was especially indignant about the condescending attitudes of many scientists towards alternative traditions. For example, he thought that negative opinions about astrology and the effectivity of rain dances were not justified by scientific research, and dismissed the predominantly negative attitudes of scientists towards such phenomena as elitist or racist. In his opinion, science has become a repressing ideology, even though it arguably started as a liberating movement. Feyerabend thought that a pluralistic society should be protected from being influenced too much by science, just as it is protected from other ideologies.
Starting from the argument that a historical universal scientific method does not exist, Feyerabend argues that science does not deserve its privileged status in western society. Since scientific points of view do not arise from using a universal method which guarantees high quality conclusions, he thought that there is no justification for valuing scientific claims over claims by other ideologies like religions. Feyerabend also argued that scientific accomplishments such as the moon landings are no compelling reason to give science a special status. In his opinion, it is not fair to use scientific assumptions about which problems are worth solving in order to judge the merit of other ideologies. Additionally, success by scientists has traditionally involved non-scientific elements, such as inspiration from mythical or religious sources.
My more charitable interpretation is that Science is a nicely rigorous method for truth-seeking, but because of its standards for rigor, it ends up missing things (like the ‘ki’ example from In Praise of Fake Frameworks).
Also, I sense elitist attitudes from science / rationality / EA as not entirely justified. (Possibly this elitism is even counter to the stated goals of each.) I feel like I often witness ‘science’ or ‘rationality’ getting hijacked for goals unrelated to truth-seeking. And I’m currently a tiny bit skeptical of the confidence of EA’s moral authority.
Circling
They talk about this kind of thing a lot in Integral Theory (Ken Wilber). I don’t understand it well enough to say anything substantial, but they describe the stage after postmodernism / green similarly to what you’re saying. Post-postmodernism / teal rediscovers the value and truth of hierarchy and that not all perspectives are equal after all. And from there, you can try doing science again but with the added awareness that your background context affects your science.
Anyway, there’s more stuff there if you want to look. I will probably try to read one of Wilber’s books at some point.
Appreciate you bringing this perspective. I think it’s true for the majority of Circling I’ve seen, in most places. But there are some brands(?) of Circling that seem more directly rationality-aligned and ultimately focused on truth-seeking. That said, I think you’re still mostly right.
So, I’m going to say this because it might be counterintuitive: I don’t see a contradiction between my article and these comments here.
All the pitfalls of humanity (Goodharting, cognitive blindspots, status games, ulterior motives, etc.) can come alive in Circling. They are present because the ingredients you start with in a circle are humans. So all the human errors totally play out. They’re baked into the final pie.
If you prefer to only put in totally trusted ingredients, that makes sense to me. If you prefer not to put things at risk you don’t want to risk, that makes sense to me, and I endorse that behavior.
Circling isn’t “separate” from the real world. It tries to be a microcosm of the real world, with a few notable tweaks, such as: You are encouraged to be more mindful of the present moment. There is also a trend towards making things “object” that were “subject” (i.e., revealing the water that you’ve been swimming in, unawares).
But, humans being humans, we do not always notice. We do not always see the patterns we are stuck in / re-enacting. And most of us are not trustworthy. Thus there is always risk.
Like in real life, it is up to you which risks you want to take on.
I will try to be as upfront as possible about the risks as I see them. And yeah, I agree all the risks you named in the comment above (starting with “losing self-image and identity”) are included.
I’m engaging in the risks personally for a number of reasons. One of them is that these risks all exist in the real world, and I’d like to learn to navigate them in real life. Another is that I have reason to believe I have an appropriate skill set that helps.
I am fairly confident in this: Circling does not involve hypnosis and does not borrow from hypnosis.
I have had lots of exposure to Circling, its leadership, and have passed a training course in it. If Circling had anything to do with hypnosis, I would expect at least some of its curricula to recommend hypnosis-centered books or mention hypnosis techniques in their teachings. Or I would expect their trainings to include lessons on how to cause people to go into trance states or anything resembling this. Or I would have found some instances of people trying to use rhythm or patterned behaviors or “giving directions” or something like you’re describing.
I have been exposed to all the main schools of Circling, and I haven’t found anything remotely like this.
I want to be clear on this point, because I do think there are real risks and pitfalls of Circling, and conflating Circling with hypnosis is likely to muddy the waters, rather than bringing clarity.
I’m curious about how you make your Soylent. Do you just take all those ingredients and mix them in a blender? Do you have another page with more information?