This is really cool, thanks for the link!
It’s not just money, but short-term profits. A/B testing is an exceptionally good tool for measuring short-term profits, but a much weaker one for measuring the long-term changes in behavior that result from “optimized” design.
It might make sense to create this as a sequence using the LW sequence feature.
I don’t think I’ve seen this in Boston or SF, but I have in Portland and Berkeley. It appears to me that the liberal tech cities I’ve lived in have strong cultural differences around skills and competence.
SF? Is there a reason you’re being obtuse here?
It’s very hard to figure out if I agree with your premise if I can’t compare to my own experience.
In what part of the US do you live?
At last year’s CFAR reunion, for instance, there was a talk uncritically presenting chakras as a real thing, and when someone in the audience proposed doing an experiment to verify if they are real or it’s a placebo effect, the presenter said (paraphrasing) “Hmm, no, let’s not do that. It makes me uncomfortable. I can’t tell why, but I don’t want to do it, so let’s not” and then they didn’t.
I attended that talk and have a slightly different memory.
To my memory, the claim was “I tried this exercise related to my body, and it had a strong internal effect. Then I started playing around with other areas related to chakras, and they had really strong effects too. Try playing around with this exercise on different parts of your body, and see if there’s a strong effect on you.”
The second part matches my memory, and I was a bit disappointed we didn’t get to do more of an experiment, but in no way were chakras “uncritically presented as a real thing.”
I think this has developed gradually. The idea of “behavior is based on unconscious desires” goes back as far as at least Freud, probably earlier.
Note that this is exacerbated by the fact that the original questionnaire Jacob used to gather this data further implied the adversarial relationship between cognition and intuition.
This seems fairly easy to handle by randomizing both the types of arguments and the positions they argue for, no?
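A minimal sketch of what that randomization might look like, assuming a simple two-factor design (argument type crossed with position); the factor names and participant labels here are illustrative, not from the comment:

```python
import random

# Hypothetical factors: cross argument *type* with *position* so that
# neither factor is confounded with the other.
argument_types = ["statistical", "anecdotal"]
positions = ["pro", "con"]
conditions = [(t, p) for t in argument_types for p in positions]

def assign(participants, seed=0):
    """Randomly assign each participant to one (type, position) cell."""
    rng = random.Random(seed)
    return {person: rng.choice(conditions) for person in participants}
```

A fuller design would balance cell sizes (e.g. by shuffling a repeated list of conditions), but even plain random assignment breaks the correlation between argument type and position.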
The field is actually called heuristics and biases. Intuitions represent both. Trying to overcome them rather than understand and use them is a naive and counterproductive view of rationality.
Intuitions are not something to be overcome.
Sorry for all the hashtags, this was originally written in Roam.
It also seems to involve exaggerating and/or downplaying one’s own preferences.
There’s a large body of work in auction theory/mechanism design aimed specifically at this problem; “you cut the cake, I choose the pieces” is a simple example. I’ve tried to implement some of these kinds of solutions in previous group houses and organizations. There’s often a large initial hurdle to overcome, and some attempts just outright failed.
However, enough of them have succeeded that I think it’s worth trying to work game-theoretically sound decision procedures more explicitly into communities and organizations, and worth familiarizing yourself with the existing tools out there for this sort of thing.
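The “you cut, I choose” mechanism mentioned above can be sketched in a few lines. This is a toy model, assuming a cake made of discrete slices and per-slice valuations; the function names and numbers are illustrative. The divider cuts so the two pieces look equal *to them*, which means whichever piece the chooser takes, both end up with at least half the cake by their own valuation:

```python
def cut(divider_values):
    """Divider picks the split index that makes the two pieces
    as close to equal as possible by the divider's own valuation."""
    total = sum(divider_values)
    best_i, best_gap = 0, float("inf")
    running = 0
    for i in range(len(divider_values) + 1):
        gap = abs(2 * running - total)  # |left - right| in the divider's eyes
        if gap < best_gap:
            best_i, best_gap = i, gap
        if i < len(divider_values):
            running += divider_values[i]
    return best_i

def choose(chooser_values, split):
    """Chooser simply takes whichever piece they value more."""
    left = sum(chooser_values[:split])
    right = sum(chooser_values[split:])
    return ("left", left) if left >= right else ("right", right)

divider = [1, 3, 2, 2]   # how much the divider values each slice
chooser = [4, 1, 1, 2]   # how much the chooser values each slice
split = cut(divider)              # divider equalizes: split after slice 2
side, value = choose(chooser, split)
```

The appeal of mechanisms like this is exactly the point of the comment: neither party gains by exaggerating or hiding their preferences, since misreporting only hurts the piece they end up with.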
Today I had a great chat with a friend on the difference between #Fluidity and #Congruency
For the past decade+ my goal has been #Congruency (also often called #Alignment), the idea that there should be no difference between who I am internally, what I do externally, and how I represent myself to others
This worked well for quite a long time, and led me great places, but the problems with #Congruency started to show more obviously recently.
Firstly, my internal sense of “rightness” isn’t easily encapsulated in a single set of consistent principles; it’s very fuzzy and context-specific. Furthermore, what I can even define as “right” shifts as my #Ontology shifts.
Secondly, and in parallel, as the idea of #Self starts to appear less and less coherent to me, the whole base that the house is built on starts to collapse.
This has led me to begin a shift from #Congruency to #Fluidity. #Fluidity is NOT about behaving by an internally and externally consistent set of principles; rather, it’s about being able to find that sense of “Rightness”—the right way forward—in increasingly complex and nuanced situations.
This “rightness” in any given situation is influenced by the #Ontologies I’m operating under at any given time, and the #Ontologies are influenced by the sense of “rightness”.
But as I hone my ability to fluidly shift ontologies, and to stay aware enough to be in touch with that sense of rightness, it becomes easier to find that sense of rightness/wrongness in a given situation. This is as close as I can come to describing what is sometimes called #SenseMaking.