Open problems in human rationality: guesses
A couple of months back, Raemon wrote this excellent question, in response to which Scott Alexander shared his ongoing list. I think it would be great to have people try to give their current guesses for a lot of these. My guesses are in a comment below. My intuition is that value is created from this in four ways:
1. Discovery of things you didn’t realize you believed in the process of writing the answer.
2. Generation of cruxes if people give you feedback/alternatives for answers.
3. Realization that your guess isn’t even wrong, but fundamentally wasn’t built from building blocks that can, in principle, be rearranged to form a correct answer.
4. Help in coordination as people get a sense of what others believe about navigating this space. Seeing cognitive diversity on fundamental questions has helped me in this area.
1. Which questions are important?
a. How should we practice cause prioritization in effective altruism?
Encourage people to follow different prioritization heuristics and see what bubbles up. Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more from Jessica Graham.
b. How should we think about long shots at very large effects? (Pascal’s Mugging)
Seems to vary based on risk appetite and optionality, i.e., young people can attempt moonshots and still recover in time to do other, lower-variance things.
c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
False dilemma. Inquiring into values shows that focusing on the global level contributes to my happiness. 'Lead a normal life' seems to be about priors on 'normal behaviors' leading to well-being. But this prior seems bad: average outcomes aren't very happy.
d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
More focus on critiques that induce physical discomfort and avoidance.
e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?
Per Buddhism, probably. For intellectual types, the 'how' is often somatic skill training.
2. How do we determine whether we are operating in the right paradigm?
a. What are paradigms? Are they useful to think about?
A paradigm is a collection of heuristics that play well together, i.e., they chain into each other easily by taking each other's outputs as inputs.
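As a toy illustration of that chaining claim (my own sketch, not anything from the original answer, with invented placeholder heuristics), here's what it looks like when heuristics compose because each one's output is a valid input to the next:

```python
# Toy sketch: a "paradigm" as heuristics that chain because the
# output of each is a usable input for the next. All heuristics
# here are invented placeholders for illustration only.
def notice_anomaly(observation: str) -> str:
    return f"anomaly in {observation}"

def generate_hypothesis(anomaly: str) -> str:
    return f"hypothesis explaining {anomaly}"

def design_test(hypothesis: str) -> str:
    return f"experiment testing {hypothesis}"

# The heuristics chain end to end because the types line up.
pipeline = [notice_anomaly, generate_hypothesis, design_test]
result = "a surprising measurement"
for heuristic in pipeline:
    result = heuristic(result)
print(result)
```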
b. If we were using the wrong paradigm, how would we know? How could we change it?
Observe the outcomes of people following different stances. I think Opening the Heart of Compassion is an excellent resource here, as is The Five Personality Patterns, despite the woo.
c. How do we learn new paradigms well enough to judge them at all?
When evaluating strategies, ask ourselves the question 'Do I want that person's life?'
3. How do we determine what the possible hypotheses are?
a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
b. Are there surprising techniques that can help us with this problem?
Creativity techniques that prioritize quantity over quality work reliably.
4. Which of the possible hypotheses is true?
a. How do we make accurate predictions?
By creating the conditions for the outcomes we want; there are too many degrees of freedom and hidden variables when trying to predict useful things totally outside our control. For the predictions we're forced to make about things we can't control, collect better outside-view search heuristics.
b. How do we calibrate our probabilities?
Practice feeling the somatic difference between 60% and 70% confidence, then increase granularity over time.
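For the explicit half of that practice, here's a minimal sketch of calibration tracking; the prediction log and bucket sizes are invented for illustration:

```python
# Minimal calibration tracker: log (stated probability, outcome) pairs,
# then compare stated confidence to observed frequency within each bucket.
from collections import defaultdict

# Hypothetical prediction log: (probability assigned to "yes", what happened).
predictions = [
    (0.6, True), (0.6, False), (0.7, True), (0.7, True),
    (0.6, True), (0.7, False), (0.6, False), (0.7, True),
]

buckets = defaultdict(list)
for prob, outcome in predictions:
    buckets[round(prob, 1)].append(outcome)  # group by stated confidence

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%}: observed {observed:.0%} over {len(outcomes)} predictions")
```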
5. How do we balance our explicit reasoning vs. that of other people and society?
a. Inside vs. outside view?
Always generate an inside view the best way you know how first. Then when you run an outside view and encounter differences, inquire into the generators of those differences. This is free calibration data any time you’re about to search for something.
b. How do we identify experts? How much should we trust them?
Judgmental bootstrapping (sketched below), plus checking how granular the feedback the expert received while forming their model was. Granularity of feedback should be >= granularity of the decision model.
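To make judgmental bootstrapping concrete: the idea is to fit a simple model to the expert's own past judgments, then use the model in place of the expert, since the model applies the expert's policy more consistently than the expert does. A minimal sketch, with all cue values and judgments invented:

```python
# Judgmental bootstrapping sketch: fit a linear model to an expert's own
# past judgments, then use the model in place of the expert. Cue values
# and judgments below are invented for illustration.
import numpy as np

# Rows: past cases; columns: cues the expert says they attend to.
cues = np.array([
    [3.0, 1.0], [5.0, 2.0], [2.0, 4.0], [4.0, 3.0], [1.0, 5.0],
])
expert_judgments = np.array([4.1, 6.8, 5.9, 6.2, 5.8])

# Least-squares fit of judgment = w . cues + b (the "bootstrapped" expert).
X = np.column_stack([cues, np.ones(len(cues))])
weights, *_ = np.linalg.lstsq(X, expert_judgments, rcond=None)

# Score a new case by applying the fitted policy consistently.
new_case = np.array([3.5, 2.5, 1.0])  # last entry is the intercept term
print("model's judgment for new case:", new_case @ weights)
```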
c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
On the margin, try harder to learn from tradition than you have been. Current = noisy.
d. How much should the replication crisis affect our trust in science?
It should increase our need for consilience in order to be confident about anything. If a conclusion isn't reachable from radically different methods/domains, it's fairly suspect.
e. How well does good judgment travel across domains?
Granularity of feedback loops applies again. When we see impressive transfer, I think it's from a domain with good feedback loops to one with poor feedback loops, where the impressive person cleaned up the poor feedback loops by applying methods from the high-feedback domain.
6. How do we go from accurate beliefs to accurate aliefs and effective action?
a. Akrasia and procrastination
b. Do different parts of the brain have different agendas? How can they all get on the same page?
Integrate conflicting parts via a psychotherapy modality such as Focusing, Core Transformation, or IFS.
7. How do we create an internal environment conducive to getting these questions right?
a. Do strong emotions help or hinder rationality?
Emotional 'strength' seems like the wrong frame.
b. Do meditation and related practices help or hinder rationality?
They help on cognitive reflection tests and with the general skill of noticing which cognitive heuristics are currently being run. CRT performance was among the measures less correlated with g, according to The Rationality Quotient.
c. Do psychedelic drugs help or hinder rationality?
Ultra-high openness without skepticism/disagreeableness/epistemic hygiene seems to result in loopy beliefs. They should be leveled up in tandem.
8. How do we create a community conducive to getting these questions right?
a. Is having “a rationalist community” useful?
b. How do strong communities arise and maintain themselves?
c. Should a community be organically grown or carefully structured?
d. How do we balance conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. high-standards devotion to a serious mission?
e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
f. …without also admitting every homeopath who wants to convince us that “homeopathy is rational”?
g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
h. Can these problems be solved by having many overlapping communities with slightly different standards?
I think communities are typically about avoiding responsibility for making personal progress. People who choose to take a more central role in a community typically have emotional problems they are trying to work out via the dynamics in the community. The whole is typically much less than the sum of its parts.
9. How does this community maintain its existence in the face of outside pressure?
The way in which outside pressure is experienced is worth investigating for what internal process it is resonating with.
Just wanted to note that this seems like an interesting claim, relevant to my interests, that's worth taking seriously.
Meta: I think I'd find it easier to process this if the post picked a subset of these questions rather than tackling all of them at once (and could then devote more space to arguing about individual answers or clusters of questions).
I agree but found separating everything out a lot of work.
What does “Current = noisy” mean here?
Current events are hard to extract signal from.
What does “learn more Jessica Graham” mean?
learn more from. Edited.