I’m Screwtape, also known as Skyler. I’m an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I’m fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you’re ever in the Boston area, feel free to say hi.
Since early 2023, I’ve been the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
I have a lot of interest in the data collection puzzle.
Object Level Questions
My best recent writeup of the problem is in the Unofficial 2024 LessWrong Community Census, in one of the fishing expedition sections. My strategy has been to ask about things that might make people more rational (e.g. going to CFAR workshops, reading The Sequences, etc), ask questions that test people’s rationality (e.g. conjunction fallacy, units of exchange, etc), and then check whether any patterns show up.
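To make the pattern-check concrete, here’s a minimal sketch of the kind of analysis I mean, assuming a hypothetical responses.csv with one row per respondent, boolean exposure columns, and a combined score on the evaluation questions (all file and column names are made up for illustration):

```python
# Minimal sketch of the pattern check, assuming a hypothetical
# responses.csv with one row per respondent, boolean exposure columns,
# and a numeric eval_score summarizing the evaluation questions.
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")

for exposure in ["read_sequences", "attended_cfar", "attended_espr"]:
    exposed = df.loc[df[exposure], "eval_score"]
    unexposed = df.loc[~df[exposure], "eval_score"]
    # Welch's t-test: do the exposed score differently on the evaluations?
    t_stat, p_value = stats.ttest_ind(exposed, unexposed, equal_var=False)
    print(f"{exposure}: exposed mean {exposed.mean():.2f}, "
          f"unexposed mean {unexposed.mean():.2f}, p = {p_value:.3f}")

# Fishing expedition caveat: with many exposures and many tests,
# correct for multiple comparisons before believing any single result.
```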
There’s always the good ol’ self-report on comfort with techniques, but I’ve been trying to collect questions that are objective evaluations. A partial collection of my best, with a quick scoring sketch after the list:
Calibration questions (“What are your odds that the population of Japan is >100 million?”)
Conjunction fallacy questions (Ask group A “What are your odds Russia and Ukraine are still at war in 2026?” and ask group B “What are your odds Putin is dead and Russia and Ukraine are still at war in 2026?”)
Units of Exchange questions (See the “Values and Dutch Booking” section of the census for one way I test that.)
Argument by authority (“Do you agree with Scott Alexander that Ritalin carries less risk of Parkinson’s than Adderall?”, a claim he has since admitted was a mistake.)
Brainstorm count (“How many unexpected ways to use the objects in this room in a fight can you come up with?”)
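For the calibration and conjunction questions, the scoring is mechanical. A minimal sketch, with numbers made up purely for illustration:

```python
# Minimal sketch of scoring the calibration and conjunction questions.
# All numbers here are made up for illustration.
import statistics

# Calibration: pairs of (stated probability, whether the claim was true).
# Japan's population is about 124 million, so ">100 million" is true.
calibration_answers = [(0.9, True), (0.3, False), (0.6, True)]
brier = statistics.mean((p - truth) ** 2 for p, truth in calibration_answers)
print(f"Brier score: {brier:.3f}")  # lower is better; always saying 50% scores 0.25

# Conjunction fallacy: group A judged "still at war in 2026", group B
# judged "Putin dead AND still at war in 2026". A conjunction can't be
# more likely than one of its conjuncts, so group B averaging higher
# than group A means the pool is committing the conjunction fallacy.
group_a = [0.70, 0.60, 0.80]
group_b = [0.75, 0.70, 0.85]
if statistics.mean(group_b) > statistics.mean(group_a):
    print("Group-level conjunction fallacy detected.")
```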
Still, self-reports aren’t worthless.
Meta: how do we find good questions?
I’m tempted to ask people their goals, ask who’s succeeding at their goals or at common goals, and then operate as though that’s a useful proxy. There are a fair number of people who say they want a well-paying job and a happy relationship, and other people who have those things. Selection effects are sneaky though, and I don’t trust my ability to sort out the people who are doing well financially because of CFAR’s good teachings from the people who were able to attend CFAR because they were already doing well financially.
On a meta level, I feel pretty excited about different groups that are trying to increase rationality asking each other’s questions. That is, if ESPR had a question, CFAR had another question, and the Guild of the Rose had a third question, I think it’d be great if each of them asked their attendees all three questions. Even better, in my view, would be to add a few organizations that are adjacent but not really aiming at that goal: ACX Everywhere or Manifold, for instance. Those would be control groups. The different organizations are doing different things, and if ESPR starts doing better on the evaluation questions than the Guild of the Rose, then maybe the Guild starts borrowing more from ESPR’s approach. If ACX Everywhere attendees have better calibration than Metaculus, then we notice we’re confused. I’ve been doing this for the ULWC Census already, and I’d be interested in adding it to after-event surveys.
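If several organizations did pool their questions, the comparison itself would be simple. A minimal sketch, again with a hypothetical file and column names:

```python
# Minimal sketch of the cross-organization comparison, assuming a
# hypothetical pooled_surveys.csv with an "org" column and a
# per-respondent Brier score. All names are illustrative.
import pandas as pd

pooled = pd.read_csv("pooled_surveys.csv")
summary = (pooled.groupby("org")["brier_score"]
                 .agg(["mean", "count"])
                 .sort_values("mean"))
print(summary)  # lower mean Brier score = better calibrated
# A gap between, say, ESPR and the Guild of the Rose is a hint about
# whose approach to borrow from, not proof: selection effects again.
```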
Are there one or two questions CFAR wants to ask, or has historically asked, that you’d like to add to that collection? Put another way, what are the couple of evaluation questions you think CFAR alumni will do better on relative to, say, ACX Everywhere attendees?