Congratulations to Anna and the team for cohering around a vision and set of experiments. I donated to the new CFAR; I hope you continue posting about what you learn through the upcoming workshops.
One {note? suggestion? “real spirit” discussion point?} - I feel the framing of aCFAR was missing something important about the state of rationality today. Namely, from 2026 onward, becoming more rational is unlikely to be a “human techniques only” affair. It will look more like cyborgs and centaurs: humans using AI tools and agents in different configurations to make better decisions.
I won’t belabor how good the AIs have gotten, and instead will just note that they are effective aids for rationalist techniques:
- I wrote a post about backchaining where I had Claude create malleable, customizable timelines. I found this to be a really effective way to “feel” the constraints and targets at the S1 level.
- They’re very good at making Fermi estimates (a rough sketch of what I mean is below, after this list).
- There’s ongoing research and experimentation into using them as mediators and for fostering cooperation, à la double crux.
- They’re probably useful for Focusing and internal work too (I know the Jhourney team has been running experiments here, though I haven’t personally found it that effective).
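On the Fermi estimates point, here’s a minimal sketch of the kind of thing I mean, assuming the `anthropic` Python SDK; the model name and prompt are placeholders, not a specific recommendation:

```python
# Sketch: asking an AI assistant to walk through a Fermi estimate step by step.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set in the environment.
from anthropic import Anthropic

client = Anthropic()

question = "Roughly how many piano tuners work in Chicago?"

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model is current
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Make a Fermi estimate for the following question. "
            "List each assumption with a rough number and a one-line justification, "
            "then multiply them out and state the final order-of-magnitude answer.\n\n"
            + question
        ),
    }],
)

# The reply is a decomposition you can sanity-check factor by factor.
print(response.content[0].text)
```

The value isn’t the final number so much as getting an explicit decomposition you can argue with, factor by factor.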
I appreciate that it’s a Center for Applied Rationality, and maybe this particular center doesn’t need to think about the cyborg angle and can just focus on developing better models of “who-ness”. Maybe a different center should!
But it seems valuable to consider, to the extent you want to push forward the frontier of rationality. I suspect there’s some connection between the moments when AI meaningfully aids my real thinking, the moments when I’m doing slop-ful fake thinking and the AI is just feeding my delusions, and the concept you’re defining as “who-ness.” Who-ness seems adjacent to taste, which might matter a lot for steering AI fleets towards goodness and meaningful concepts. And the general rationality techniques you’re imparting to attendees and working on with them could probably be more effective with AI assistance.