No standard metric for CFAR workshops?

Update: CFAR used standard metrics in its 2015 study, which I didn’t know about when drafting this post. It doesn’t appear that they tracked these metrics in their most recent impact report.

My outstanding questions are in this comment.

Update #2: CFAR replies to outstanding questions here.


It seems strange that CFAR doesn’t use a standardized metric to track the impact of its workshops over time.

From CFAR’s mission statement:

CFAR exists to try to make headway in this domain – the domain of understanding how human cognition already works, in practice, such that we can then start the process of making useful changes, such that we will be better positioned to solve the problems that really matter.

A couple of frameworks from psychology could serve as useful metrics for assessing progress towards this mission: the Big Five personality traits and Raven’s Progressive Matrices.

As far as I can tell, CFAR’s thesis is that cognitive changes will drive changes in behavior (and correspondingly, impact on the world).

I’d expect big cognitive changes to result in changes in Big Five personality traits. Specifically, I’d expect improved cognition to result in decreased neuroticism & increased conscientiousness.

I’d also expect big cognitive changes to result in improved performance on the Raven’s Matrices.

In other words, if CFAR workshops drive big changes in cognition, I’d expect those changes to show up in well-validated psychological measures. If there’s no before-workshop/after-workshop change on these measures, that would be evidence that CFAR workshops are not causing big cognitive changes in workshop participants.
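
To make this concrete, here’s a minimal sketch (with entirely made-up scores) of what a before/after analysis on one such measure might look like – a paired t-test on hypothetical Big Five neuroticism scores:

```python
# Minimal sketch of a pre/post workshop comparison on a standardized measure.
# All scores below are hypothetical, purely for illustration.
from scipy import stats

# Hypothetical Big Five neuroticism scores (0-100) for the same eight
# participants, measured before and after a workshop.
pre = [62, 55, 71, 48, 66, 59, 73, 51]
post = [58, 54, 65, 47, 60, 57, 70, 50]

# Paired t-test: did scores change within participants?
result = stats.ttest_rel(pre, post)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

(A real study would also want a control group, since retest effects and regression to the mean can produce before/after changes on their own.)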

As far as I know, workshop participants aren’t being assessed on measures like this, so it’s hard to know what impact the workshops are actually having.


CFAR’s current metric is “increase in expected impact,” or IEI:

In May 2016, we set out to count the number of alumni who have had an increase in expected impact due to their involvement with CFAR by sending out a survey to our alumni...
For each person’s responses, we manually coded whether it seemed like 1) their current path was high-impact, 2) their current path was substantially better than their old path, and 3) CFAR played a significant role in this change. We counted someone as having an “increase in expected impact” (IEI) if they met all three criteria.

18% of workshop participants surveyed had an IEI.
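
As described, the IEI tally reduces to counting respondents who meet all three hand-coded criteria. A minimal sketch (the codings below are hypothetical, and the field names are mine, not CFAR’s):

```python
# Minimal sketch of the IEI tally as described above. The codings are
# hypothetical, and the field names are my own, not CFAR's.
responses = [
    {"high_impact_path": True,  "substantially_better_path": True,  "cfar_significant_role": True},
    {"high_impact_path": True,  "substantially_better_path": False, "cfar_significant_role": True},
    {"high_impact_path": False, "substantially_better_path": False, "cfar_significant_role": False},
]

# A respondent counts toward IEI only if all three criteria are met.
iei_count = sum(all(r.values()) for r in responses)
print(f"IEI rate: {iei_count / len(responses):.0%}")
```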

A metric like IEI is better than no metric at all, but it suffers from limitations:

  • IEI relies on retrospective self-report (i.e. at the time of survey, the respondent thinks back to what they were doing before their workshop, what the workshop was like, and what they did afterwards, then synthesizes all of this into a story about what effect the workshop had).

    • In contrast, a Big Five trait survey relies on immediate self-report (i.e. at the time of survey, the respondent reports how things are for them right then).

      • This strikes me as more reliable than retrospective self-report.

    • A Raven’s Matrices test measures actual cognitive performance at the time of survey, which seems even higher-signal than a Big Five survey.

  • IEI was developed in-house, so it is hard to compare the CFAR workshop to other interventions on the basis of IEI.

    • In contrast, many interventions measure change in Big Five traits & Raven’s Matrices performance.


I’m bringing this up because, once I thought about it, I found the lack of a standardized, well-validated metric surprising.

It seems plausible that CFAR is already tracking metrics like this privately. If that’s the case, I’m curious why the results are kept private.

It could also be that CFAR isn’t tracking an outcome metric like Big Five trait change or Raven’s Matrices performance. If that’s the case, I’m curious why not – the surveys are cheap to administer, and it seems like they would yield valuable information about how CFAR is doing.