Perhaps the mentors changed, and the current ones place much more value on skills like being good at coding, running ML experiments, etc., than on understanding the key problems, having conceptual clarity around AI X-risk, etc.
There’s certainly more of an ML-streetlighting effect. The most recent track has 5 mentors on “Agency”, of whom (AFAICT) two work on “AI agents”, one works mostly on AI consciousness & welfare, and only two (Ngo & Richardson) work on “figuring out the principles of how [the thing we are trying to point at with the word ‘agency’] works”. MATS 3.0 (?) had 6 mentors focused on something in this ballpark (Wentworth & Kosoy, Soares & Hebbar, Armstrong & Gorman), and the total number of mentors was smaller then.
It might also be the case that there are proportionally more mentors working for capabilities labs.