Thanks for an object-level response!
Yup, that’s an accurate enough paraphrase.
I’ll say first that I… don’t actually endorse their model, maybe at all. But this post was meant to contextualize what the model even is, that it’s maybe plausible in principle, and that their choices are made with respect to that model, rather than just “random spiritual community is bad because they’re bad”.
(1), people who greatly inspire others almost never started out as followers in a school for how to become inspiring (this is similar to the issues with CFAR, although I’d say it was less outlandish to assume that rationality is teachable than that sainthood is).
I think this is kind of wrong: lots of religious leaders trained within standard institutions in established traditions, lots of musicians get extensive training/coaching in all the aspects of performance beyond their instrument, etc. This also isn’t really a crux, though, because:
(2), even if you could create a bunch of particularly virtuous and x-risk-concerned individuals, the path to impact would remain non-obvious from there: they’d be neither famous nor powerful nor particularly smart, rational, or skilled, so how are they going to have an outsized impact later?
So Maple’s theory of change is not necessarily “get people enlightened, and then make sure they’re as agentic as possible”, but more like: get people enlightened, and then some combination of:

- use whatever wisdom they gain to solve technical alignment
  (this seems mostly just silly to me)
- have them diffuse that wisdom into e.g. tech culture, “purifying it from the inside out”
  (again, I don’t think this is likely at all, like I said, but maybe it’s more plausible)
- resolve the incentives of the AI race, domestically and internationally… somehow
This feels somewhat like a strawman, but on brief reflection it also feels like a hole in my explanation, and maybe I’m just wrong here.
Maybe a different story I could tell would be that it’s more like “if you want, you can join us in trying to do something really hard, which has power-law returns, knowing that the modal outcome is burnout and some psychological damage”, making it comparable to competitive bodybuilding, classical music training, or doing a startup. (Edit: note that Maple doesn’t include the “modal outcome is moderate psychological damage” part, though neither do the examples, really.)