Thanks for an object-level response!

Yup, that’s an accurate enough paraphrase.

I’ll say first, I… don’t actually endorse their model, maybe at all, but this post was to contextualize what the model even is, to show that it’s maybe in principle plausible, and that their choices are made with respect to that, rather than just random-spiritual-community-is-bad-just-because-they’re-bad.
> (1), people who greatly inspire others almost never started out as followers in a school for how to become inspiring (this is similar to the issues with CFAR, although I’d say it was less outlandish to assume that rationality is teachable rather than sainthood).
I think this is kind of wrong: lots of religious leaders trained within standard institutions in established traditions, and lots of musicians get extensive training/coaching in all the aspects of performance besides their instrument, etc. This also isn’t really a crux, because:
> (2), even if you could create a bunch of particularly virtuous and x-risk-concerned individuals, the path to impact would remain non-obvious from there, since they’d neither be famous nor powerful nor particularly smart or rational or skilled, so how are they going to have an outsized impact later?
So Maple’s theory of change is not necessarily “get people enlightened, and then make sure they’re as agentic as possible”, but more like getting people enlightened and then some combination of:

- use whatever wisdom they gain to solve technical alignment (this seems mostly just silly to me)
- have them diffuse that wisdom into e.g. tech culture, “purifying it from the inside out” (again, I don’t think this is likely at all, like I said, but maybe more plausible)
- resolve the incentives of the AI race, domestically and internationally… somehow
> I think this falls under the general concept of Pascal’s abuser: “Hey, I am doing something obviously harmful, but under my reasoning it has a microscopic chance of saving the world, therefore it’s okay.” Which is precisely what “Why Are There So Many Rationalist Cults?” is about.
This feels somehow like a strawman, but reflecting on it briefly, it also feels like a hole in my explanation, and maybe I’m just wrong here.
Maybe a different story I could tell would be that it’s more like “if you want, you can join us in trying to do something really hard, which has power-law returns, knowing that the modal outcome is burnout and some psychological damage”, so comparable to competitive bodybuilding, classical music training, or doing a startup. (Edit: note, Maple doesn’t include the “modal outcome is moderate psychological damage” part, though neither do the examples, really.)
> I don’t know enough about Maple to have an opinion on it. Here I am operating on feelings, and they remind me of Leverage Research. A project separate enough that it’s not my business what they are doing… except when they fuck up, and then it becomes “abuse in the rationalist community”. Also, they recruit a lot in the rationalist community.
>
> These projects are justified by potential benefits, but I also see some potential negative externalities. (And by the same Pascalian logic, shouldn’t we be extra careful about bad things happening to people who want to save the world?)