Yes, because it funneled all of its best and brightest into AI safety?
We might be evaluating the hypothetical at different points. I’m thinking of the movement coalescing around the Sequences, except that the message underlying them is “you should solve aging” rather than “you should solve alignment”.
Maybe I’m missing something. Why are you comparing to that hypothetical world?
Richard is saying that in the hypothetical world in which AGI was proven impossible, or something of that nature, the cluster of people belonging to the rationalist-minus-EA[1] set would be trying to solve aging and perfecting cryonics, whereas the cluster of people in the EA-minus-rationalist set would be into global health and ending factory farming.
You had a critique(?) of rationalists: that they didn’t have the motive force or coordination capacity to do much beyond AI safety. But Richard is saying that’s because AI safety took all the talent of the rationalist movement. If AI never existed, those rationalists would obviously be doing something else.
Maybe you could try attacking the hypothetical from a counterfactual angle? That the people in a hypothetical AI-less world wouldn’t have coalesced around anything without AI safety, so there wouldn’t even be an organized community around cryonics and aging? Or that even in our current world, rationalists should have gone into cryonics and aging despite AI looming over our heads?
I think the claim that rationalists in an AI-less counterfactual world would have gone into cryonics and aging is not at all disproven by showing that rationalists in an AI world have not revolutionized cryonics and anti-aging. That argument doesn’t land for me at all; I agree with Richard here.
There are likely not that many people in the pure rationalist-minus-EA set, but I’m referring to dispositions and norms here: the set of self-identified rationalists who are further away from EA.