Thanks, I had missed this in my reading. It does seem a strange choice to include in the speech (in a negative way) if the goal is to build a broad alliance against building ASI. Many rationalists are against building ASI in our current civilizational state, including Eliezer who started the movement/community.
@geoffreymiller, can you please explain your thought process for including this word in your sentence? I’m really surprised that you seem to consider yourself a rationalist (using “we” in connection with rationalism and arguing against people who do not consider you to be a community member “in good standing”[1]) and also talk about us in an antagonistic/unfriendly way in front of others, without some overriding reason that I can see.
I had upvoted a bunch of your comments in that thread, thinking that we should consider you a member in good standing.
I didn’t say that all Rationalists are evil. I do consider myself a Rationalist in many ways, and I’ve been an active member of LessWrong and EA for years, and have taught several college courses on EA that include Rationalist readings.
What I did say, in relation to my claim that ‘they’ve created in a trendy millenarian cult that expects ASIs will fill all their material, social, and spiritual needs’, is that ‘This is the common denominator among millions of tech bros, AI devs, VCs, Rationalists, and effective accelerationists’.
The ‘common denominator’ language implies overlap, not total agreement.
And I think there is substantial overlap among these communities—socially, financially, ethically, geographically.
Many Rationalists have been absolutely central to analyzing AI risks, advocating for AI safety, and fighting the good fight. But many others have gone to work for AI companies, often in ‘AI safety’ roles that do not actually slow down AI capabilities development. And many have become e/accs or transhumanists who see humanity as a disposable stepping-stone to something better.
Yes, on the surface all you did was point out an overlap between Rationalists and other groups, but what I don’t understand is why you chose to emphasize this particular overlap, instead of, for example, the overlap between us and conservatives in wanting to stop ASI from being built, or simply leaving the Rationalists out of this speech and talking about us another time when you can speak with more nuance.
My hypotheses:
1. You just want to speak the truth as you see it, without regard to the political consequences. You had room to insert “Rationalist” into that derogatory sentence, but not room to say something longer about how rationalists and conservatives should be allies in this fight.
2. You had other political considerations that you can’t make explicit here, e.g. trying to signal honesty or loyalty to your new potential allies, or preempting a possible attack from other conservatives that you’re a Rationalist who shouldn’t be trusted (e.g. because we’re generally against religion).
I’m leaning strongly towards 2 (as 1 seems implausible given the political nature of the occasion), but still find it quite baffling, in part because it seems like you probably could have found a better way to accomplish what you wanted, with fewer of the negative consequences (i.e., alienating the community that originated much of the thinking on AI risk, and making future coalition-building between our communities more difficult).
I think I’ll stop here and not pursue this line of questioning/criticism further. Perhaps you have some considerations or difficulties that are hard to talk about and for me to appreciate from afar.