I edited the original post to make the same point with less sarcasm.
I take risk from AI very seriously, which is precisely why I am working on alignment at OpenAI. I am also open to talking with people who have different opinions, which is why I try to follow this forum (and also preordered the book). But I do draw the line at people making Nazi comparisons.
FWIW I think radicals often hurt the causes they espouse, whether it is animal rights, climate change, or Palestine. Even if, after decades, the radicals are perceived to have been on “the right side of history”, their impact was often negative and slowed their cause’s progress: David Shor was famously cancelled for making this point in the context of the civil rights movement.
Sorry to hear the conversation was on a difficult topic for you; I imagine that is true for many of the Jewish folks we have around these parts.
FWIW I think we were discussing Eichmann in order to analyze what ‘evil’ is or isn’t, and did not make any direct comparisons between him and anyone.
...oh, now I see that Said’s “Hmm… that last line is rather reminiscent of something, no?” is probably making such a comparison (I couldn’t tell what he meant when I read it initially). I can see why you’d respond negatively to that. While there’s a valid point to be made about how people who just try to gain status/power/career-capital without thinking about ethics can do horrendous things, I do not think it is healthy for discourse to express that in the passive-aggressive way that Said did.
The comparisons invite themselves, frankly. “Careerism without moral evaluation of the consequences of one’s work” is a perfect encapsulation of the attitudes of many of the people who work in frontier AI labs, and I decline to pretend otherwise.
(And I must also say that I find the “Jewish people must not be compared to Nazis” stance to be rather absurd, especially in this sort of case. I’m Jewish myself, and I think that refusing to learn, from that particular historical example, any lessons whatsoever that could possibly ever apply to our own behavior, is morally irresponsible in the extreme.)
EDIT: The primary motivation of my comment about Eichmann was indeed to correct the perception of the historians’ consensus, so if you prefer, I can move the comparison to a separate comment; the rest of the comment stands without that part.
I agree with your middle paragraph.
To be clear, I would approve more of a comment that made the comparison overtly[0] than of one that made it in a subtle way that was harder to notice or that people missed entirely (I did not realize what you were referring to until I tried to puzzle out why boaz had gotten so upset!). I think it is not healthy for people to only realize later that they were compared to Nazis. And I think it is fair for them to consider it an underhanded way to cause them social punishment, done in a way that was hard to respond to directly. I believe it’s healthier for attacks[1] to be open and clear.
[0] To be clear, there may still be good reasons not to throw in such a jab at this point in the conversation, but my main point is that doing it with subtlety makes it worse, not better, because it also feels sneaky.
[1] “Attacks”, a word which here means “statements that declare someone has a deeply rotten character or whose behavior has violated an important norm, in a way that if widely believed will cause people to punish them”.
(I don’t mean to derail this thread with discussion of discussion norms. Perhaps if we build that “move discourse elsewhere” button, it can later be applied back to this thread.)
Thank you Ben. I don’t think name calling and comparisons are helpful to a constructive debate, which I am happy to have. Happy 4th!
boazbarak—I don’t understand your implication that my position is ‘radical’.
I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does—that it’s a significant risk.
The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they’re endangering all of our kids.
In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.