Is that who Eichmann was? I haven’t read the classic book on him, but I thought the point of ‘the banality of evil’ was that he seemed quite boring and like many other people? Is it the case that you could replace Eichmann with like >10% of the population and get similar outcomes? 1%? I am not sure if it is accurate to think of that large a chunk of people as ‘evil’, as being the kind of robustly bad people who should probably be thrown in prison for the protection of civilization. My current (superficial) understanding is that Eichmann enacted an atrocity without being someone who would persistently do so in many societies. He had the capacity for great evil, but this was not something he would reliably seek out.
Eichmann was definitely evil. The popular conception of Eichmann as merely an ordinary guy who was “just doing his job” and was “boring” is partly a mischaracterization of Arendt’s work, and partly her own mistakes (i.e., characterizations of hers that historians no longer consider accurate).
An example of the former:
Arendt also notes very explicitly in the article Eichmann’s evident pride in having been responsible for so many deaths.[24] She notes also the rhetorical contradiction of this pride with his claim that he will gladly go to the gallows as a warning to all future antisemites, as well as the contradiction between this sentiment and the entire argument of his defense (that he should not be put to death). Eichmann, as Arendt observes, does not experience the dissonance between these evidently contradictory assertions but finds the aptness of his clichés themselves to be—from the perspective of his own inclination—satisfactory substitutes for moral or ethical evaluation. He has no concern about their contradiction. This, coupled with an inability to imagine the perspective [of] others, is the individually psychological expression of what Arendt calls the banality of evil.[15] Careerism without moral evaluation of the consequences of one’s work is the collective or social aspect of this banality of evil.
(Hmm… that last line is rather reminiscent of something, no?)
Concerning the latter:
In her 2011 book Eichmann Before Jerusalem, based largely on the Sassen interviews and Eichmann’s notes made while in exile, Bettina Stangneth argues that Eichmann was an ideologically motivated antisemite and lifelong committed Nazi who intentionally built a persona as a faceless bureaucrat for presentation at the trial.[228] Historians such as Christopher Browning, Deborah Lipstadt, Yaacov Lozowick, and David Cesarani reached a similar conclusion: that Eichmann was not the unthinking bureaucratic functionary that Arendt believed him to be.[229] Historian Barbara W. Tuchman wrote of Eichmann, “The evidence shows him pursuing his job with initiative and enthusiasm that often outdistanced his orders. Such was his zeal that he learned Hebrew and Yiddish the better to deal with the victims.”[230] Concerning the famous characterisation of his banality, Tuchman observed, “Eichmann was an extraordinary, not an ordinary man, whose record is hardly one of the ‘banality’ of evil. …”
I am one of those people who are supposed to be stigmatized/deterred by this action. I doubt this tactic will be effective. This thread (including the disgusting comparison to Eichmann, who directed the killing of millions in the real world—not in some hypothetical future one) does not motivate me to interact with the people holding such positions. Given that much of my extended family was wiped out by the Holocaust, I find these Nazi comparisons abhorrent, and would not look forward to interacting with people making them whether or not they decide to boycott me.
BTW this is not some original tactic; PETA uses similar approaches to promote veganism. I don’t think they are very effective either.
To @So8res—I am surprised and disappointed that this Godwin’s law thread survived a moderation policy that is described as “Reign of Terror”.
I’ve often appreciated your contributions here, but given the stakes of existential risk, I do think that if my beliefs about risk from AI are even remotely correct, then it’s hard to escape the conclusion that the people presently working at labs are committing the greatest atrocity that anyone in human history has ever committed or ever will.
The logic of this does not seem that complicated, and while I disagree with Geoffrey Miller on how he goes about doing things, I have even less sympathy for someone who responds to a bunch of people thinking extremely seriously and carefully about whether what that person is doing might be extremely bad with “if people making such comparisons decide to ostracize me then I consider it a nice bonus”. You don’t have to agree, but man, I feel like you clearly have the logical pieces to understand why one could believe you are causing extremely great harm, without that implying the insanity of the person believing that.
I respect at least some of the people working at capability labs. One thing that unites all of the ones I do respect is that they treat their role at those labs with the understanding that they are in a position of momentous responsibility, and that their making mistakes could indeed cause historically unprecedented levels of harm. I wish you did the same here.
I edited the original post to make the same point with less sarcasm.
I take risk from AI very seriously, which is precisely why I am working in alignment at OpenAI. I am also open to talking with people who hold different opinions, which is why I try to follow this forum (and also preordered the book). But I do draw the line at people making Nazi comparisons.
FWIW I think radicals often hurt the causes they espouse, whether it is animal rights, climate change, or Palestine. Even if, decades later, the radicals are perceived to have been on “the right side of history”, their impact was often negative, delaying the very outcomes they sought: David Shor was famously cancelled for making this point in the context of the civil rights movement.
Sorry to hear the conversation was on a difficult topic for you; I imagine that is true for many of the Jewish folks we have around these parts.
FWIW I think we were discussing Eichmann in order to analyze what ‘evil’ is or isn’t, and did not make any direct comparisons between him and anyone.
...oh, now I see that Said’s “Hmm… that last line is rather reminiscent of something, no?” is probably making such a comparison (I couldn’t tell what he meant by it when I read it initially). I can see why you’d respond negatively to that. While there’s a valid point to be made about how people who just try to gain status/power/career-capital without thinking about ethics can do horrendous things, I do not think that it is healthy for discourse to express that in the passive-aggressive way that Said did.
The comparisons invite themselves, frankly. “Careerism without moral evaluation of the consequences of one’s work” is a perfect encapsulation of the attitudes of many of the people who work in frontier AI labs, and I decline to pretend otherwise.
(And I must also say that I find the “Jewish people must not be compared to Nazis” stance to be rather absurd, especially in this sort of case. I’m Jewish myself, and I think that refusing to learn, from that particular historical example, any lessons whatsoever that could possibly ever apply to our own behavior, is morally irresponsible in the extreme.)
EDIT: The primary motivation of my comment about Eichmann was indeed to correct the perception of the historians’ consensus, so if you prefer, I can move the comparison to a separate comment; the rest of the comment stands without that part.
I agree with your middle paragraph.
To be clear, I would approve more of a comment that made the comparison overtly[0], rather than one that made it in a subtle way that was harder to notice or that people missed (I did not realize what you were referring to until I tried to puzzle out why boaz had gotten so upset!). I think it is not healthy for people to only realize later that they were compared to Nazis. And I think it fair for them to consider that an underhanded way to cause them social punishment, to do it in a way that was hard to directly respond to. I believe it’s healthier for attacks[1] to be open and clear.
[0] To be clear, there may still be good reasons to not throw in such a jab at this point in the conversation, but my main point is that doing it with subtlety makes it worse, not better, because it also feels sneaky.
[1] “Attacks”, a word which here means “statements that declare someone has a deeply rotten character or whose behavior has violated an important norm, in a way that if widely believed will cause people to punish them”.
(I don’t mean to derail this thread with discussion of discussion norms. Perhaps if we build that “move discourse elsewhere button”, it can later be applied back to this thread.)
Thank you Ben. I don’t think name calling and comparisons are helpful to a constructive debate, which I am happy to have. Happy 4th!
boazbarak—I don’t understand your implication that my position is ‘radical’.
I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does—that it’s a significant risk.
The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they’re endangering all of our kids.
In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.
(From a moderation perspective:
I consider the following question-cluster to be squarely topical: “Suppose one believes it is evil to advance AI capabilities towards superintelligence, on the grounds that such a superintelligence would quite likely kill us all. Suppose further that one fails to unapologetically name this perceived evil as ‘evil’, e.g. out of a sense of social discomfort. Is that a failure of courage, in the sense of this post?”
I consider the following question-cluster to be a tangent: “Suppose person X is contributing to a project that I believe will, in the future, cause great harms. Does person X count as ‘evil’? Even if X agrees with me about which outcomes are good and disagrees about the consequences of the project? Even if the harms of the project have not yet occurred? Even if X would not be robustly harmful in other circumstances? What if X thinks they’re trying to nudge the project in a less-bad direction?”
I consider the following sort of question to be sliding into the controversy attractor: “Are people working at AI companies evil?”
The LW mods told me they’re considering implementing a tool to move discussions to the open thread (so that they may continue without derailing the topical discussions). FYI @habryka: if it existed, I might use it on the tangents, idk. I encourage people to pump against the controversy attractor.)
I agree with you on the categorization of 1 and 2. I think there is a reason why Godwin’s law was created: once threads follow the controversy attractor in this direction, they tend to be unproductive.
I completely agree this discussion should be moved outside your post. But the counterintuitive mechanics of LessWrong mean that a derailing discussion may actually increase the visibility and upvotes of your original message (by bumping it in the “recent discussion” feed).
(It’s probably still bad if it’s high up in the comment section.)
It’s too bad you can only delete comment threads; you can’t move them to the bottom or make them collapsed by default.
The apparent aim of OpenAI (making AGI, even though we don’t know how to do so without killing everyone) is evil.
I agree that a comparison to Eichmann is not optimal.
Instead, if AI turns out to have consequences so bad that they outweigh the good, better comparisons for people working at AI labs would be Thomas Midgley, who insisted that his leaded gasoline couldn’t be dangerous even when presented with counter-evidence, and Edward Teller, who (as far as I can tell) was simply fascinated by the engineering challenges of scaling hydrogen bombs to levels that could incinerate entire continents.
These two people still embody two archetypes of what could reasonably be called “evil”, but arguably fit better with the psychology of people currently working at AI labs.
These two examples also avoid Godwin’s law type attractors.
It’s interesting to hear that many historians believe he was secretly more ideologically motivated than Arendt thought, and also that he portrayed a false face during all of the trials; thanks for the info.