For two, a person who has done evil and a person who is evil are quite different things. Sadly, I think it’s not always the case that a person’s character is aligned with a particular behavior of theirs.
I do think many of the historical people most widely considered to be evil now were similarly not awful in full generality, or even across most contexts. For example, Eichmann, the ops lead for the Holocaust, was apparently a good husband and father, and generally took care not to violate local norms in his life or work. Yet personally I feel quite comfortable describing him as evil, despite “evil” being a fuzzy folk term of the sort which tends to imperfectly/lossily describe any given referent.
I’m not quite sure what I make of this, so I’ll take this opportunity to think aloud about it.
I often take a perspective where most people are born a kludgey mess, and then if they work hard they can become something principled and consistent and well-defined. But without that, they don’t have much in the way of persistent beliefs or morals such that they can be called ‘good’ or ‘evil’.
I think of an evil person as someone more like Voldemort in HPMOR, who has reflected on his principles and will be persistently a murdering sociopath, than someone who ended up making horrendous decisions but wouldn’t in a different time and place. I think if you put me under a lot of unexpected political forces and forced me to make high-stakes decisions, I could make bad decisions, but not because I’m a fundamentally bad person.
I do think it makes sense, in our civilization, to write some people off as bad people. There are people who have poor impulse control, who have poor empathy, who are pathological liars, who aren’t save-able by any of our current means, and who will always end up in jail or hurting the people around them. I rarely interact with such people, so it’s hard for me to keep this in mind, but I do believe such people exist.
But evil seems a bit stronger than that; it seems a bit more exceptional. Perhaps I would consider SBF an evil person: he seems to me someone who knew he was a sociopath from a young age, who didn’t care about people, who would lie and deceive, and who was hyper-competent; I expect that if you release him into society he will robustly continue to do extreme amounts of damage.
Is that who Eichmann was? I haven’t read the classic book on him, but I thought the point of ‘the banality of evil’ was that he seemed quite boring and like many other people? Is it the case that you could replace Eichmann with like >10% of the population and get similar outcomes? 1%? I am not sure if it is accurate to think of that large a chunk of people as ‘evil’, as being the kind of robustly bad people who should probably be thrown in prison for the protection of civilization. My current (superficial) understanding is that Eichmann enacted an atrocity without being someone who would persistently do so in many societies. He had the capacity for great evil, but this was not something he would reliably seek out.
It is possible that somehow thousands of people like SBF and Voldemort have gotten together to work at AI companies; I don’t currently believe that. To be clear, I think that if we believe there are evil people, then it must surely describe some of the people working at big AI companies that are building doomsday machines, who are very resiliently doing so in the face of knowing that they’re hastening the end of humanity, but I don’t currently think it describes most of the people.
This concludes my thinking aloud; I would be quite interested to read more of how your perspective differs, and why.
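(cf. Are Your Enemies Innately Evil? from the Sequences)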
Ben—your subtext here seems to be that only lower-class violent criminals are truly ‘evil’, whereas very few middle/upper-class white-collar people are truly evil (with a few notable exceptions such as SBF or Voldemort) -- with the implication that the majority of ASI devs can’t possibly be evil in the ways I’ve argued.
I think that doesn’t fit the psychological and criminological research on the substantial overlap between psychopathy and sociopathy, and between violent and non-violent crime.
It also doesn’t fit the standard EA point that a lot of ‘non-evil’ people can get swept up in doing evil collective acts as parts of collectively evil industries, such as slave-trading, factory farming, Big Tobacco, the private prison system, etc. - but that often, the best way to fight such industries is to use moral stigmatization.
You misread me on the first point; I said that (something kind of like) ‘lower-class violent criminals’ are sometimes dysfunctional and bad people, but I was distinguishing them from someone more hyper-competent and self-aware like SBF or Voldemort; I said that only the latter are evil. (For instance, they’ve hurt orders of magnitude more people.)
(I’m genuinely not sure what research you’re referring to – I expect you are 100x as familiar with the literature as I am, and FWIW I’d be happy to get a pointer or two of things to read.[1])
The standard EA point is to use moral stigmatization? Even if that’s accurate, I’m afraid I no longer have any trust in EAs to do ethics well. As an example that you will be sympathetic to, lots of them have endorsed working at AI companies over the past decade (but many many other examples have persuaded me of this point).
To be clear, I am supportive of moral stigma being associated with working at AI companies. I’ve shown up to multiple protests outside the companies (and I brought my mum!). If you have any particular actions in mind to encourage me to do (I’m probably not doing as much as I could), I’m interested to hear them. Perhaps you could write a guide to dealing with people in your social scene who work on building doomsday devices, in a way that holds a firm moral line while not being socially self-destructive / not immediately blowing up all of your friendships. I do think more actionable advice would be helpful.
I expect it’s the case that crime rates correlate positively with impulsivity and low IQ, and negatively with wealth. Perhaps you’re saying that psychopathy and sociopathy do not correlate with social class? That sounds plausible. (I’m also not sure what you’re referring to with the violent part; my guess is that violent crime does correlate with social class.)
Is that who Eichmann was? I haven’t read the classic book on him, but I thought the point of ‘the banality of evil’ was that he seemed quite boring and like many other people? Is it the case that you could replace Eichmann with like >10% of the population and get similar outcomes? 1%? I am not sure if it is accurate to think of that large a chunk of people as ‘evil’, as being the kind of robustly bad people who should probably be thrown in prison for the protection of civilization. My current (superficial) understanding is that Eichmann enacted an atrocity without being someone who would persistently do so in many societies. He had the capacity for great evil, but this was not something he would reliably seek out.
Eichmann was definitely evil. The popular conception of Eichmann as merely an ordinary guy who was “just doing his job” and was “boring” is partly a mischaracterization of Arendt’s work, and partly her own mistakes (i.e., characterizations of hers which are no longer considered accurate by historians).
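An example of the former: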
Arendt also notes very explicitly in the article, Eichmann’s evident pride in having been responsible for so many deaths.[24] She notes also the rhetorical contradiction of this pride with his claim that he will gladly go to the gallows as a warning to all future antisemites, as well as the contradiction between this sentiment and the entire argument of his defense (that he should not be put to death). Eichmann, as Arendt observes, does not experience the dissonance between these evidently contradictory assertions but finds the aptness of his clichés themselves to be —from the perspective of his own inclination—satisfactory substitutes for moral or ethical evaluation. He has no concern about their contradiction. This, coupled with an inability to imagine the perspective [of] others, is the individually psychological expression of what Arendt calls the banality of evil.[15] Careerism without moral evaluation of the consequences of one’s work is the collective or social aspect of this banality of evil.
(Hmm… that last line is rather reminiscent of something, no?)
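Concerning the latter: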
In her 2011 book Eichmann Before Jerusalem, based largely on the Sassen interviews and Eichmann’s notes made while in exile, Bettina Stangneth argues that Eichmann was an ideologically motivated antisemite and lifelong committed Nazi who intentionally built a persona as a faceless bureaucrat for presentation at the trial.[228] Historians such as Christopher Browning, Deborah Lipstadt, Yaacov Lozowick, and David Cesarani reached a similar conclusion: that Eichmann was not the unthinking bureaucratic functionary that Arendt believed him to be.[229] Historian Barbara W. Tuchman wrote of Eichmann, “The evidence shows him pursuing his job with initiative and enthusiasm that often outdistanced his orders. Such was his zeal that he learned Hebrew and Yiddish the better to deal with the victims.”[230] Concerning the famous characterisation of his banality, Tuchman observed, “Eichmann was an extraordinary, not an ordinary man, whose record is hardly one of the ‘banality’ of evil. …”
I am one of those people who are supposed to be stigmatized/deterred by this action. I doubt this tactic will be effective. This thread (including the disgusting comparison to Eichmann, who directed the killing of millions in the real world—not in some hypothetical future one) does not motivate me to interact with the people holding such positions. Given that much of my extended family was wiped out by the Holocaust, I find these Nazi comparisons abhorrent, and would not look forward to interacting with people making them, whether or not they decide to boycott me.
BTW this is not some original tactic; PETA has been using similar approaches for veganism. I don’t think they are very effective either.
To @So8res—I am surprised and disappointed that this Godwin’s-law thread survived a moderation policy that is described as “Reign of Terror”.
I’ve often appreciated your contributions here, but given the stakes of existential risk, I do think that if my beliefs about risk from AI are even remotely correct, then it’s hard to escape the conclusion that the people presently working at labs are committing the greatest atrocity that anyone in human history has ever committed or will ever commit.
The logic of this does not seem that complicated, and while I disagree with Geoffrey Miller on how he goes about doing things, I have even less sympathy for someone who reacts to a bunch of people thinking extremely seriously and carefully about whether what that person is doing might be extremely bad with “if people making such comparisons decide to ostracize me then I consider it a nice bonus”. You don’t have to agree, but man, I feel like you clearly have the logical pieces to understand why one could believe you are causing extremely great harm, without that implying the insanity of the person believing it.
I respect at least some of the people working at capability labs. One thing that unites all of the ones I do respect is that they treat their role at those labs with the understanding that they are in a position of momentous responsibility, and that them making mistakes could indeed cause historically unprecedented levels of harm. I wish you did the same here.
I edited the original post to make the same point with less sarcasm.
I take risk from AI very seriously, which is precisely why I am working in alignment at OpenAI. I am also open to talking with people who have different opinions, which is why I try to follow this forum (and why I preordered the book). But I do draw the line at people making Nazi comparisons.
FWIW I think radicals often hurt the causes they espouse, whether it is animal rights, climate change, or Palestine. Even if, decades later, the radicals are perceived to have been on “the right side of history”, their impact was often negative and caused progress to take longer: David Shor was famously cancelled for making this point in the context of the civil rights movement.
Sorry to hear the conversation was on a difficult topic for you; I imagine that is true for many of the Jewish folks we have around these parts.
FWIW I think we were discussing Eichmann in order to analyze what ‘evil’ is or isn’t, and did not make any direct comparisons between him and anyone.
...oh, now I see that Said’s “Hmm… that last line is rather reminiscent of something, no?” is probably making such a comparison (I couldn’t tell what he meant by it when I read it initially). I can see why you’d respond negatively to that. While there’s a valid point to be made about how people who just try to gain status/power/career-capital without thinking about ethics can do horrendous things, I do not think it is healthy for discourse to express that in the passive-aggressive way that Said did.
The comparisons invite themselves, frankly. “Careerism without moral evaluation of the consequences of one’s work” is a perfect encapsulation of the attitudes of many of the people who work in frontier AI labs, and I decline to pretend otherwise.
(And I must also say that I find the “Jewish people must not be compared to Nazis” stance to be rather absurd, especially in this sort of case. I’m Jewish myself, and I think that refusing to learn, from that particular historical example, any lessons whatsoever that could possibly ever apply to our own behavior, is morally irresponsible in the extreme.)
EDIT: The primary motivation of my comment about Eichmann was indeed to correct the perception of the historians’ consensus, so if you prefer, I can move the comparison to a separate comment; the rest of the comment stands without that part.
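I agree with your middle paragraph.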
To be clear, I would approve more of a comment that made the comparison overtly[0], rather than one that made it in a subtle way that was harder to notice or that people missed (I did not realize what you were referring to until I tried to puzzle out why boaz had gotten so upset!). I think it is not healthy for people to only realize later that they were compared to Nazis. And I think it fair for them to consider it an underhanded way to cause them social punishment, to do it in a way that was hard to respond to directly. I believe it’s healthier for attacks[1] to be open and clear.
[0] To be clear, there may still be good reasons to not throw in such a jab at this point in the conversation, but my main point is that doing it with subtlety makes it worse, not better, because it also feels sneaky.
[1] “Attacks”, a word which here means “statements that declare someone has a deeply rotten character or whose behavior has violated an important norm, in a way that if widely believed will cause people to punish them”.
(I don’t mean to derail this thread with discussion of discussion norms. Perhaps if we build that “move discourse elsewhere button” that can later be applied back to this thread.)
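Thank you Ben. I don’t think name calling and comparisons are helpful to a constructive debate, which I am happy to have. Happy 4th!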
boazbarak—I don’t understand your implication that my position is ‘radical’.
I have exactly the same view on the magnitude of ASI extinction risk that every leader of a major AI company does—that it’s a significant risk.
The main difference between them and me is that they are willing to push ahead with ASI development despite the significant risk of human extinction, and I think they are utterly evil for doing so, because they’re endangering all of our kids.
In my view, risking extinction for some vague promise of an ASI utopia is the radical position. Protecting us from extinction is a mainstream, commonsense, utterly normal human position.
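(From a moderation perspective: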
I consider the following question-cluster to be squarely topical: “Suppose one believes it is evil to advance AI capabilities towards superintelligence, on the grounds that such a superintelligence would quite likely kill us all. Suppose further that one fails to unapologetically name this perceived evil as ‘evil’, e.g. out of a sense of social discomfort. Is that a failure of courage, in the sense of this post?”
I consider the following question-cluster to be a tangent: “Suppose person X is contributing to a project that I believe will, in the future, cause great harms. Does person X count as ‘evil’? Even if X agrees with me about which outcomes are good and disagrees about the consequences of the project? Even if the harms of the project have not yet occurred? Even if X would not be robustly harmful in other circumstances? What if X thinks they’re trying to nudge the project in a less-bad direction?”
I consider the following sort of question to be sliding into the controversy attractor: “Are people working at AI companies evil?”
The LW mods told me they’re considering implementing a tool to move discussions to the open thread (so that they may continue without derailing the topical discussions). FYI @habryka: if it existed, I might use it on the tangents, idk. I encourage people to pump against the controversy attractor.)
I agree with you on the categorization of 1 and 2. I think there is a reason why Godwin’s law was created: once threads follow the controversy attractor in this direction, they tend to be unproductive.
I completely agree this discussion should be moved outside your post. But the counterintuitive mechanics of LessWrong mean a derailing discussion may actually increase the visibility and upvotes of your original message (by bumping it in the “recent discussion”).
(It’s probably still bad if it’s high up in the comment section.)
It’s too bad that you can only delete comment threads; you can’t move them to the bottom or collapse them by default.
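The apparent aim of OpenAI (making AGI, even though we don’t know how to do so without killing everyone) is evil.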
I agree that a comparison to Eichmann is not optimal.
Instead, if AI turns out to have consequences so bad that they outweigh the good, better comparisons for people working at AI labs would be Thomas Midgley, who insisted that his leaded gasoline couldn’t be dangerous even when presented with counter-evidence, and Edward Teller, who (as far as I can tell) was simply fascinated by the engineering challenges of scaling hydrogen bombs to levels that could incinerate entire continents.
These two people still embody two archetypes of what could reasonably be called “evil”, but arguably fit better with the psychology of people currently working at AI labs.
These two examples also avoid Godwin’s law type attractors.
That’s interesting to hear that many historians believe he was secretly more ideologically motivated than Arendt thought, and also that he portrayed a false face during the trial. Thanks for the info.