I agree that a comparison to Eichmann is not optimal.
Instead, if AI turns out to have consequences bad enough to outweigh the good, a better comparison for people working at AI labs would be Thomas Midgley, who insisted that his leaded gasoline couldn’t be dangerous even when presented with counter-evidence, and Edward Teller, who (as far as I can tell) was simply fascinated by the engineering challenge of scaling hydrogen bombs to yields that could incinerate entire continents.
These two men still embody archetypes of what could reasonably be called “evil”, but ones that arguably fit the psychology of people currently working at AI labs better.
These examples also avoid Godwin’s-law-style attractors.