TsviBT—thanks for a thoughtful comment.
I understand your point about labelling industries, actions, and goals as evil, but being cautious about labelling individuals as evil.
But I don’t think it’s compelling.
You wrote ‘You’re closing off lines of communication and gradual change. You’re polarizing things.’
Yes, I am. We’ve had open lines of communication between AI devs and AI safety experts for a decade. We’ve had pleas for gradual change. Mutual respect, and all that. Trying to use normal channels of moral persuasion. Well-intentioned EAs going to work inside the AI companies to try to nudge them in safer directions.
None of that has worked. AI capabilities development is outstripping AI safety development at an ever-increasing rate. The financial temptations to stay working inside AI companies keep increasing, even as the X-risks keep increasing. Timelines are getting shorter.
The right time to ‘polarize things’ is when we still have some moral and social leverage to stop reckless ASI development. The wrong time is after it’s too late.
Altman, Amodei, Hassabis, and Wang are buying people's souls: paying them hundreds of thousands or millions of dollars a year to work on ASI development, even though most of the workers they supervise know that they're likely increasing extinction risk.
This isn’t just a case of ‘collective evil’ being done by otherwise good people. This is a case of paying people so much that they ignore their ethical qualms about what they’re doing. That makes the evil very individual, and very specific. And I think that’s worth pointing out.