I’ve been excited about pitching AGI x-risks to conservatives since seeing the great outreach work and writeup from AE Studio, Making a conservative case for alignment.
My fervent hope is that we somehow avoid making this a politically polarized issue. I fear that polarization easily overwhelms reason, and is one of the few ways the public could fail to appreciate the dreadful, simple logic in time to be of any help.
Seth—thanks for sharing that link; I hadn’t seen it, and I’ll read it.
I agree that we should avoid making AI safety either liberal-coded or conservative-coded.
But we should not hesitate to use different messaging, emphasis, talking points, and verbal styles when addressing liberal or conservative audiences. That’s just good persuasion strategy, and it can be done with epistemic and ethical integrity.