I’ve actually been doing research on the Semmelweis reflex, ethics, and the problems of unbearable knowledge and defensive unknowing in the context of making decisions about existential threats, with an eye on how to give people more tools and cognitive resources. AI alignment is one of the existential threats of interest. (Essentially, I have been quietly working on the same problem you are publicly working towards solving.)
I think you have some great points, but I have a lot to add.
Unfortunately, I lack a lot of the jargon of LW and operate from different heuristics, so I accidentally alienate people easily. Within the social mores of Less Wrong, what’s appropriate: messaging you privately, writing a very long comment, or creating a separate post? What would you prefer? I would really love to talk about this without accidentally blowing anything up.
What you are saying about persuasion is important in both directions. Have you ever encountered agnotology? It’s a subfield of sociology that studies the deliberate creation of unknowing. In the cases you list above, including tobacco and lead, there has been research into the ways that industries marshalled money, resources, and persuasion to create doubt and prevent regulation.
So there may be an additional important heuristic here: profit motivates individuals to ignore harm, and those with power and money will use institutions to persuade people not to notice it.
It’s not the entirety of the difficulty, but it is something that ethics might help to correct.