[Question] What are the high-level approaches to AI alignment?

I’m writing a post comparing some high-level approaches to AI alignment in terms of their false positive risk. Trouble is, there’s no standard agreement on what the high-level approaches to AI alignment are today, either in terms of what constitutes a high-level approach or where to draw the lines when categorizing specific approaches.

So, I’ll open it up as a question to get some feedback before I get too far along. What do you consider to be the high-level approaches to AI alignment?

(I’ll supply my own partial answer below.)