Which of these five AI alignment research project ideas are no good?

I’ll post five AI alignment research project ideas as comments. It would be great if you could approval-vote on them using upvotes. That is, if you think a project idea isn’t good, leave the comment as is; otherwise, give it a single upvote.

The project ideas follow this format (cf. The Craft of Research):

I’m studying <topic>,
    because I want to <question that guides the search>,
        in order to help my reader understand <more significant
        question that would be informed by an answer to the
        previous question>.

The project ideas are set in fixed-width text in order to preserve the indentation. If they get formatted strangely, you might be able to fix it by increasing the width of your browser window or zooming out.