[Question] If there were a Millennium Prize equivalent for AI alignment, what would the problems be?

If someone were to create one or more huge prizes (e.g. 1 to 10 million dollars each) for solving important problems related to AI alignment, it might have a decent chance of focusing many researchers on the topic, which could help advance progress towards solutions, as well as make the field more respectable.

In order to avoid bad feelings, attract serious academics, and get meaningful results, the problems would have to be clearly defined and have unambiguous solutions; something like "solve AI alignment" won't cut it.

The problems wouldn't need to demand complete solutions to AI alignment. Instead, they should be initial explorations on which further research and new branches of mathematics and computer science can be built. We want people to win these prizes, and quickly, so we can iterate with further prizes and build more momentum.

So what problems can you think of that are concrete, achievable, and would represent useful milestones towards AI alignment?