Regarding 2: So, I am a little surprised that step 2, "valuable goals cannot be directly specified," is taken as a given.
If we consider an AI as a rational optimizer of the ONE TRUE UTILITY FUNCTION, we might want to look for the best available approximations of it in the short term. The function I have in mind is life expectancy (measured in DALYs or QALYs), since to me it is easier to measure than happiness. It also captures a lot of intuition when you pose a person the following hypothetical:
if you could be born into any society on Earth today, what one number would be most congruent with your preference? Average life expectancy captures very well which societies are good to be born into.
I am also aware of a ton of problems with this, since one has to be careful about humans vs. human/cyborg hybrids, and about time spent in cryo-sleep or ordinary sleep vs. experiential mind-moments. However, I'd rather have an approximate starting point for direct specification than give up on the approach altogether.
Regarding 5: There is an interesting "problem" with "do what I would want if I had more time to think" that happens not in the case of failure, but in the case of success. Let's say we have our happy-go-lucky, life-expectancy-maximizing, death-defeating FAI. It starts to look at society and sees that some widely accepted practices are totally horrifying from its perspective. Its "morality" surpasses ours, which is just an obvious consequence of its intelligence surpassing ours. Something like: the amount of time we make children sit at their desks at school damages their health to the point of ruling out immortality. This particular example might not be so hard to convince people of, but there could be others. At that point, the AI would go against a large number of people and try to create its own schools, which teach how bad the other schools are (or something). The governments don't like this and shut it down, because at this stage we still can for some reason.
Basically, the issue is: this AI is behaving in a friendly manner, which we would understand if we had enough time and intelligence. But we don't, so we don't have enough intelligence to determine whether it is actually friendly or not.
Regarding 6: I feel that you haven't even begun to approach the problem of a sub-group of people controlling the AI. The issue leads into the question of peaceful transitions of that power over the long term. There is also this issue: even if you come up with a scheme for who gets to call the shots around the AI that is actually a good idea, convincing people that it is a good idea, instead of the default "let the government do it," is itself a problem. It's similar in principle to 5.
Note: I may be in over my head here in math-logic world:
For the procrastination paradox:
There seems to be a desire to formalize "T proves G ⇒ G", which messes with completeness. Why not instead try to formalize:
T proves G at time t ⇒ T proves G at time t+1, for all t > 0
That way, G ⇒ the button gets pressed at some time X and wasn't pressed at X−1.
However, if T proves G at X−1, it must also prove G at X (for all X > 1); therefore it won't press the button unless X = 1.
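One way to write the proposed scheme down, using P_t(G) as shorthand for "T proves G at time t" (my own notation, nothing standard):

```latex
% Time-indexed provability sketch; P_t(G) is my shorthand,
% not an established predicate.
\begin{align*}
  &\forall t > 0:\; P_t(G) \rightarrow P_{t+1}(G)
    &&\text{(proofs persist)}\\
  &G \;\equiv\; \exists X:\; \mathrm{pressed}(X) \wedge \neg\mathrm{pressed}(X-1)
    &&\text{(button first pressed at time } X)\\
  &P_{X-1}(G) \rightarrow P_X(G) \;\text{ for all } X > 1
    &&\text{(deferral propagates, so only } X = 1 \text{ presses)}
\end{align*}
```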
Basically: instead of reasoning about whether proving something makes it true, reason about whether proving something at one point leads to re-proving it again at another point. In other words, formalize the very intuition that makes us understand the paradox.
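As a toy illustration of the intuition (a sketch, not a formalization; the agent model and function names here are my own assumptions, not anything from the original post), compare an agent that always trusts a future proof with one that applies the persistence rule:

```python
def procrastinator(horizon: int):
    """Toy agent: at every step it can 'prove' that a future self will
    press the button, so it defers. Returns the press time, or None if
    the button is never pressed within the horizon."""
    for t in range(1, horizon + 1):
        future_proof_available = True  # "G will be proven later" holds at every t
        if not future_proof_available:
            return t  # would press only if no future proof existed
    return None  # defers forever: the procrastination paradox

def persistence_aware():
    """Toy agent using the time-indexed rule: since a proof at t persists
    to t+1, deferring once means deferring forever, so the only consistent
    press time is t = 1 -- press immediately."""
    return 1

print(procrastinator(10_000))  # None: never presses
print(persistence_aware())     # 1: presses now
```

The point of the contrast is just that the paradox lives in the first loop: nothing inside it ever breaks the deferral, which is exactly the behavior the persistence rule makes explicit.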