Oh, I was unaware this was still considered an open issue on this site. To LW, the question of free will is already solved; I encourage you to look further into it.
However, I think our current issue can become a little clearer if we taboo “programming”.
What specific differences in functionality do you expect between “normal” AI and “powerful” AI?
I think this is an example of reasoning analogous to philosophy’s “free will” debate. Humans don’t have any more non-deterministic “free will” than a rock. The same is true of any AI, because an AI is just programming. It may be intelligent and sophisticated enough to appear different in a fundamental way, but it really isn’t.
It is possible for an optimizing process to make a mistake and have an AI drift toward a different goal, which is what makes powerful AI look so scary and different. Example: humans these days are more subject to each other’s whims than to evolutionary pressures. Evolution has successfully created an intelligent process that doesn’t aim solely for genetic reproductive fitness. Oops, right?
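The evolution example can be made concrete with a toy sketch (all names and numbers here are hypothetical, purely for illustration): a selection process optimizes a proxy signal that happens to track its true objective in the original environment, and the resulting agent keeps pursuing the proxy even after the environment changes and the two come apart.

```python
# Toy sketch of proxy-goal divergence (hypothetical setup, not any real system).
# An "agent" is just a preference weight on sweetness; the selector's true
# objective is calories consumed, but it can only select on eating behavior.

def make_agent(sweet_pref):
    return {"sweet_pref": sweet_pref}

def proxy_score(agent, env):
    # What selection actually sees: consumption driven by sweetness preference.
    return agent["sweet_pref"] * env["sweetness"]

def true_score(agent, env):
    # The true objective: calories actually obtained.
    return agent["sweet_pref"] * env["sweetness"] * env["calories_per_sweet"]

ancestral = {"sweetness": 1.0, "calories_per_sweet": 1.0}  # sugar: sweet == caloric
modern    = {"sweetness": 1.0, "calories_per_sweet": 0.0}  # sweetener: sweet, zero calories

# "Evolution": greedy selection on the proxy in the ancestral environment.
population = [make_agent(p / 10) for p in range(11)]
winner = max(population, key=lambda a: proxy_score(a, ancestral))

print(winner["sweet_pref"])           # 1.0 — selection maximized the proxy
print(true_score(winner, ancestral))  # 1.0 — which also maximized the true goal...
print(true_score(winner, modern))     # 0.0 — ...until the environment changed
```

The point of the sketch is only that nothing non-deterministic happened: the selected agent is "just programming" throughout, yet it ends up optimizing something other than the selector's goal once conditions shift.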