it would be able to predict its own actions, and this is a logical contradiction, just as it is for us.
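A minimal sketch of the diagonalization intuition behind the “logical contradiction” claim, using a hypothetical toy `predictor` and a `contrarian_agent` (both names are illustrative, not from the thread): if an agent can consult a perfect predictor of its own behavior, it can simply do the opposite of whatever is predicted, so no such predictor can exist.

```python
def predictor(agent):
    """Stand-in for a supposed perfect self-predictor.

    Here it claims the agent will pick "A"; any fixed answer fails
    the same way, which is the point of the diagonalization.
    """
    return "A"


def contrarian_agent():
    # Consult the predictor about our own behavior, then do the opposite.
    predicted = predictor(contrarian_agent)
    return "B" if predicted == "A" else "A"


# Whichever answer the predictor gives, this agent returns the other one,
# so the prediction is always wrong for agents that can read it.
print(predictor(contrarian_agent), contrarian_agent())  # "A" vs "B"
```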
“Not being able to predict one’s own future actions with 100% accuracy” is nowhere near the same thing as “not being able to predict them with any useful accuracy at all.”
I agree, but without 100% accuracy it will not be able to FOOM.
Why? I don’t follow this logic at all.