Of course the game is that we don’t want to prove things about the algorithms in question; we are happy to form justified beliefs about them in whatever way we can, including inductive inference. But the point is that there are things we don’t understand.
And the question is: who cares? The mechanism by which human beings predict their future behavior is not logical inference. Similar ad-hoc Bayesian extrapolation techniques could be used in any general AI without worrying about Löbian obstacles. So why is this such a pressing issue?
I don’t wish to take away from the magnitude of your accomplishment. It is an important achievement. But in the long run I don’t think it’s going to be a very useful result in the construction of superhuman AGIs, specifically. And it’s reasonable to ask why MIRI is assigning strategic importance to these issues.