Good observations. The more general problem is modeling. Models break, and ‘hope for the best, prepare for the worst’ generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true of the models predating economic crises: they failed precisely when it mattered most. One can go through life without modeling but with preparing for the worst, but not the other way around.