I’m just a little leery of calling things “wrong” when they make the same predictions about observations as being “right.” I don’t want people to think that we can avoid “wrong ontologies” by starting with some reasonable-sounding universal prior and then updating on lots of observational data. Or that something “wrong” will be doing something systematically stupid, probably due to some mistake or limitation that of course the reader would never program into their AI.