Just to focus on the underlying tension, does this differ from noting “all models are wrong, some models are useful”?
an AI designer from a more competent civilization would use a principled understanding of vision to come up with something much better than what we get by shoveling compute into SGD
How sure are you that a “principled understanding of vision” could lead to perfect modeling, as opposed to just a different set of tradeoffs (of domain, precision, recall, cost, and error cases)? The human brain is also quite susceptible to adversarial inputs (both constructed illusions and evolved camouflage), though they differ enough from ML adversarial examples that the specific failures aren’t directly comparable.