The difference between deception and other types of error is the adversary. All modeling is lossy—our beliefs about the world don't completely match the world, and never can. When inputs trigger our matching algorithms for one thing but are actually another, we can learn wrong things.
For natural environments, these errors are generally pretty lightweight—a cloud that looks like a teapot isn't going to fool us. A broken teapot might or might not; it'd depend on which tests we try.
For adversarial cases, where someone/something is TRYING to fool us, a whole lot depends on the sophistication of their model of us, and of our model of the world (including what types of deception we should be on the lookout for). An entity much smarter than you can model whether you'll actually try to use the teapot, and only make a good fake where needed—using flat images for the background and for anything you won't bother to check.
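To make the asymmetry concrete, here's a minimal toy sketch (all names, properties, and tests are hypothetical, invented for illustration): a natural error fails whatever tests we happen to run, but an adversary that models which tests we'll run only needs to fake those.

```python
# Toy sketch of the adversarial asymmetry above. All names, properties,
# and tests here are hypothetical.

REAL_TEAPOT = {"shape": "teapot", "weight_g": 700, "holds_water": True}

def passes_inspection(obj, tests):
    """Observer accepts obj iff every test it bothers to run matches a real teapot."""
    return all(obj.get(t) == REAL_TEAPOT[t] for t in tests)

def cheap_fake(predicted_tests):
    """Adversary fakes only the properties it predicts the observer will check."""
    return {t: REAL_TEAPOT[t] for t in predicted_tests}

# A casual observer only glances at the shape; the adversary, modeling
# that observer, fakes nothing else.
fake = cheap_fake(predicted_tests=["shape"])
print(passes_inspection(fake, ["shape"]))                 # True: the cheap fake passes
print(passes_inspection(fake, ["shape", "holds_water"]))  # False: an unmodeled test exposes it
```

The adversary's cost scales with the tests it predicts, not with the full richness of the real object—which is why the sophistication of its model of you matters so much.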