Wild guesses here. I’ve done work in optical product identification, but I don’t know how well those challenges translate. Also, it’s an obvious enough idea that I expect there are teams working on it.
Lens and CCD technology is not trivial at those speeds and that level of angular resolution. It's not just about counting pixels; it's about getting light to exactly the right place on the sensor, for long enough to register. I honestly don't know if that's solvable.
More boringly, clouds and nighttime would make this much less useful, especially since enemies can plan missions around the expected detection capabilities. I haven't done the math, but even on clear days in daytime, dust and haze likely interfere too much beyond even a few km of distance.
But we can easily capture a picture of a fighter jet when it's close. And the farther away it is, the higher the required angular resolution, but also the lower the angular speed. So do those cancel out and make it not much harder, or doesn't it work like that?
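A quick back-of-envelope check suggests those two effects do partially cancel. For a crossing target, both the jet's angular size and its angular speed scale as 1/distance, so the time the jet spends on any one pixel is roughly constant with range. The sketch below uses made-up numbers (jet length, speed, desired pixel count) purely for illustration:

```python
# Back-of-envelope: does required angular resolution cancel against
# angular speed? All numbers below are assumptions for illustration.

L = 15.0   # jet length in metres (assumed)
v = 300.0  # crossing speed in m/s (assumed)
N = 50     # pixels we want across the jet (assumed)

for d in (2_000.0, 10_000.0, 50_000.0):  # range in metres
    ang_size = L / d             # angular size of the jet (rad)
    pixel_ifov = ang_size / N    # required per-pixel resolution (rad)
    ang_speed = v / d            # angular speed of a crossing target (rad/s)
    dwell = pixel_ifov / ang_speed  # time the jet spends on one pixel (s)
    print(f"d={d/1000:5.0f} km  ifov={pixel_ifov:.2e} rad  "
          f"omega={ang_speed:.2e} rad/s  dwell={dwell*1e3:.2f} ms")
```

The dwell time comes out as L/(N·v) at every range, i.e. motion blur per pixel doesn't get worse with distance. What does get worse is light: for a fixed aperture, the photons landing on each (ever-smaller) pixel fall off roughly as 1/distance², so the aperture has to grow with range to keep the exposure usable.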
You can't trivially scale up the angular resolution by bolting more sensors together (or similar methods). Engineering lenses and sensors to such extreme specs gets disproportionately harder.
And aside from that, the problem behaves nonlinearly with the amount of atmosphere between you and the plane. Each bit of distortion in the air along the way compounds, potentially limiting quite harshly how far away you can get any useful image. It may be possible to work around this with AI reconstruction from highly distorted images, but it's far from trivial on the face of it.
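One way to see how harsh the atmospheric limit is: in the standard Fried-parameter model of seeing, turbulence blurs everything to roughly lambda/r0 radians, no matter how big the telescope is. The r0 value below is a rough assumption for a horizontal near-ground path (it varies enormously with conditions), so treat this as a sketch, not a prediction:

```python
# Sketch of the atmospheric seeing limit (Fried-parameter model).
# r0 is a rough assumption for a horizontal near-ground path.

LAM = 550e-9  # wavelength in metres (visible light)
R0 = 0.03     # Fried parameter r0 in metres (assumed; condition-dependent)

def resolution(aperture_d):
    """Effective angular resolution (rad): the worse of the
    diffraction limit (1.22*lambda/D) and the seeing limit (lambda/r0)."""
    diffraction = 1.22 * LAM / aperture_d
    seeing = LAM / R0
    return max(diffraction, seeing)

for d_m in (0.01, 0.05, 0.5, 2.0):  # aperture diameters in metres
    theta = resolution(d_m)
    blur_at_10km = theta * 10_000   # blur spot size at 10 km range (m)
    print(f"D={d_m:4.2f} m  theta={theta:.2e} rad  "
          f"blur@10km={blur_at_10km * 100:.1f} cm")
```

The point of the sketch: once the aperture exceeds r0 (a few centimetres here), making the lens bigger buys you nothing without adaptive optics or computational reconstruction, which is exactly why the distortion, not the sensor, tends to set the range limit.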