I understand all this. I am assuming a constant angular field of view; let’s just assume 120 degrees for now. I am assuming a single photo from a single camera placed at a single location, covering 10^15 pixels. I am not talking about multiple photos stitched together, moving the camera around, and so on.
(Yes, the camera will necessarily have multiple sensor arrays and internally stitch the data together anyway)
And yes, a petapixel camera with a 120-degree (or some other large) field of view could cover the United States at 3 mm resolution.
I am not sure if we are actually disagreeing.
I am saying someone should be able to place a camera outside US borders and yet be able to do facial recognition of people inside from thousands of kilometres away.
Suppose it’s a bright day, and the 3×3 mm surface reflects 5% of the light. How many photons per second will it direct onto a patch of (10×10 / 10^15) m² in area, 40 kilometres away?

I asked this question to Opus. It works out to 1 photon per year.
This math assumes a raw pixel with no optics, which is an absurd way to build a camera. With a 1m lens at 40km, you could get ~10⁵ photons per second (13 OOM better).
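For concreteness, here is a back-of-envelope sketch of both numbers. The constants are my own assumptions (≈1 kW/m² of bright sunlight, 550 nm photons, reflected light spread crudely over a hemisphere), not anything fixed by the thread:

```python
import math

# Assumed illumination: bright sunlight, ~1 kW/m^2, treated as 550 nm photons.
irradiance = 1000.0            # W/m^2 (assumed)
albedo = 0.05                  # 5% reflectance
patch_area = 3e-3 * 3e-3       # 3 mm x 3 mm target, m^2
distance = 40e3                # 40 km, in m
wavelength = 550e-9            # m (assumed)
h, c = 6.626e-34, 3.0e8
photon_energy = h * c / wavelength          # ~3.6e-19 J per photon

# Power the patch reflects, spread (crudely) over a hemisphere at 40 km.
reflected_power = irradiance * albedo * patch_area        # W
flux = reflected_power / (2 * math.pi * distance**2)      # W/m^2 at the camera

# Case 1: bare "pixel" of area (10*10 / 10^15) m^2, no optics.
pixel_area = 10 * 10 / 1e15                               # 1e-13 m^2
bare_rate = flux * pixel_area / photon_energy             # photons/s
per_year = bare_rate * 3.15e7                             # ~0.4 photons/year

# Case 2: a 1 m diameter lens feeding that same pixel.
lens_area = math.pi * 0.5**2                              # ~0.79 m^2
lens_rate = flux * lens_area / photon_energy              # ~1e5 photons/s

print(f"bare pixel: ~{per_year:.1f} photons/year")
print(f"1 m lens:   ~{lens_rate:.0e} photons/s")
```

The collecting-area ratio (0.79 m² vs 10⁻¹³ m²) is about 10¹³, which is where the ~13 OOM gap comes from.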
The problem here is the diffraction limit. At the 2,500 km ranges discussed, 3mm resolution requires a single aperture of ~500m or a constellation of ~7,500 JWST-scale telescopes tiling the coverage. Optical interferometry could theoretically reduce the count, but requires maintaining satellite relative positions to within a wavelength of light.
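The aperture number is easy to sanity-check with the Rayleigh criterion, assuming 550 nm light and a 6.5 m JWST-class mirror (both figures are my assumptions):

```python
wavelength = 550e-9      # visible light, m (assumed)
range_m = 2.5e6          # 2,500 km
target_res = 3e-3        # 3 mm ground resolution

# Rayleigh criterion: smallest resolvable angle ~ 1.22 * lambda / D,
# so the required aperture is D = 1.22 * lambda * R / x.
aperture = 1.22 * wavelength * range_m / target_res
print(f"single aperture: ~{aperture:.0f} m")            # ~560 m

# Tiling the same resolving aperture with JWST-scale mirrors:
jwst = 6.5                                              # m, effective diameter
n_telescopes = (aperture / jwst) ** 2
print(f"JWST-scale telescopes: ~{n_telescopes:.0f}")    # ~7,400
```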
maintaining satellite relative positions to within a wavelength of light
There’s no law of physics that prevents humanity from building either of these things. I’m just pessimistic about the engineering advancing to the point that we can build this in the next 10 years (without the help of superhuman intelligence, that is).

Yeah, seems like ASI can be achieved well before the monitoring can be built.
The constant angular field of view is the disagreement. A camera in the mid-gigapixel to low-terapixel range could cover one city by using an appropriate lens at an arbitrary distance (including space).
Any sensor finer than that would either cover substantial amounts of “boring” area (e.g. nature preserves, agricultural areas), or increase the resolution beyond your target.
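The coverage-vs-resolution tradeoff is easy to see with rough numbers. The area figures here are my assumptions (~1,000 km² for a large city, ~9.8 million km² for the contiguous US):

```python
def pixels_needed(area_m2, resolution_m):
    """Pixels to cover a ground area at a given ground sample distance."""
    return area_m2 / resolution_m**2

city = 1_000e6        # ~1,000 km^2, in m^2 (assumed)
usa = 9.8e6 * 1e6     # ~9.8 million km^2, in m^2

# At ~10 cm resolution (enough to spot a person, marginal for faces):
print(f"city @ 10 cm: {pixels_needed(city, 0.1):.0e} pixels")  # ~1e11, ~100 GP
print(f"USA  @ 10 cm: {pixels_needed(usa, 0.1):.0e} pixels")   # ~1e15, ~1 PP
```

So at a fixed ~10 cm resolution, a gigapixel-to-terapixel sensor is a city-scale instrument, and a petapixel sensor is a country-scale one.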
I am still not clear where we are disagreeing, sorry.
What do you think is the bottleneck to building a petapixel camera that lets you do facial recognition from outside national borders? I don’t think you can simply stitch a bunch of gigapixel cameras together and achieve this.
A camera that can do facial recognition from outside of national borders doesn’t need to be a petapixel one. A mid-gigapixel camera with good optics can cover an entire city at once (or at least it could if it wasn’t for all the buildings in the way).
The main barrier to petapixel cameras is that they don’t serve your goal of full public monitoring (regardless of whether it’s by the government or by everyone individually).
A camera that can do facial recognition from outside of national borders doesn’t need to be a petapixel one. A mid-gigapixel camera with good optics can cover an entire city at once (or at least it could if it wasn’t for all the buildings in the way).
This is technically true. But yes, if you had the tech to build this, it would become trivial to build a petapixel camera too (for someone who can afford it). The hard part is doing 0.1 metre resolution from 10,000 kilometres away.
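The same Rayleigh-criterion arithmetic as above, applied to this case (550 nm again assumed), gives a much smaller but still unbuilt aperture:

```python
wavelength = 550e-9      # visible light, m (assumed)
range_m = 10_000e3       # 10,000 km
target_res = 0.1         # 0.1 m ground resolution

# Required aperture: D = 1.22 * lambda * R / x
aperture = 1.22 * wavelength * range_m / target_res
print(f"~{aperture:.0f} m aperture")   # ~67 m
```

Roughly a 70 m mirror: far beyond anything flown, but about four orders of magnitude less demanding in collecting area than the ~500 m aperture the 3 mm case requires.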
Thanks for this exchange btw, I guess in the future I could be more precise.
The main barrier to petapixel cameras is that they don’t serve your goal of full public monitoring (regardless of whether it’s by the government or by everyone individually).
Why?
Assume we had the tech to manufacture petapixel cameras, and individuals worldwide could purchase them (i.e. a govt couldn’t just lock down the supply chain). Why does this not eventually lead to a world with zero privacy for everyone?