There are at least two samples involved. [Y]our original data sampled from reality [...] is fixed—additional computational power will NOT get you more samples from reality.
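(A quick simulation, my own sketch rather than anything from the thread, makes the quoted point concrete: however many bootstrap resamples you throw CPU at, the standard error of an estimate is pinned near sigma/sqrt(n) by the original sample size n. Extra compute only reduces the noise of the *bootstrap estimate itself*, not the underlying sampling uncertainty.)

```python
# Sketch: extra compute cannot substitute for more samples from reality.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                      # fixed "samples from reality"
data = rng.normal(loc=0.0, scale=1.0, size=n)

def bootstrap_se(data, n_resamples, rng):
    """Standard error of the mean, estimated by bootstrap resampling."""
    means = [rng.choice(data, size=len(data), replace=True).mean()
             for _ in range(n_resamples)]
    return float(np.std(means))

cheap = bootstrap_se(data, 1_000, rng)       # modest compute
lavish = bootstrap_se(data, 100_000, rng)    # 100x the compute

# Both land near data.std() / sqrt(n); the extra FLOPS buy almost nothing.
print(cheap, lavish, data.std() / np.sqrt(n))
```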
That’s true for something embodied as Human v1.0 or, say, in a robot chassis, though even in that case the I/O bound might end up being greatly superhuman: certainly the most intelligent humans can glean much more information than the least intelligent can from sensory inputs of basically fixed length, which suggests to me that the size of our training set is not our limiting factor. But it’s not necessarily true for something that can generate its own sensors and effectors, suitably generalized; depending on architecture, that could end up being either CPU-bound or I/O-bound, and I don’t think we understand the problem well enough yet to say which.
The first thing that comes to mind, scaled up to its initial limits, might look like a botnet running image interpretation over the output of every poorly secured security camera in the world (and there are a lot of them). That would almost certainly be CPU-bound. But there are probably better options out there.
it’s not necessarily true for something that can generate its own sensors and effectors
Yes, but now we’re going beyond the scope of the original comment, which talked about how pure computing power (FLOPS + memory) can improve things. If you start building physical things (sensors and effectors), it’s an entirely different ball game.
Sensors and effectors in an AI context are not necessarily physical. They’re essentially the AI’s inputs and outputs, with a few constraints that are unimportant here; the terminology is a holdover from the days when everyone expected AI would be used primarily to run robots. We could be talking about web crawlers and Wikipedia edits, for example.
Fair point, though physical reality is still physical reality. If you need a breakthrough in building nanomachines, for example, you don’t get there by crawling the web really really fast.