Computer performance in chess (among many other things) scales logarithmically or worse with hardware speed. Humans with more time and larger collaborating groups also show diminishing returns.

But if we're talking about reinforcement learning and sensory experience in themselves, we're not interested in the (sublinear) usefulness of scaling for intelligence, but in the number of subsystems undergoing the morally relevant processes. Neuron count is still a rough proxy for that (details of the balance of nervous system tissue between functions, energy supply, firing rates, and other issues would matter substantially), but the relationship should be far closer to linear.
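The contrast between the two scaling regimes can be made concrete with a toy sketch. This is purely illustrative, not from the original text: the ~70 Elo per doubling of compute is a commonly cited rough figure for classical chess engines, used here only as a stand-in for "logarithmic or worse" returns, while the linear function stands in for counting subsystems by neuron count.

```python
import math

# Assumed illustrative constant: roughly how much Elo a classical chess
# engine gains per doubling of compute (a commonly cited ballpark).
ELO_PER_DOUBLING = 70

def chess_skill_gain(speedup: float) -> float:
    """Approximate Elo gained from multiplying compute by `speedup`.

    Logarithmic: each successive doubling buys the same fixed gain.
    """
    return ELO_PER_DOUBLING * math.log2(speedup)

def subsystem_count(neurons: float, baseline: float) -> float:
    """Near-linear proxy: candidate morally relevant subsystems
    scale roughly in proportion to neuron count."""
    return neurons / baseline

# A 1000x hardware increase is only ~10 doublings of compute...
print(chess_skill_gain(1000))          # diminishing, logarithmic gain
# ...while 1000x the neurons means ~1000x the candidate subsystems.
print(subsystem_count(1000.0, 1.0))    # roughly linear growth
```

Under these toy assumptions, a thousandfold increase in compute buys under 700 Elo of chess skill, while a thousandfold increase in neurons multiplies the subsystem proxy a thousandfold, which is the asymmetry the paragraph above is pointing at.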