Although I don’t understand what you mean by “conservation of computation”, the distribution of computation, information sources, learning, and representation capacity is important in shaping how and where knowledge is represented.
The idea that general AI capabilities can best be implemented or modeled as “an agent” (an “it” that uses “the search algorithm”) is, I think, both traditional and misguided. A host of tasks require agentic action-in-the-world, but those tasks are diverse and will be performed and learned in parallel (see the CAIS report, www.fhi.ox.ac.uk/reframing). Skill in driving somewhat overlaps with — yet greatly differs from — skill in housecleaning or factory management; learning any of these does not provide deep, state-of-the-art knowledge of quantum physics, and can benefit from (but is not a good way to learn) conversational skills that draw on broad human knowledge.
A well-developed QNR store should be thought of as a body of knowledge that potentially approximates the whole of human and AI-learned knowledge, as well as representations of rules/programs/skills/planning strategies for a host of tasks. The architecture of multi-agent systems can provide individual agents with resources that are sufficient for the tasks they perform, but not orders of magnitude more than necessary, shaping how and where knowledge is represented. Difficult problems can be delegated to low-latency AI cloud services (a pattern sketched below).
There is no “it” in this story, and classic, unitary AI agents don’t seem competitive as service providers — which is to say, don’t seem useful.
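To make the delegation pattern concrete, here is a minimal sketch in Python; the function names, the confidence threshold, and the placeholder implementations are illustrative assumptions of mine, not anything specified in the CAIS or QNR documents. An agent with modest local resources answers routine tasks itself and hands harder ones to a shared service:

```python
import random  # used only to fake a confidence estimate in this sketch

def local_policy(task: str) -> tuple[str, float]:
    """A small, task-specific model returning (answer, confidence). Placeholder only."""
    return f"local answer to {task!r}", random.random()

def cloud_service(task: str) -> str:
    """Stand-in for a low-latency AI cloud service with far larger resources."""
    return f"cloud answer to {task!r}"

def handle(task: str, threshold: float = 0.8) -> str:
    """Answer routine tasks locally; delegate hard or low-confidence ones."""
    answer, confidence = local_policy(task)
    return answer if confidence >= threshold else cloud_service(task)

print(handle("adjust lane position"))
print(handle("diagnose an intermittent production-line fault"))
```

The point of the pattern is that each agent needs only enough capacity for its routine workload, while hard cases draw on shared, centrally maintained resources.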
I’ve noted the value of potentially opaque neural representations (Transformers, convnets, etc.) in agents that must act skillfully, converse fluently, and so on, but operationalized, localized, task-relevant knowledge and skills complement rather than replace knowledge that is accessible by associative memory over a large, shared store.
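As a rough illustration of what “accessible by associative memory over a large, shared store” could mean in practice, retrieval reduces to nearest-neighbor search in an embedding space. The encoder, the store contents, and the cosine-similarity measure below are placeholder assumptions for the sketch, not the QNR design:

```python
import numpy as np

# Hypothetical shared store: each entry pairs an embedding vector with a payload
# (a representation of a fact, document, skill, plan, ...). The dimension and the
# entries are illustrative only.
rng = np.random.default_rng(0)
DIM = 64
store_vectors = rng.normal(size=(10_000, DIM)).astype(np.float32)
store_payloads = [f"knowledge item {i}" for i in range(10_000)]

def embed(query: str) -> np.ndarray:
    """Placeholder for a learned encoder mapping queries into the shared space."""
    rng_q = np.random.default_rng(abs(hash(query)) % (2**32))
    return rng_q.normal(size=DIM).astype(np.float32)

def associative_lookup(query: str, k: int = 3) -> list[str]:
    """Return the k store entries most similar to the query (cosine similarity)."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    scores = (store_vectors @ q) / np.linalg.norm(store_vectors, axis=1)
    top = np.argsort(scores)[-k:][::-1]
    return [store_payloads[i] for i in top]

print(associative_lookup("how do quantum computers factor integers?"))
```

Task-local, opaque skills and this kind of shared, queryable store are complements: the former for acting, the latter for broad knowledge that any agent can draw on.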
But does this terminal goal exist today? The proper (and to some extent actual) goal of firms is widely considered to be maximizing share value, but this is manifestly not the same as maximizing shareholder value — or even benefiting shareholders. For example:
I hold shares in Company A, which maximizes its share value through actions that poison me or the society I live in. My shares gain value, but I suffer net harm.
Company A increases its value by locking its customers into a dependency relationship, then exploits that relationship. I hold shares, but am also a customer, and suffer net harm.
I hold shares in A, but also in competing Company B. Company A gains incremental value by destroying B, my shares in B become worthless, and the value of my stock portfolio decreases (in this example, the incremental gain to A is smaller than the value destroyed in B). Note that diversified portfolios will typically include holdings of competing firms, each of which takes no account of the value of the other.
Equating share value with shareholder value is obviously wrong (even when considering only share value!) and is potentially lethal. This conceptual error both encourages complacency regarding the alignment of corporate behavior with human interests and undercuts efforts to improve that alignment.