Paragraph:
When a bounded agent attempts a task, we observe some degree of success. But that degree of success depends on many factors that are not “part of” the agent, i.e., factors outside the Cartesian boundary that we (the observers) choose to draw for modeling purposes. These include power, luck, task difficulty, assistance, and so on. If we are concerned with the agent as a learner and don’t count knowledge as part of the agent, then factors like knowledge, skills, and beliefs are also externalized. Applied rationality is the result of attempting to distill this big, complicated mapping from (agent, power, luck, task, knowledge, skills, beliefs, …) → success down to just agent → success. This lets us assign each agent a one-dimensional score: “how well do you achieve goals overall?” Note that for no-free-lunch reasons, this already-fuzzy notion is further fuzzified by weighting tasks according to what the observer happens to care about.
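A toy sketch of that distillation, to make the compression concrete: average success over external factors (here, a random luck term and a random task draw), weighting tasks by how much the observer cares about them. Everything here (the task list, the luck range, the care weights, the names) is invented purely for illustration; the agent is reduced to a single “skill” scalar, which is of course exactly the kind of oversimplification the paragraph is gesturing past.

```python
import random

random.seed(0)

# Invented toy tasks: each has a difficulty and an observer care-weight.
TASKS = [
    {"name": "easy", "difficulty": 0.2, "care": 1.0},
    {"name": "hard", "difficulty": 0.8, "care": 3.0},
]

def run_task(skill, task, luck):
    # Success is 1 if the agent's skill plus luck clears the difficulty bar.
    return 1.0 if skill + luck >= task["difficulty"] else 0.0

def applied_rationality(skill, n_samples=10_000):
    """Collapse (agent, task, luck, ...) -> success down to agent -> success
    by averaging over everything external, weighted by observer care."""
    total = weight = 0.0
    for _ in range(n_samples):
        task = random.choice(TASKS)          # externalized: task difficulty
        luck = random.uniform(-0.1, 0.1)     # externalized: environmental luck
        w = task["care"]                     # observer's weighting over tasks
        total += w * run_task(skill, task, luck)
        weight += w
    return total / weight                    # the one-dimensional score
```

The returned scalar is the “how well do you achieve goals overall?” number; all the other inputs to the original mapping have been marginalized out.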
Sentence:
Applied rationality is a property of a bounded agent: it attempts to describe how successful that agent tends to be when you throw tasks at it, while controlling for both “environmental” factors such as luck and “epistemic” factors such as beliefs.
Follow-up:
In this framing, it’s pretty easy to define epistemic rationality analogously: compress from (agent, everything else) → prediction loss down to just agent → prediction loss.
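The same toy move works for the epistemic version: score an agent by its expected prediction loss, averaged over the environments it might find itself in. Again, the whole setup (coin-flip environments, an agent reduced to a single fixed probability estimate, log loss as the loss function) is an invented illustration, not a claim about how this should actually be formalized.

```python
import math
import random

random.seed(0)

# Invented toy environments: each is a possible true heads-probability.
ENVIRONMENTS = [0.3, 0.5, 0.9]

def log_loss(p_heads, outcome):
    # Standard log loss on a binary outcome, clamped to avoid log(0).
    p = p_heads if outcome else 1.0 - p_heads
    return -math.log(max(p, 1e-12))

def epistemic_rationality(agent_p_heads, n_samples=20_000):
    """Collapse (agent, environment, data, ...) -> prediction loss down to
    agent -> prediction loss by averaging over environments. Lower is better."""
    total = 0.0
    for _ in range(n_samples):
        true_p = random.choice(ENVIRONMENTS)   # externalized: the world
        outcome = random.random() < true_p     # externalized: the data
        total += log_loss(agent_p_heads, outcome)
    return total / n_samples
```

An agent whose estimate sits near the environment-averaged frequency earns a lower expected loss than one with a badly miscalibrated estimate, which is the one-dimensional score this compression produces.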
However, in retrospect I think the definition I gave here is nearly identical to how I would have defined “intelligence”, just without reference to the “mapping a broad start distribution to a narrow outcome distribution” idea (optimization power) that I usually associate with that term. If anyone can pin down the specific difference between applied rationality and intelligence, I would be interested.
Maybe you also have to control for “computational factors” like raw processing power, or something? But then what’s left inside the Cartesian boundary? Just the algorithm? That seems like it has potential, but still feels messy.