I usually use “rational” in a somewhat jargonish sense to refer to things that have goals and work to achieve them in a broad-context way. This is pretty dang distinct from “optimal,” mainly because calling something optimal implies something to optimize, while goals can be anything at all.
Do you mean the use of “rational” as in the “rational shoe buying” threads? In that case, sure.
I would quite happily call a system that has a goal and works to achieve that goal in a broad-context way an optimizer.
Right. So if we replace “rational” with “optimized,” our “rational agent” becomes an “optimized agent.”
RRRrrnt. :P
(shrug) Sure.
If we try to do text substitutions without a semantic understanding of what’s going on, we get nonsense or worse. This should not be surprising. I’m not actually proposing a regexp search-and-replace, I’m proposing a lexical shift.
What we frequently refer to here as a “rational agent” isn’t an optimized agent, it’s an optimizing agent—one that makes the decisions that most effectively implement its goals.
What we frequently refer to here as a “rational choice” is both an optimizing choice (that is, one which, when implemented, effects the chooser’s goals) and an optimized choice (that is, of the set of available choices, the one with the highest chance of effecting the chooser’s goals). It might also be an optimal choice (that is, the one that actually best effects the chooser’s goals).
A chooser might pick an option at random which turns out to be (by sheer dumb luck) the optimal choice. Their choice would still be optimized, though the process they used to select it was not a reliable optimizing process.
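The three senses above might be sketched as a toy program (the option names, chance estimates, and payoffs here are all invented for illustration):

```python
import random

# Each option has an estimated chance of effecting the goal (what the
# chooser can know in advance) and an actual payoff (what the world
# delivers). All names and numbers are hypothetical.
options = {
    "A": {"est_chance": 0.9, "actual_payoff": 5},
    "B": {"est_chance": 0.4, "actual_payoff": 9},
    "C": {"est_chance": 0.1, "actual_payoff": 1},
}

# "Optimized" choice: of the available options, the one with the
# highest estimated chance of effecting the chooser's goals.
optimized = max(options, key=lambda k: options[k]["est_chance"])

# "Optimal" choice: the one that actually best effects the goals.
optimal = max(options, key=lambda k: options[k]["actual_payoff"])

# A random pick is not the output of an optimizing process, but by
# sheer dumb luck it may coincide with the optimal choice.
lucky = random.choice(list(options))

print(optimized)          # A
print(optimal)            # B
print(lucky == optimal)   # True or False, depending on the draw
```

The point of the sketch is that “optimized” and “optimal” can name different options whenever the chooser’s estimates diverge from the actual outcomes, and that neither label says anything about the process that produced the pick.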
This seems pretty straightforward and useful to me, which is why I’m adopting this language.
I endorse other people similarly adopting language that seems straightforward and useful to them.
I am reminded of one of the early videos in Norvig and Thrun’s recent online AI class, where “optimal” was used in two different senses in rapid succession — to mean “the algorithm yields the shortest route” and “the algorithm executes in the best time”. This yielded some confusion for a friend of mine, who assumed that the speaker meant that these were both aspects of some deeper definition of “optimal” which would then be explained. No such explanation was forthcoming.