Maximizers don’t take the proven optimal path; they take action once the EV of further analysis drops below that of the most valuable path found so far. In many situations there is no guarantee that an optimal path even exists, and spending resources and opportunities on proving that you will take the best path is not how you maximize. The situation itself changes while you search for the optimal path through it.
> Maximizers don’t take the proven optimal path, they take action when the EV of analyzing actions becomes lower than the current most valuable path.
This is a conception of maximizers that I generally like, and it is accurate if “cost of analysis” is part of the objective function, but it’s important to note that this is not the most generic class of maximizers; it is a subset of that class. Note that any maximizer that comes up with a proof that it has found an optimal solution implicitly knows that the EV of continuing to analyze actions is lower than going ahead with that solution.
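As an illustration of that subclass of maximizers, here is a toy sketch (the function name and the EV heuristic are mine, not from the discussion): an agent evaluates candidate actions one at a time and stops as soon as a crude estimate of the expected gain from one more evaluation falls below the cost of performing it.

```python
import random

def act_when_analysis_stops_paying(actions, value, eval_cost):
    """Toy maximizer with 'cost of analysis' in the loop: evaluate
    candidate actions one at a time, stopping as soon as the estimated
    expected gain from one more evaluation drops below its cost."""
    random.seed(1)
    pool = list(actions)
    random.shuffle(pool)
    best_action, best_value = pool[0], value(pool[0])
    seen = [best_value]
    for a in pool[1:]:
        # Crude EV heuristic: among n+1 exchangeable draws, a fresh draw
        # is the best with probability 1/(n+1); bound the gain if it is
        # by the spread of values observed so far.
        n = len(seen)
        spread = max(seen) - min(seen) if n > 1 else best_value
        expected_gain = spread / (n + 1)
        if expected_gain < eval_cost:
            break  # acting now beats paying for more analysis
        v = value(a)
        seen.append(v)
        if v > best_value:
            best_action, best_value = a, v
    return best_action
```

The point of the sketch is only the stopping rule: the agent never proves anything about the actions it left unevaluated; it just decides that evaluating them is no longer worth the cost.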
I think what you have in mind is more typically referred to as an “optimizer,” like in “metaheuristic optimization.” Tabu search isn’t guaranteed to find you a globally optimal solution, but it’ll get you a better solution than you started with faster than other approaches, and that’s what people generally want. There’s no use taking five years to produce an absolute best plan for assigning packages to trucks going out for delivery tomorrow morning.
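For concreteness, a minimal version of tabu search on a toy problem (this is a bare-bones sketch of the general scheme, not any production implementation): move greedily to the best neighbor that isn’t on a short memory list of recently visited states, accepting worsening moves when necessary, and keep the best state seen overall.

```python
import random

def tabu_search(start, neighbors, cost, iters=200, tabu_size=10):
    """Minimal tabu search: repeatedly move to the best non-tabu
    neighbor (even if it's worse), remembering recently visited states
    so the search doesn't cycle; return the best state ever seen."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best non-tabu move
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # forget the oldest tabu state
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: maximize the number of 1s in a bit string
# (cost = number of 0s); neighbors differ by one flipped bit.
def flip_neighbors(bits):
    return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

random.seed(0)
start = tuple(random.randint(0, 1) for _ in range(12))
best = tabu_search(start, flip_neighbors, lambda b: b.count(0))
```

On a toy landscape like this one the search reaches the optimum easily; the relevant point is that nothing in the loop certifies optimality — it just returns the best state found within the iteration budget.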
But the distinction that Stuart_Armstrong cares about holds: maximizers (as I defined them, without taking analysis costs into consideration) seem easy to analyze, while optimizers seem hard to analyze. I can figure out the properties that an absolute best solution has, and there’s a fairly small set of those, but I might have a much harder time figuring out the properties of a solution returned by running tabu search overnight. Then again, that might just be a matter of perspective: I can actually run tabu search overnight a bunch of times, but I might not be able to actually compute the set of absolute best solutions.
My intuition is telling me that resource costs are relevant to an agent whether or not they appear as a term in its objective function. Omohundro’s instrumental goal of efficiency...?
Ah; I’m not requiring a maximizer to be a general intelligence, and my intuitions are honed on things like CPLEX.