best seems to do a lot of the work there.
I’m not sure what you mean. What is “best” is easily arrived at. If you’re a financier and your goal is to make money, then any formal statement of your decision will maximize money. If you’re a swimmer and your goal is to win an Olympic gold medal, then a formal statement of your decision will obviously include “win gold medal”. Part of the plan to execute it may include “beat the current world record for swimming in my category”, but “best” isn’t doing the heavy lifting here; the actual formal statement that encapsulates all the factors, such as the milestones, is.
And if someone doesn’t know what they mean when they think of what is best, then the statement holds true: if you don’t know what is “best”, then you don’t know what practical heuristics will deliver “good enough”.
To put it another way: what are the situations where not defining in clear terms what is best still leads to well-constructed heuristics for finding the best decision in practice? (I will undercut myself: there is something to be said for exploration [1] and “F*** Around and Find Out” with no particular goal in mind.)
[1] “Bosh! Stephen said rudely. A man of genius makes no mistakes. His errors are volitional and are the portals of discovery.” - Ulysses, James Joyce
is your only goal in life to make money?
is your only goal in life to win a gold medal?
and if they are, how do you define the direction such that you’re sure that among all possible worlds, maximizing this statement actually produces the world that maxes out goal-achievingness?
that’s where decision theories seem to me to come in. the test cases of decision theories are situations where maxing out, e.g., CDT does not in fact produce the highest-goal-score world. that seems to me to be where the difference Cole is raising comes up: if you’re merely moving in the direction of good worlds, you can have more complex strategies that potentially make less sense but get closer to the best world, without having properly defined a single mathematical statement whose maximum is that best world. argmax(CDT(money)) may be less than genetic_algo(policy, money, iters=1b) even though argmax is a strict superlative, if the genetic algo finds something closer to, e.g., argmax(FDT(money)).
edit: in other words, I’m saying “best” as opposed to “good”. what is good is generally easily arrived at. it’s not hard to find situations where what is best is intractable to calculate, even if you’re sure you’re asking for it correctly.
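To make the argmax-versus-policy-search point concrete, here is a minimal Python sketch of a Newcomb-style case. It is my own toy illustration, not anything from the thread: the setup (a perfect predictor, the $1,000/$1,000,000 payoffs) and every function name in it are assumptions chosen for the example. CDT holds the already-made prediction fixed and argmaxes over actions, so it two-boxes; a brute-force search over whole policies, standing in for the genetic_algo above, scores each policy by the world it produces and lands on one-boxing.

```python
# Toy Newcomb-style setup (assumed for illustration): a perfect predictor
# fills the opaque box with $1,000,000 iff it predicts the agent one-boxes.

def payoff(policy):
    """Score a whole policy by the world it brings about, predictor included."""
    opaque = 1_000_000 if policy == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if policy == "two-box" else 0)

def cdt_choice(box_contents):
    """CDT: the prediction already happened, so hold the box contents fixed
    and argmax over actions. Two-boxing causally dominates either way."""
    actions = {"one-box": box_contents,
               "two-box": box_contents + 1_000}
    return max(actions, key=actions.get)

def policy_search():
    """Stand-in for genetic_algo(policy, money, ...): search over whole
    policies, scoring each with payoff(). Brute force suffices here,
    since there are only two candidate policies."""
    return max(["one-box", "two-box"], key=payoff)

print(cdt_choice(box_contents=0))  # "two-box": this agent walks away with $1,000
print(policy_search())             # "one-box": this agent walks away with $1,000,000
```

The sense in which the search beats the strict superlative is that it scores policies by their consequences with the predictor’s response included, which is what makes it land nearer argmax(FDT(money)) than argmax(CDT(money)).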
By using the suffix “-(e)st”: “the fastest”, “the richest”, “the purple-est”, “the highest”, “the westernmost”. That’s the easy part: defining theoretically what is best. Mapping that theory to reality is hard.
I don’t know what is in theory the best possible life I can live, but I do know ways that I can improve my life significantly.
Can you rephrase that? You’re mentioning theory and possibility at once, which sounds like an oxymoron to me: that which is in theory best implies that which is impossible, or at least unlikely. If you can rephrase it, I’ll probably be able to understand what you mean.
Also, if you had a ‘magic wand’ and could change a whole raft of things at once, do you have a vision of your “best” life that you prefer? Not necessarily a likely or even possible one, but one that, of all the fantasies you can imagine, is preeminent? That seems to me a very easy way to define the “best”: it’s the one that the agent wants most. I assume most people have visions of their own “best” lives; am I a rarity in this? Or do most people just kind of never think about what-ifs and have fantasies? And isn’t that fantasy, or the model of the self and your own preferences that shapes it, going to be part of the same model that dictates what you “know” would improve your life significantly?
Because if you consider it an improvement, then you see it as being better. It’s basic English: Good, Better, Best.