is your only goal in life to make money?
is your only goal in life to win a gold medal?
and if they are, how do you define the direction such that you’re sure that among all possible worlds, maximizing this statement actually produces the world that maxes out goal-achievingness?
that’s where decision theories seem to me to come in. the test cases of decision theories are situations where maxing out, e.g., CDT does not in fact produce the highest-goal-score world. that seems to me to be where the difference Cole is raising comes up: if you’re merely moving in the direction of good worlds, you can use more complex strategies that potentially make less sense but get closer to the best world, without having properly defined a single mathematical statement whose maximum is that best world. argmax(CDT(money)) may be less than genetic_algo(policy, money, iters=1b) even though argmax is a strict superlative, if the genetic algo finds something closer to, e.g., argmax(FDT(money)).
edit: in other words, I’m saying “best” as opposed to “good”. what is good is generally easily arrived at. it’s not hard to find situations where what is best is intractable to calculate, even if you’re sure you’re asking for it correctly.
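a toy sketch of the gap I mean, in python. the Newcomb-style payoff and the names (cdt_expected_money, true_payoff, policy_search) are made up for illustration, not anyone’s canonical formalization: argmax over actions under a CDT-style score two-boxes and gets 1,000, while a dumb search that just keeps whichever policy actually earns more money converges on one-boxing and gets 1,000,000.

import random

ACTIONS = ["one-box", "two-box"]

def true_payoff(policy):
    # money you actually walk away with when the predictor can read your policy:
    # the opaque box holds 1,000,000 only if the policy it simulates would one-box.
    opaque = 1_000_000 if policy() == "one-box" else 0
    visible = 1_000 if policy() == "two-box" else 0
    return opaque + visible

def cdt_expected_money(action, prob_box_filled=0.5):
    # CDT-style score: treats the box contents as already fixed, so grabbing the
    # visible 1,000 strictly dominates no matter what prob_box_filled is.
    return 1_000_000 * prob_box_filled + (1_000 if action == "two-box" else 0)

# argmax(CDT(money)): a strict superlative over actions, under the CDT objective
cdt_action = max(ACTIONS, key=cdt_expected_money)   # picks "two-box"
cdt_money = true_payoff(lambda: cdt_action)         # earns 1_000

# crude stand-in for genetic_algo(policy, money, iters=...): no argmax anywhere,
# just keep whichever candidate policy has actually earned the most money so far.
def policy_search(iters=1_000):
    best_policy, best_money = None, float("-inf")
    for _ in range(iters):
        choice = random.choice(ACTIONS)
        candidate = lambda choice=choice: choice
        money = true_payoff(candidate)
        if money > best_money:
            best_policy, best_money = candidate, money
    return best_policy, best_money

best_policy, search_money = policy_search()
print(f"argmax(CDT(money)) earns {cdt_money}; policy search earns {search_money}")

if the objective you hand to argmax already scores policies by what they actually earn (the FDT-ish version), the argmax and the search agree; the gap only shows up when the statement you can write down cleanly isn’t quite the thing you want maxed.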
how do you define the direction such that you’re sure that among all possible worlds, maximizing this statement actually produces the world that maxes out goal-achievingness?
by using the suffix “-(e)st”. “The fastest”, “the richest”, “the purple-est”, “the highest”, “the westernmost”. That’s the easy part: defining theoretically what is best. Mapping that theory to reality is hard.