Reflecting on how I make morally good choices vs. morally bad ones, I noticed that the thing I lean on most is simply not evaluating the bad ones. This effectively means good choices pay for themselves up front in computational savings.
I’m not sure whether this counts as dark arts-ing myself; on the one hand it is clearly a case of motivated stopping. On the other hand I have a solid prior that there are many more wrong choices than right ones, which implies evaluating them fairly would be stupidly expensive; that in turn implies the don’t-compute-evil rule is pretty efficient even if it were arbitrarily chosen.
What if it’s more general—say, a prior to first try actions you’ve used before that have worked well? (I don’t have a go-to example of something good to do that people usually don’t do. Just ‘most people don’t go skydiving, and most people don’t think about going skydiving.’)