It’s interesting that we came upon the same idea from different directions. For me it fell out of Tegmark’s multiverse. What could consequences be, except logical consequences, if all mathematical structures exist? The fact that you said it would take a long series of posts to explain your idea threw me off, and I was kind of surprised when you said congratulations. I thought I might be offering a different solution. (I spent days polishing the article in the expectation that I might have to defend it fiercely.)
And yet you found a reflectively consistent decision algorithm long before you found a decision-system-algorithm-finding algorithm. That’s not a coincidence. The latter problem is much harder.
Umm, I haven’t actually found a reflectively consistent decision algorithm yet, since the proposal has huge gaps that need to be filled. I have little idea how to handle logical uncertainty in a systematic way, or whether expected utility maximization makes sense in that context.
The rest of your paragraph makes good points. But I’m not sure what you mean by “metaethics, a solved problem”. Can you give a link?
One way to approach the meta problem may be to consider the meta-meta problem: why did evolution create us with so much “common sense” on these types of problems? Why do we have the meta algorithm apparently “built in” when it doesn’t seem like it would have offered much advantage in the ancestral environment?
http://wiki.lesswrong.com/wiki/Metaethics_sequence

(Observe that this page was created after you asked the question. And I’m quite aware that it needs a better summary—maybe “A Natural Explanation of Metaethics” or the like.)
> The fact that you said it would take a long series of posts to explain your idea threw me off, and I was kind of surprised when you said congratulations.
“Decide as though your decision is about the output of a Platonic computation” is the key insight that started me off—not the only idea—and considering how long philosophers have wrangled over this, there’s a whole edifice of justification that would be needed for a serious exposition. Maybe come Aug 26th or thereabouts I’ll post a very quick summary of e.g. integration with Pearl’s causality.
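One way to see what “your decision is the output of a Platonic computation” buys you is a toy Newcomb’s problem, a standard test case in this literature. The sketch below is purely illustrative (the payoffs, function names, and predictor setup are my own assumptions, not anything from the thread): the predictor and the agent consult the *same* decision function, so selecting that function’s output fixes the prediction too.

```python
# Toy Newcomb's problem. The "Platonic computation" is the policy function;
# both the predictor and the agent get their answer by running it, so an
# agent that treats its choice as that function's output one-boxes.

def one_box_policy():
    """A decision procedure that one-boxes."""
    return "one-box"

def two_box_policy():
    """A decision procedure that two-boxes."""
    return "two-box"

def newcomb_payoff(policy):
    """Payoff when the predictor simulates the same computation the agent runs."""
    prediction = policy()   # predictor's simulation of the decision
    choice = policy()       # the agent's actual decision: same output
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    if choice == "one-box":
        return opaque_box
    return opaque_box + transparent_box

print(newcomb_payoff(one_box_policy))   # 1000000
print(newcomb_payoff(two_box_policy))   # 1000
```

Because prediction and choice here are two calls to one computation, there is no way to “choose” two-boxing without also having been predicted to two-box, which is the structural point the insight is making.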
The usual reason for building things in is that it reduces trial-and-error learning. That’s worthwhile when the errors would be expensive in fitness terms.
Is there something wrong with that explanation in this context?