Why didn’t you mention earlier that your timeless decision theory mainly had to do with logical uncertainty?

Because I was thinking in terms of saving it for a PhD thesis or some other publication, and if you get that insight the rest follows pretty fast—did for me at least. Also I was using it as a test for would-be AI researchers: “Here’s Newcomblike problems, here’s why the classical solution doesn’t work for self-modifying AI, can you solve this FAI problem which I know to be solvable?”

I still think (B) is true, BTW. We should devote some time and resources to thinking about how we are solving these problems (and coming up with questions in the first place). Finding that algorithm is perhaps more important than finding a reflectively consistent decision algorithm, if we don’t want an AI to be stuck with whatever mistakes we might make.

And yet you found a reflectively consistent decision algorithm long before you found a decision-system-algorithm-finding algorithm. That’s not coincidence. The latter problem is much harder. I suspect that even an informal understanding of parts of it would mean that you could find timeless decision theory as easily as falling backward off a tree—you just run the algorithm in your own head. So with very high probability you are going to start seeing through the object-level problems before you see through the meta ones. Conversely, I am EXTREMELY skeptical of people who claim they have an algorithm to solve meta problems but who still seem confused about object problems. Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.

The meta algorithms are important, but by their very nature, knowing even a little about the meta problem tends to make the object problem much less confusing, and you will progress on the object problem faster than on the meta problem. Again, that’s not saying the meta problem is unimportant. It’s just saying that it’s really hard to end up in a state where meta has really truly run ahead of object, though it’s easy to get illusions of having done so.

It’s interesting that we came upon the same idea from different directions. For me it fell out of Tegmark’s multiverse. What could consequences be, except logical consequences, if all mathematical structures exist? The fact that you said it would take a long series of posts to explain your idea threw me off, and I was kind of surprised when you said congratulations. I thought I might be offering a different solution. (I spent days polishing the article in the expectation that I might have to defend it fiercely.)

And yet you found a reflectively consistent decision algorithm long before you found a decision-system-algorithm-finding algorithm. That’s not coincidence. The latter problem is much harder.

Umm, I haven’t actually found a reflectively consistent decision algorithm yet, since the proposal has huge gaps that need to be filled. I have little idea how to handle logical uncertainty in a systematic way, or whether expected utility maximization makes sense in that context.
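To make "logical uncertainty" concrete, here is a toy sketch of my own (not a proposal from this thread): the parity of some far-out digit of pi is fully determined by logic, yet an agent that cannot compute it may still assign a subjective credence and maximize expected utility over it. The function names and the even-odds bet are illustrative assumptions.

```python
# Toy illustration of logical uncertainty: betting on a logical fact
# (the parity of a distant digit of pi) that the agent cannot compute.
# The truth value is already fixed; only the agent's credence varies.
from fractions import Fraction

# Subjective credence that the digit is even -- a "probability"
# assigned to a statement of pure mathematics.
p_even = Fraction(1, 2)

def expected_utility(bet_on_even: bool, stake: int = 10) -> Fraction:
    """Expected payoff of staking `stake` at even odds on the digit's parity."""
    p_win = p_even if bet_on_even else 1 - p_even
    return p_win * stake + (1 - p_win) * (-stake)

print(expected_utility(True))   # prints 0: at credence 1/2 both bets break even
```

Whether this naive move (treating logical facts like empirical unknowns) is actually coherent is exactly the open question the paragraph above raises.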

The rest of your paragraph makes good points. But I’m not sure what you mean by “metaethics, a solved problem”. Can you give a link?

One way to approach the meta problem may be to consider the meta-meta problem: why did evolution create us with so much “common sense” on these types of problems? Why do we have the meta algorithm apparently “built in” when it doesn’t seem like it would have offered much advantage in the ancestral environment?

http://wiki.lesswrong.com/wiki/Metaethics_sequence

(Observe that this page was created after you asked the question. And I’m quite aware that it needs a better summary—maybe “A Natural Explanation of Metaethics” or the like.)

The fact that you said it would take a long series of posts to explain your idea threw me off, and I was kind of surprised when you said congratulations

“Decide as though your decision is about the output of a Platonic computation” is the key insight that started me off—not the only idea—and considering how long philosophers have wrangled over this, there’s a whole edifice of justification that would be needed for a serious exposition. Maybe come Aug 26th or thereabouts I’ll post a very quick summary of e.g. integration with Pearl’s causality.
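The insight can be illustrated on the Newcomb-like problem mentioned earlier. What follows is my own minimal sketch, not Yudkowsky's actual formalism: it assumes a perfect predictor that runs the same decision computation as the agent, so choosing an output fixes that computation's value everywhere it appears, including inside the predictor. The payoffs and the `agent` function are illustrative.

```python
# Toy Newcomb's problem. A predictor filled an opaque box with
# $1,000,000 iff it predicted the agent one-boxes; a transparent
# box always holds $1,000.

def agent() -> str:
    """Decision computation: returns 'one-box' or 'two-box'."""
    best_choice, best_utility = "", float("-inf")
    for choice in ("one-box", "two-box"):
        # Treat this choice as the output of the Platonic computation,
        # which the predictor also evaluated -- so the opaque box's
        # contents covary with the choice.
        opaque = 1_000_000 if choice == "one-box" else 0
        utility = opaque if choice == "one-box" else opaque + 1_000
        if utility > best_utility:
            best_choice, best_utility = choice, utility
    return best_choice

print(agent())  # prints "one-box": $1,000,000 beats $1,000
```

A causal decision theorist instead holds the opaque box's contents fixed while varying the choice, and so two-boxes; the difference is entirely in whether the predictor's copy of the computation is treated as correlated with the agent's.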

The usual reason for building things in is that it reduces trial-and-error learning. That’s good if the errors are expensive and have a negative impact on fitness.

Is there something wrong with that explanation in this context?
