BTW, general question about decision theory. There appears to have been academic study of decision theory for over a century, and causal and evidential decision theory were set out in 1981. Newcomb’s paradox was set out in 1969. Yet it seems as though no one thought to explore the space beyond these two decision theories until Eliezer proposed TDT, and it seems as if there is a 100% disconnect between the community exploring new theories (which is centered around LW) and the academic decision theory community. This seems really, really odd—what’s going on?
This is simply not true. Robert Nozick (who introduced Newcomb’s problem to philosophers) compared/contrasted EDT and CDT at least as far back as 1993. Even back then, he noted their inadequacy on several decision-theoretic problems and proposed some alternatives.
Me being ignorant of something seemed like a likely part of the explanation—thanks :) I take it you’re referencing “The Nature of Rationality”? Not read that, I’m afraid. If you can spare the time I’d be interested to know what he proposes - thanks!
I haven’t read The Nature of Rationality in quite a long time, so I won’t be of much help. For a very simple and short introduction to Nozick’s work on decision theory, you should read this (PDF).
There were plenty of previous theories trying to go beyond CDT or EDT; they just weren’t satisfactory.
This paper talks about reflexive decision models and claims to develop a form of CDT which one-boxes.
It’s in my to-read list but I haven’t got to it yet, so I’m not sure whether it’s of interest, but I’m posting it just in case (it could be a while until I have time to read it, so I won’t be able to post a more informed comment any time soon).
Though this theory post-dates TDT and so isn’t interesting from that perspective.
Dispositional decision theory :P
… which I cannot find a link to the paper for, now. Hm. But basically it was just TDT, with less awareness of why.
EDIT: Ah, here it was. Credit to Tim Tyler.
I checked it. Not the same thing.
It should be noted that Newcomb’s problem was considered interesting in Philosophy in 1969, but decision theories were studied more in other fields—so there’s a disconnect between the sorts of people who usually study formal decision theories and that sort of problem.
Decision theory can be, and is, applied to a variety of problems here. It’s just that an AI may face Newcomb-like problems, and in particular we want to ensure one-boxing-like behavior on the part of the AI.
The rationale for TDT-like decision theories is even more general, I think. There’s no guarantee that our world contains only one copy of something. We want a decision theory that would let the AI cooperate with its copies or logical correlates, rather than wage pointless wars.
Constructing a rigorous mathematical foundation for decision theory, one that explains what a decision problem, a decision, or a goal actually is, is potentially more useful than resolving any given informally specified class of decision problems.
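To make that concrete, here is a minimal sketch of what pinning those terms down might look like once formalized (the names and structure below are my own illustrative framing, not anything proposed in the thread): a decision problem pairs actions with possible states and an outcome map, a goal is a utility function over outcomes, and a decision is whatever action a rule selects.

```python
# Hypothetical sketch, not from the thread: a decision problem pairs actions
# with possible states and an outcome map; a "goal" is a utility function over
# outcomes; a "decision" is just the action some rule picks out.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = str
Action = str
Outcome = str

@dataclass
class DecisionProblem:
    actions: List[Action]
    prior: Dict[State, float]                    # beliefs over states of the world
    outcome: Callable[[Action, State], Outcome]  # what happens if I do a in state s
    utility: Callable[[Outcome], float]          # the goal, made explicit

def decide(problem: DecisionProblem) -> Action:
    """Return the action maximizing expected utility under the prior."""
    def eu(action: Action) -> float:
        return sum(p * problem.utility(problem.outcome(action, s))
                   for s, p in problem.prior.items())
    return max(problem.actions, key=eu)
```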
What is an example of such a real-world problem?
Negotiations with entities who can read the AI’s source code.
Given the week+ delay in this response, it’s probably not going to see much traffic, but I’m not convinced “reading” source code is all that helpful. Omega is posited to have nearly god-like abilities in this regard, but since this is a rationalist discussion, we probably have to rule out actual omnipotence.
If Omega intends to simply run the AI on spare hardware it has, then it has to be prepared to validate (in finite time and memory) that the AI hasn’t so obfuscated its source as to be unintelligible to rational minds. It’s also possible that the source of an AI is rather simple but that it is dependent on a large amount of input data in the form of a vast sea of numbers. I.e., the AI in question could be encoded as an ODE system integrator that’s reliant on a massive array of parameters to get from one state to the next. I don’t see why we should expect Omega to be better at picking out the relevant, predictive parts of these numbers than we are.
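As a purely illustrative sketch of that point (the toy agent below is mine, not anything from the thread): the readable part of such a program can be a few lines, with all of the behavior buried in the parameter array, so "reading the source" buys a predictor very little without actually re-running the numbers.

```python
# Hypothetical toy agent: the code is trivially readable, but its decision is
# determined by a huge parameter array. Predicting it from the source alone
# amounts to re-running the computation on the exact same numbers.

import numpy as np

rng = np.random.default_rng(0)
params = rng.normal(size=1_000_000)  # the "vast sea of numbers"

def decide(observation: np.ndarray, steps: int = 100) -> str:
    """Euler-integrate a parameter-driven ODE and threshold the final state."""
    state = observation.astype(float)
    weights = params[: state.size]
    for _ in range(steps):
        state = state + 0.01 * np.tanh(weights * state)  # dynamics set by params
    return "one-box" if state.sum() > 0 else "two-box"

print(decide(rng.normal(size=1000)))
```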
If the AI can hide things in its code or data, then it can hide functionality that tests to determine if it is being run by Omega or on its own protected hardware. In such a case it can lie to Omega just as easily as Omega can lie to the “simulated” version of the AI.
I think it’s time we stopped positing an omniscient Omega in these complications to Newcomb’s problem. They’re like epicycles on Ptolemaic orbital theory in that they continue a dead-end line of reasoning. It’s better to recognize that Newcomb’s problem is a red herring. Newcomb’s problem doesn’t demonstrate problems that we should expect AIs to solve in the real world. It doesn’t tease out meaningful differences between decision theories.
That is, what decisions on real-world problems do we expect to be different between two AIs that come to different conclusions about Newcomb-like problems?
You should note that every problem you list is a special case. Obviously, there are ways of cheating at Newcomb’s problem if you’re aware of salient details beforehand. You could simply allow a piece of plutonium to decay, and do whatever the resulting Geiger counter noise tells you to. That does not, however, support your thesis that Newcomb’s problem is a totally artificial problem with no logical intrusions into reality.
As a real-world example, imagine an off-the-shelf stock market optimizing AI. Not sapient, to make things simpler, but smart. When any given copy begins running, there are already hundreds or thousands of near-identical copies running elsewhere in the market. If it fails to predict their actions from its own, it will do objectively worse than it might otherwise do.
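To put rough numbers on that scenario (the toy model and figures below are mine, purely for illustration): when any one bot’s trade barely moves the price but a thousand copies all act on the same signal, ignoring the copies turns a profitable-looking trade into a loss.

```python
# Hypothetical toy model: N identical trading bots see the same signal. Each
# purchase pushes the price up, so the profit per bot shrinks as more copies
# buy. A copy-aware bot knows that whatever it decides, its copies decide too.

N_COPIES = 1000
EDGE = 5.0          # profit from the signal if you traded alone
IMPACT = 0.01       # price impact per copy that buys

def payoff_per_bot(num_buyers: int) -> float:
    """Profit for each buying bot after the aggregate price impact."""
    return EDGE - IMPACT * num_buyers

def naive_decision() -> bool:
    # "My own trade barely moves the price, so buy."
    return payoff_per_bot(1) > 0

def copy_aware_decision() -> bool:
    # "If I buy, all my copies buy; is the trade still worth it then?"
    return payoff_per_bot(N_COPIES) > 0

print("naive bots buy:", naive_decision(),
      "-> each earns", payoff_per_bot(N_COPIES if naive_decision() else 0))
print("copy-aware bots buy:", copy_aware_decision())
```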
I don’t see how your example is apt or salient. My thesis is that Newcomb-like problems are the wrong place to be testing decision theories because they do not represent realistic or relevant problems. We should focus on formalizing and implementing decision theories and throw real-world problems at them rather than testing them on arcane logic puzzles.
Well… no, actually. A good decision theory ought to be universal. It ought to be correct, and it ought to work. Newcomb’s problem is important, not because it’s ever likely to happen, but because it shows a case in which the normal, commonly accepted approach to decision theory (CDT) failed miserably. This ‘arcane logic puzzle’ is illustrative of a deeper underlying flaw in the model, which needs to be addressed. It’s also a flaw that’d be much harder to pick out by throwing ‘real world’ problems at it over and over again.
Seems unlikely to work out, to me. Humans evolved intelligence without Newcomb-like problems. Since humans are the only example of intelligence that we know of, it’s clearly possible to develop intelligence without Newcomb-like problems. Furthermore, the general theory seems to be that AIs will start dumber than humans and iteratively improve until they’re smarter. Given that, why are we so interested in problems like these (which humans don’t universally agree about the answers to)?
I’d rather AIs be able to help us with problems like “what should we do about the economy?” or even “what should I have for dinner?” instead of worrying about what we should do in the face of something godlike.
Additionally, human minds aren’t universal (assuming that universal means that they give the “right” solutions to all problems), so why should we expect AIs to be? We certainly shouldn’t expect this if we plan on iteratively improving our AIs.
Harsh crowd.
It might be nice to be able to see the voting history (not the voters’ names, but the number of up and down votes) on a comment. I can’t tell if my comments are controversial or just down-voted by two people. Perhaps even just the number of votes would be sufficient (e.g. −2/100 vs. −2/2).
If it helps: it’s a fairly common belief in this community that a general-purpose optimization tool is both far superior to, and more interesting to talk about than, a variety of special-purpose tools.
Of course, that doesn’t mean you have to be interested in general-purpose optimization tools; if you’re more interested in decision theory for dinner-menu or economic planners, by all means post about that if you have something to say.
But I suspect there are relatively few communities in which “why are you all so interested in such a stupid and uninteresting topic?” will get you much community approval, and this isn’t one of them.
I’m interested in general purpose optimizers, but I bet that they will be evolved from AIs that were more special purpose to begin with. E.g., IBM Watson moving from Jeopardy!-playing machine to medical diagnostic assistant with a lot of the upfront work being on rapid NLP for the J! “questions”.
Also, there’s no reason that I’ve seen here to believe that Newcomb-like problems give insights into how to develop decision theories that allow us to solve real-world problems. It seems like arguing about corner cases. Can anyone establish a practical problem that TDT fails to solve because it fails to solve these other problems?
Beyond this, my belief is that without formalization and programming of these decision frameworks, we learn very little. Asking what xDT does in some abstract situation seems, so far, very hand-wavy. Furthermore, it seems to me that the community is drawn to these problems because they are deceptively easy to state and talk about online, but minds are inherently complex, opaque, and hard to reason about.
I’m having a hard time understanding how correctly solving Newcomb-like problems is expected to advance the field of general optimizers. It seems out of proportion to the problems at hand to expect a decision theory to solve problems of this level of sophistication when the current theories don’t seem to obviously “solve” questions like “what should we have for lunch?”. I get the feeling that supporters of research on these theories assume that, of course, xDT can solve the easy problems, so let’s do the hard ones. And I think evidence for this assumption is very lacking.
That’s fair.
Again, if you are interested in more discussion about automated optimization on the level of “what should we have for lunch?” I encourage you to post about it; I suspect a lot of other people are interested as well.
Yeah, I might, but here I was just surprised by the down-voting for a contrary opinion. It seems like the thing we ought to foster, not hide.
As I tried to express in the first place, I suspect what elicited the disapproval was not the contrary opinion, but the rudeness.
Sorry. It didn’t seem rude to me. I’m just frustrated with where I see folks spending their time.
My apologies to anyone who was offended.
What would you expect? The usual: half-educated people making basic errors, not making sure their decision theories work on ‘trivial’ problems, not doing the due diligence to find flaws in their own ideas, hence announcing solutions to hard problems that others don’t announce. Same as asking why only some cold-fusion community has solved the world’s energy problems.
edit: actually, in all fairness, I think there may be ideas worth exploring in the work you see on LW. It is just that what you normally see published as ‘decision theory’ is pretty well formalized, and structured in such a way that one wouldn’t have to search an enormous space of possible flaws, possible steel-men, possible flaws in the steel-men, etc., to declare something invalid (that is the point of writing things formally and making mathematical proofs: you can expect to see if it’s wrong). I don’t see any to-the-point formal papers on TDT here.
Crackpot Decision Theories popular around here do not solve any real problem arising from laws of causality operating normally, so there’s no point studying them seriously.
Your question is like asking why there’s no academic interest in Harry Potter Physics or the Geography of Westeros.
Err, this would also predict no academic interest in Newcomb’s Problem, and that isn’t so.
Not counting philosophers, where’s this academic interest in Newcomb’s paradox?
Why are we not counting philosophers? Isn’t that like saying, “Not counting physicists, where’s this supposed interest in gravity?”
I think taw’s point was that Newcomb’s Problem has no practical applications, and would answer your question by saying that engineers are very interested in gravity. My answer to taw would be that Newcomb’s Problem is just an abstraction of Prisoner’s Dilemma, which is studied by economists, behavioral biologists, evolutionary psychologists, and AI researchers.
Prisoner’s Dilemma relies on causality; Newcomb’s Paradox is anti-causality. They’re as close to each other as astronomy and astrology.
The contents of Newcomb’s boxes are caused by the kind of agent you are—which is (effectively by definition of what ‘kind of agent’ means) mapped directly to the decision you will take.
Newcomb’s paradox can be called anti-causality only in some confused anti-compatibilist sense in which determinism is opposed to free will, and therefore “the kind of agent you are” must be opposed to “the decisions you make”—instead of absolutely correlating with them.
In what way is Newcomb’s Problem “anti-causality”?
If you don’t like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don’t have cash on you, so you tell the shopkeeper you’ll pay him tomorrow. If he thinks you’re telling the truth, he’ll give you the item now and let you come back tomorrow. If not, you lose a day’s worth of use, and so some utility.
So your best bet (if you’re selfish) is to tell him you’ll pay tomorrow, take the item, and never come back. But what if you’re a bad liar? Then you’ll blush or stammer or whatever, and you won’t get your good.
A regular causal (CDT) agent, however, having taken the item, will not come back the next day—and you know it, and it will show on your face. So in order to get what you want, you have to actually be the kind of person who respects their past selves’ decisions—a TDT agent, or a CDT agent with some pre-commitment system.
The above has the same attitude to causality as Newcomb’s Problem—specifically, it includes another agent rewarding you based on that agent’s calculations of your future behaviour. But it’s a situation I’ve been in several times.
EDIT: Grammar.
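A toy expected-utility calculation for the shopkeeper scenario above (the prices, delay cost, and accuracy figure below are my own illustrative assumptions, not from the comment): if the shopkeeper’s read of your face is even moderately accurate, genuinely being the kind of person who comes back beats intending to stiff him.

```python
# Hedged toy numbers for the shopkeeper example: credit is extended only if he
# reads you as the repaying kind, and his read of your face is right with
# probability ACCURACY. All values below are illustrative assumptions.

ITEM_VALUE = 10.0   # utility of having the item today
PRICE = 6.0         # what you pay tomorrow if you come back
DELAY_COST = 3.0    # utility lost if credit is refused and you wait a day
ACCURACY = 0.9      # how often the shopkeeper reads your intentions correctly

def expected_utility(intends_to_pay: bool) -> float:
    # Probability the shopkeeper judges you trustworthy:
    p_credit = ACCURACY if intends_to_pay else 1.0 - ACCURACY
    value_if_credit = ITEM_VALUE - (PRICE if intends_to_pay else 0.0)
    value_if_refused = ITEM_VALUE - PRICE - DELAY_COST  # buy tomorrow with cash
    return p_credit * value_if_credit + (1 - p_credit) * value_if_refused

print("repaying type:", expected_utility(True))   # 0.9*4 + 0.1*1 = 3.7
print("defector type:", expected_utility(False))  # 0.1*10 + 0.9*1 = 1.9
```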
This example is much like Parfit’s Hitchhiker in less extreme form.
I actually have some sympathy for your position that Prisoner’s Dilemma is useful to study, but Newcomb’s Paradox isn’t. The way I would put it is, as the problems we study increase in abstraction from real world problems, there’s the benefit of isolating particular difficulties and insights, and making it easier to make theoretical progress, but also the danger that the problems we pay attention to are no longer relevant to the actual problems we face. (See another recent comment of mine making a similar point.)
Given that we have little more than intuition to guide us on “how much abstraction is too much?”, it doesn’t seem unreasonable for people to disagree on this topic and pursue different approaches, as long as the possibility of real-world irrelevance isn’t completely overlooked.
So, you consider this notion of “causality” more important than actually succeeding? If I showed up in a time machine, would you complain I was cheating?
Also, dammit, karma toll. Sorry, anyone who wants to answer me.
Engineering.
Philosophy contains some useful parts, but it also contains massive amounts of bullshit. Starting, let’s say, here.
Decision theory is studied very seriously by mathematicians and others, and they don’t care at all for Newcomb’s Paradox.
Newcomb himself was not a philosopher.
I think Newcomb introduced it as a simplification of the prisoner’s dilemma. The game theory party line is that you should 2-box and defect. But the same logic says that you should defect in iterated PD, if the number of rounds is known. This third problem is popular in academia, outside of philosophy. It is not so popular in game theory, but the game theorists admit that it is problematic.
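For readers who haven’t seen the backward-induction argument spelled out, here is a minimal sketch (standard textbook reasoning, with my own toy payoffs): defection dominates in the known final round, which pins down the continuation and makes every earlier round an effective one-shot PD.

```python
# A minimal sketch of the backward-induction argument, with standard PD payoffs
# for the row player: (my_move, their_move) -> points.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def defection_dominates() -> bool:
    # D pays strictly more than C whatever the opponent does (5 > 3, 1 > 0).
    return all(PAYOFF[("D", o)] > PAYOFF[("C", o)] for o in "CD")

def subgame_perfect_plan(rounds: int) -> list:
    # Work backwards from the known final round: nothing played now can change
    # the later rounds (they are already pinned to mutual defection by the same
    # argument), so every round reduces to the one-shot game, where D dominates.
    assert defection_dominates()
    return ["D"] * rounds

print(subgame_perfect_plan(5))  # ['D', 'D', 'D', 'D', 'D']
```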
Yeah, assuming a universe where causality only goes forward in time and where your decision processes are completely hidden from outside, CDT works; but humans are not perfect liars, so they leak out information about the decision they’re about to make before they start to consciously act upon it, so the assumptions of CDT are only approximately true, and in some cases TDT may return better results.
CDT eats the donut “just this once” every time and gets fat. TDT says “I shouldn’t eat donuts” and does not get fat.
The deontological agent might say that. The TDT agent just decides “I will not eat this particular donut now”, and it so happens that it would also make the decision not to eat other donuts in similar circumstances.
The use of the term TDT or “timeless” is something that gets massively inflated to mean anything noble-sounding. All because there is one class of contrived circumstances in which the difference between CDT and TDT is that TDT will cooperate.
It might not be rigorous, but it’s still a good analogy IMO. Akrasia can be seen as you and your future self playing a non-zero-sum game, which in some cases has PD-like payoffs.
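As a rough illustration of that payoff structure (toy numbers of my own, not the commenter’s): each “self” gains a little by indulging now whatever the other selves do, yet everyone abstaining beats everyone indulging, which is the PD signature.

```python
# Illustrative toy numbers: each "self" gets a small immediate payoff from
# eating the donut, but every self shares the long-run health cost, which
# scales with how many selves indulge.

N_SELVES = 100        # today-you, tomorrow-you, ...
TREAT = 1.0           # immediate enjoyment of one donut
COST_PER_DONUT = 0.05 # long-run cost, borne by every self, per donut eaten

def payoff(i_eat: bool, others_eating: int) -> float:
    donuts = others_eating + (1 if i_eat else 0)
    return (TREAT if i_eat else 0.0) - COST_PER_DONUT * donuts

# Indulging dominates locally: eating is better for me whatever the others do...
print(payoff(True, 99), ">", payoff(False, 99))   # -4.0 > -4.95
print(payoff(True, 0), ">", payoff(False, 0))     # 0.95 > 0.0
# ...yet all-abstain beats all-indulge, which is the PD signature.
print(payoff(False, 0), ">", payoff(True, 99))    # 0.0 > -4.0
```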
Right. I was being a bit messy in describing the TDT thought process. The point is that TDT considers all donut-decisions as a single decision.
You might want to link to http://lesswrong.com/lw/4sh/how_i_lost_100_pounds_using_tdt/.
Or I can just lazily allude to it and then upvote you for linking it.
Yeah, I guessed that you were alluding to it, but I thought that people who hadn’t read it wouldn’t get the allusion.