A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory.
Why exactly? I mean, my intuition also tells me it’s wrong… but my intuition has a few assumptions that disagree with the proposed scenario. Let’s make sure the intuition does not react to a strawman.
For example, when in real life people “work like slaves for a future paradise”, the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader’s personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking up a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying the seeds of their paradise. Or simply their internal conflicts, or a lack of willpower to finish what they started.
The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is a very high prior probability that we missed something important. Which means that in fact we would not be working for a future paradise; we would only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it.
However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don’t have to worry about mistakes in your plans, because Omega has either verified their correctness, or is going to provide you with corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?
In other words, is your objection “in situation X the decision D is wrong”, or is it “the situation X is so unlikely that any decision D based on the assumption of X will in real life be wrong”?
When Omega enters a discussion, my interest in it leaves.
To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of a problem, their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.
Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.
I intended the general claim as stated. I don’t know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.
For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.
For practical purposes pronouncements like this are best interpreted as indications
For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.
For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.

This is evidently not a behavior you practice.
It is counterintuitive that you should slave for people you don’t know, perhaps because you can’t be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn’t fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.
However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don’t have to worry about mistakes in your plans, because Omega has either verified their correctness, or is going to provide you with corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?
I believe the answer is “yes”, but I had to think about that for a moment. I’m not sure how that’s relevant to the current discussion, though.
I think your real point might be closer to something like, “thought experiments are useless at best, and should thus be avoided”, but I don’t want to put words into anyone’s mouth.
My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn’t yield much of an insight about the real world”.
That makes sense, but I don’t think it’s what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.