Alright, firstly, thank you so much for taking the time to reply!
I think you may have misunderstood my main point. (But I also think that there’s a chance that you have correctly understood my point and disproven it but that I’m too stupid or uninformed to have noticed.)
My basic point:
Total utilitarianism, average utilitarianism, and Rawlsianism are all normative theories. They are concerned with how moral agents ought to act.
SSA and SIA are positive theories. They make actual predictions about the future. This means that in the fullness of time, we should know whether SSA or SIA is correct.
Rawlsianism, though a normative theory, seeks to justify itself rationally through the original position thought experiment. This thought experiment requires a good degree of “non-parochiality,” which makes me believe that to accept it one also would have to accept SIA. However, since SIA is a positive theory, this means that Rawlsianism must also be a positive, not a normative, theory. Take this as a paradox, if you will.
As for total and average utilitarianism, I don’t think that they necessitate either SSA or SIA being true. I believe I vaguely understand what you mean by “SIA corresponds to total utilitarianism”: under SIA, our reference class is all possible observers, and under total utilitarianism, we care about the expected value over all possible futures. However, it seems to me that this conflates the positive concept of beliefs about the future population with the normative concept of caring more about universes with larger populations. In other words, someone who believes in total utilitarianism need not believe that SIA is true because of their belief in total utilitarianism. However, I fear that the vagueness I alluded to previously is due to my lack of understanding, not to the vagueness of your point. Please enlighten me with regard to any more concrete meaning of the word “corresponds” as you use it.
As for Heideggerianism, I agree that it does not necessarily “correspond” to SSA in any way. However, I do feel that it is likely incompatible with SIA. As noted in my post, I am a bit uncertain about the consequences of Heideggerianism, so I am happy to change my argument to a more general “parochial” vs. “non-parochial” form, using the language you have suggested.
Finally, you referred to
> the overall utilitarian-ish tradition that produces anthropic theories
This leads me to suspect that there is a well-established connection between moral philosophy and anthropic reasoning that flows from the former to the latter. Please let me know if that is the case.
> However, since SIA is a positive theory, this means that Rawlsianism must also be a positive, not a normative, theory. Take this as a paradox, if you will.
I see what you mean: you’re testing a key assumption of liberalism or Heideggerianism, not the theory as a whole. Rawlsianism, however, also includes maximin, which seems more normative.
> This leads me to suspect that there is a well-established connection between moral philosophy and anthropic reasoning that flows from the former to the latter. Please let me know if that is the case.
If you are behind a veil of ignorance and optimize expected utility under SSA, then you will want the average utility in your universe to be as high as possible.
SIA is a bit more complicated. But we can observe that SIA gives the same posterior predictions if we add extra observers to each universe so they have the same population, and these observers observe something special that indicates they’re an extra observer, and their quality of life is 0 (i.e. they’re indifferent to existing). In this case, someone reasoning behind the veil of ignorance and maximizing expected utility will want total utility to be as high as possible; it’s better for someone to exist iff their personal utility exceeds 0, since they’re replacing an empty observer.
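That correspondence can be sketched numerically. Below is a toy model (the two candidate universes and their per-person utilities are made up for illustration) of a veil-of-ignorance reasoner computing expected personal utility under each assumption:

```python
# Toy model: two candidate universes with equal prior probability.
# Per-person utilities are invented for illustration.
universes = {
    "small": [5.0],            # one person with utility 5
    "large": [1.0, 1.0, 1.0],  # three people with utility 1 each
}
prior = {"small": 0.5, "large": 0.5}

# SSA: behind the veil you are a random observer *within* the actual
# universe, so expected personal utility averages within each universe
# and weights universes by the bare prior -> average utilitarianism.
eu_ssa = sum(prior[u] * sum(ppl) / len(ppl) for u, ppl in universes.items())

# SIA: existing at all is more likely in populous universes, so the
# prior gets reweighted by population before averaging within.
weight = {u: prior[u] * len(ppl) for u, ppl in universes.items()}
z = sum(weight.values())  # z = prior-expected population
eu_sia = sum(weight[u] / z * sum(universes[u]) / len(universes[u])
             for u in universes)

# Expected *total* utility under the bare prior, for comparison.
eu_total = sum(prior[u] * sum(ppl) for u, ppl in universes.items())

print(eu_ssa)      # 3.0 -- expected average utility
print(eu_total)    # 4.0 -- expected total utility
print(eu_sia * z)  # 4.0 -- SIA personal EU is total EU up to the constant z
```

The last line shows the sense in which SIA tracks total utility in this sketch: SIA’s expected personal utility equals expected total utility divided by the constant z, the prior-expected population.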
> As for Heideggerianism, I agree that it does not necessarily “correspond” to SSA in any way. However, I do feel that it is likely incompatible with SIA. As noted in my post, I am a bit uncertain about the consequences of Heideggerianism, so I am happy to change my argument to a more general “parochial” vs. “non-parochial” form, using the language you have suggested.
I see that Rawlsianism requires an original position. But such an original position is required for both SSA and SIA. To my mind, the difference is that the SSA original position is just a prior over universes, while the SIA original position includes both a prior over universes and an assumption of subjective existence, which is more likely to be true of universes with high population. Both SSA and SIA agree that you aren’t in an empty universe a priori, but that’s an edge case; SIA scales continuously with population while SSA just has a step from 0 to 1.
Heideggerianism doesn’t seem to believe in an objective universe in a direct sense, e.g. the being of tools is not the being of the dynamical physical system that a physicalist would take to correspond to the tool, because the tool also has the affordance of being used. So it’s unclear how to reconcile Heideggerianism with anthropics as a whole. I’m speculating on various political correspondences, but I don’t see how to derive them from Heideggerianism and anthropics; I’m just noting possible similarities in conclusions.
> Rawlsianism, however, also includes maximin, which seems more normative.
I agree that the maximin of Rawlsianism is purely normative. What I was getting at was the veil of ignorance itself. Perhaps it is worth explicitly saying “oops, I forgot about that” for this point.
> If you are behind a veil of ignorance and optimize expected utility under SSA, then you will want the average utility in your universe to be as high as possible.
Yes, I agree. However, I still feel that if you are willing to believe in the veil of ignorance, you should also believe in SIA.
> But such an original position is required for both SSA and SIA. To my mind, the difference is that the SSA original position is just a prior over universes, while the SIA original position includes both a prior over universes and an assumption of subjective existence, which is more likely to be true of universes with high population.
Again, I agree. However, I feel that the veil of ignorance needs both the prior over universes and the prior as it stands before the assumption of subjective existence, since the veil is willing to modify existential properties of the observer, without which the observer would not have the same subjective existence.
> Heideggerianism doesn’t seem to believe in an objective universe in a direct sense
This is the very reason I believe that it should be OK with the “prior over universes” present in SSA. If reality is not objective, then it is easier to understand this prior as “uncertainty regarding the population of this universe” rather than “the potential of being in another universe which has a different population.” The potential universes and the actual universe become ontologically more similar since they are both non-objective. I have to admit that this is the point I am least certain of, though.
> SSA and SIA are positive theories. They make actual predictions about the future. This means that in the fullness of time, we should know whether SSA or SIA is correct.
Sounds like a crux. I think this is obviously not the case, though I fail to formulate a sense of “positive theories” that would turn this impression into a clear argument.
What I meant by “positive theories” is “theories that can be falsified.” I think it would be fine to literally call them “scientific theories.” (I don’t think there is anything particularly deep here; just Karl Popper’s thoughts on science.) For example, if the total human population ends up being 10^20, then I would consider that as having falsified SSA. In a sense, the future of human history becomes a science experiment that tests SSA and SIA as hypotheses. Perhaps I should have relabeled them SSH and SIH.
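That falsification intuition can be made quantitative with a doomsday-style calculation; the ~10^11 birth rank is the usual rough estimate of humans born to date, and the even prior odds and “short future” population are purely illustrative:

```python
# Under SSA (with humans as the reference class), the likelihood of
# observing birth rank r in a world whose total population is N
# is 1/N for r <= N, since you are a uniform draw from the N slots.
rank = 1e11      # rough number of humans born so far
n_short = 2e11   # illustrative "short future": history is about half over
n_long = 1e20    # the "long future" figure from the text

assert rank <= n_short <= n_long  # the 1/N likelihoods apply

# With even prior odds, the posterior odds equal the likelihood ratio:
odds = (1 / n_short) / (1 / n_long)
print(odds)  # 500000000.0 -- SSA puts ~billion-to-one odds against the
             # 10^20 outcome, so actually reaching it would tell against SSA
```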
This stands in contrast with normative statements like “murder is wrong,” which cannot be falsified by experiment.
We can do a ritual that falsifies, but that doesn’t by itself explain what’s going on, as the shape of justification for knowledge is funny in this case. So merely obtaining some knowledge is not enough; it’s also necessary to know a theory that grounds the event of apparently obtaining such knowledge in some other meaningful fact, justifying or explaining the knowledge. As I understand them, SSA vs. SIA are not about facts at all; they are variants of a ritual for assigning credence to statements that normally have no business having credence assigned to them.
Just as a Bayesian prior for even unique, conspicuously non-frequentist events can be reconstructed from preference, there might be some frame where anthropic credences are decision-relevant, which grounds them in something other than their arbitrary definitions. The comment by jessicata makes sense in that way, finding a role for anthropic credences in various ways of calculating preference. But it’s less clear than for either updateful Bayesian credences or utilities, and I expect that there is no answer that gives them robust meaning beyond their role in informal discussion of toy systems of preference.
Yes, I think you are right. It might be best for me to abandon the idea entirely.
Sorry for wasting everybody’s time.