I don’t think it’s unreasonable to think of society as having emergent goals, and that fulfilling those goals would benefit it.
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less.
But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is:
Why should I care?
Society’s emergent goals can go take a flying leap, as can evolution’s goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
Sorry, when I said “predictive power”, I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.
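One rough way to make that claim precise (the notation here is purely illustrative): let $A(s, C)$ be the predictive accuracy you can achieve by adopting stance $s$ toward society under a fixed computational budget $C$. Then the claim is that, for the budgets actually available to us,

$$A(\mathrm{physical}, C) \;>\; \max\bigl(A(\mathrm{intentional}, C),\; A(\mathrm{design}, C)\bigr),$$

i.e. once compute is held fixed, the physical stance wins outright rather than merely trading accuracy against efficiency.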