It’s a cute metaphor; and for anyone versed in RPG lore, it is (it seems to me) likely to be helpful, descriptively, in conceptualizing the facts of the matter (the evolutionary origins of morality, etc.).
But the substantive conclusions in this post are unsupported (and, I think, unsupportable). Namely:
Some game-theoretic strategies (what Nietzsche would call “tables of values”) are more survival-promoting than others. That’s the sense in which you can get from “is” to “ought.”
To the contrary, this does not get you one iota closer to “ought”.
Sure, some strategies are more survival-promoting. But does that make them morally right? Are you identifying “right” with “survival-promoting”, or at least claiming that “right”, as a concept, must contain “survival-promoting”? Either way, that is an “ought” claim, and without making such a claim, you cannot get from “this strategy is survival-promoting” to “it is right to execute this strategy”.
(Now, you might say that acting on any moral view other than “what is survival-promoting is right” will make you fail to survive, and then your views on morality will become irrelevant. This may be true! But does that make those other moral views wrong? No, unless you, once again, adopt an “ought” claim like “moral views which lead to failure to survive are wrong”, etc. In short, the is-ought gap is not so easily bridged.)
The way I think the intellect plays into “metaprogramming” the player is indirect: you can infer what the player is doing, do some formal analysis about how that will play out, comprehend (again at the “merely” intellectual level) whether there’s an error or something that’s no longer relevant/adaptive, plug that new understanding into some change that the intellect can effect (maybe “let’s try this experiment”), and maybe somewhere down the chain of causality the “player”’s strategy changes.
Any “character” who does such a thing is, ultimately, still executing the strategy selected by the “player”. “Characters” cannot go meta. (“Character” actions can end up altering the population of “players”—though this is not quite yet within our power. But in such a case, it is still the “players” that end up selecting strategies.)
I strongly agree with these comments regarding is-ought. To add a little: talking about winning/losing, effective strategies, or game theory assumes a specific utility function. To say that Maria Teresa “lost”, we first need to agree that death and pain are bad. And even the concept of “survival” is not really well-defined. What does it mean to survive? If humanity is replaced by “descendants” that are completely alien or even monstrous from our point of view, did humanity “survive”? Surviving means little without thriving, and both concepts are subjective: specifying either requires already having some kind of value system.
Og see 21st century. Og say, “Where is caveman?”
3-year-old you sees present-day you...
Present-day you sees 90-year-old you...
90-year-old you sees your 300-year-old great-great-grandchildren...
“After, therefore the fulfillment of.” Is this your argument, or is there something more implied that I’m not seeing?
As it is, this seems to Prove Too Much.
I’m raising a question more than making an argument. Are there futures that would seem completely alien or even monstrous to present-day people, yet whose inhabitants would consider them a vast improvement over our present, their past? Would these hypothetical descendants regard, as mere paperclipping, an ambition to fill the universe forever with nothing more than people comfortably like us?
“Of Life only is there no end; and though of its million starry mansions many are empty and many still unbuilt, and though its vast domain is as yet unbearably desert, my seed shall one day fill it and master its matter to its uttermost confines. And for what may be beyond, the eyesight of Lilith is too short. It is enough that there is a beyond.”
To the contrary, this does not get you one iota closer to “ought”.
This is true, but I do think there’s something being pointed at that deserves acknowledging.
I think I’d describe it as: you don’t get an ought, but you do get to predict what oughts are likely to be acknowledged. (In future/in other parts of the world/from behind a veil of ignorance.)
That is, an agent who commits suicide is unlikely to propagate; so agents who hold suicide as an ought are unlikely to propagate; so you don’t expect to see many agents with suicide as an ought.
And agents with cooperative tendencies do tend to propagate (among other agents with cooperative tendencies); so agents who hold cooperation as an ought tend to propagate (among...); so you expect to see agents who hold cooperation as an ought (but only in groups).
And for someone who acknowledges suicide as an ought, this can’t convince them not to; and for someone who doesn’t acknowledge cooperation, it doesn’t convince them to. So I wouldn’t describe it as “getting an ought from an is”. But I’d say you’re at least getting something of the same type as an ought?
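For concreteness, here’s a minimal sketch of that selection story (a toy model of my own, not anything from the post; the assortative-matching parameter ASSORT, the prisoner’s-dilemma payoffs, and the strategy labels are all assumptions of the sketch):

```python
# Toy replicator-style simulation: which "oughts", when acted on,
# persist in the population? Strategies: "S" = suicide (acted on),
# "D" = defect, "C" = cooperate.
import random

PAYOFF = {  # (my move, partner's move) -> my payoff (standard PD values)
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
ASSORT = 0.8     # probability of being paired with your own type ("in groups")
POP_SIZE = 1000

def step(pop):
    # Acting on "suicide is an ought" removes the agent before play.
    pop = [a for a in pop if a != "S"]
    fitness = []
    for a in pop:
        # Assortative pairing: cooperators mostly meet cooperators.
        partner = a if random.random() < ASSORT else random.choice(pop)
        fitness.append(0.1 + PAYOFF[(a, partner)])  # baseline keeps weights positive
    # Fitness-proportional reproduction back up to POP_SIZE.
    return random.choices(pop, weights=fitness, k=POP_SIZE)

pop = ["S"] * 200 + ["D"] * 400 + ["C"] * 400
for _ in range(50):
    pop = step(pop)
print({s: pop.count(s) for s in "SDC"})
# Typical run: "S" vanishes after one generation; with ASSORT high,
# "C" takes over, i.e. cooperation propagates, but only "in groups".
# With ASSORT = 0 (random mixing), "D" wins instead.
```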
First of all, there isn’t anything that’s “of the same type as an ought” except an ought. So no, you’re not getting any oughts, nor anything “of the same type”. It’s “is” all the way through here.
More to the point, I think you’re missing a critical layer of abstraction/indirection: namely, that what you can predict, via the adaptive/game-theoretic perspective, isn’t “what oughts are likely to be acknowledged”, but “what oughts will the agent act as if it follows”. Those will usually not be the same as what oughts the agent acknowledges, or finds persuasive, etc.
This is related to “Adaptation-Executers, Not Fitness-Maximizers”. An agent who commits suicide is unlikely (though not entirely unable!) to propagate, this is true, but who says that an agent who doesn’t commit suicide can’t believe that suicide is good, can’t advocate for suicide, etc.? In fact, such agents—actual people, alive today—can, and do, all these things!
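In toy-model terms (again a sketch of my own, with assumed fields and frequencies): if selection only ever sees behavior, a professed “ought” that nobody acts on is invisible to it.

```python
# Sketch: selection filters on what agents *do*, not on what they *profess*.
# Each agent is a pair (acts_on_suicide, professes_suicide_is_good).
import random

POP_SIZE = 1000
pop = [(random.random() < 0.5, random.random() < 0.5) for _ in range(POP_SIZE)]

for _ in range(20):
    survivors = [a for a in pop if not a[0]]     # behavior is selected against
    pop = random.choices(survivors, k=POP_SIZE)  # professing is untouched

acts = sum(a[0] for a in pop)
professes = sum(a[1] for a in pop)
print(f"act on suicide: {acts}, profess it: {professes}")
# Typical run: acting drops to 0 after one generation, while professing
# drifts around its initial ~50%; the acknowledged "ought" and the
# enacted "ought" come apart, as with actual people alive today.
```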