I am not fully committed to eliminative materialism, just trying to push it as far as possible, as I see it as the best chance at clarifying what consciousness does.
As for the last paragraph, if your analysis is correct, then it just means that a classical hedonic utilitarian + eliminative materialist would be a rare occurrence in this world, since such agents are unlikely to behave in ways that keep them in existence.
If the project of eliminative materialism were ever fully finished, it would completely remove value judgments from human language. In the past, human languages referred to the values of many things: the values of animals, plants, mountains, rivers, and so on. This has progressively narrowed, and now Western languages refer only to the values of the biological neural networks carried in animal bodies. If this continues, it could lead to a language that does not refer to any value at all, but I don’t know what such a language would be like.
The Heptapod language seems to be value-free, describing the past and the future in the same factual way. Human languages describe only the past factually; the future they describe in terms of value. A value-free human language could be like the Heptapod language. In Story of Your Life, the human linguist protagonist who struggled to communicate with the Heptapods underwent a partial transformation of mind, and came to sometimes see the past and future in the same descriptive, value-free way. She conceived a child with her spouse, knowing the child would die in an accident. She did it not because of a value calculation; an explanation of “why she did it” would instead have to be something like:
On a physical level, because of atoms and stuff.
On a conscious level, because that’s the way the world is. To see the future and then “decide” whether to play it out or not is not physically possible.
Because values are intrinsically non-physical? Because agents don’t have preferences? Because agents don’t want to talk about preferences?
In a language consistent with deterministic eliminative materialism, value judgments don’t do anything, because there are no alternative scenarios to judge between.
I am not sure about nondeterministic eliminative materialism. Still, if consciousness and free will can be eliminated, then even with true randomness in this world, value judgments seem not to do anything.
Suppose I build a deterministic agent which has a value function in the most literal sense, i.e. it has to call the function to get the values of various alternative actions in order to decide which one to perform. Would you still say it has no use for value judgments?
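A minimal sketch of the kind of agent I mean, assuming only a toy value function and a fixed list of candidate actions (every name here is illustrative, not an existing library):

```python
# Toy illustration of the agent described above: its "value judgments" are
# literal function calls over alternative actions. All names are hypothetical.

def value_function(state: int, action: str) -> float:
    """Assign a numeric value to taking `action` in `state` (arbitrary toy rule)."""
    base = {"wait": 0.0, "eat": 1.0, "explore": 0.5}
    return base[action] - 0.1 * state

def decide(state: int, actions: list[str]) -> str:
    """Deterministically pick the action whose evaluated value is highest."""
    # The agent has to call value_function on each alternative before acting;
    # comparing those returned values is what selects the action.
    return max(actions, key=lambda a: value_function(state, a))

if __name__ == "__main__":
    print(decide(state=3, actions=["wait", "eat", "explore"]))  # prints "eat"
```

Every step here is deterministic, yet the call to value_function over each alternative is an explicit part of how the action gets selected.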
An agent, an entity that acts, cannot say “what will be, will be”, because it makes decisions, and the decisions it makes are a component of the future. If it does not know the decision it will make before it makes it, it is in a state of subjective uncertainty about the future. Subjective uncertainty and objective determinism are quite compatible.
I think it is possible that you are being misled by fictional evidence. In Arrival, the Heptapods’ knowledge of the future follows straightforwardly from the future being fixed, but everything we know indicates considerable barriers between determinism and foreknowledge.