I don’t know why you would want to say you have an explanation of morality when you are an error theorist.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
I also don’t know why you are an error theorist. U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”.
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist)
I’m not sure at all what those mean. If they mean that I think no sentence about morality can have a truth value, that is false. “DaFranker finds it immoral to coat children in burning napalm” is true, with more confidence than I can reasonably express (I’m about as certain of this belief about my moral system as I am of things like 2 + 2 = 4).
However, the sentence “It is immoral to coat children in burning napalm” returns an error for me.
You could say I consider the function “isMoral?” to take as input a morality function, a current worldstate, and an action to be applied to this worldstate, whose morality one wants to evaluate. A wrapper function “whichAreMoral?” exists to check more complicated scenarios with multiple possible actions and other fun things.
See, if the “morality function” input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
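To make the analogy concrete, here is a minimal Haskell sketch of the signature I have in mind (all the type names are illustrative stand-ins of mine, and since Haskell doesn’t allow “?” in identifiers I’ve dropped it):

    -- Illustrative toy types; the real "functions" live in a brain, not a type system.
    data WorldState = WorldState { facts :: [String] }  -- stand-in for a full world model
    newtype Action  = Action String                     -- e.g. Action "coatChildInNapalm"

    -- A morality function judges an action against a worldstate.
    type MoralityFn = WorldState -> Action -> Bool

    -- isMoral only produces a truth value once ALL of its inputs are supplied.
    isMoral :: MoralityFn -> WorldState -> Action -> Bool
    isMoral f w a = f w a

    -- The wrapper: screen several candidate actions at once.
    whichAreMoral :: MoralityFn -> WorldState -> [Action] -> [Action]
    whichAreMoral f w = filter (isMoral f w)

In this typed version the “crash” is even stricter: omit the morality-function argument and the expression doesn’t type-check at all, so the question cannot even be posed.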
he is precisely asking you what it would mean for U or D to have truth values.
Yes.
In the example above, my “isMoral?” function can only return a truth-value when you give it inputs and run the algorithm. You can’t look at the overall code defining the function and give it a truth-value. That’s just completely meaningless. My current understanding of U and D is that they’re fairly similar to this function.
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
I agree somewhat. To use another code analogy, here I’ve stumbled upon the symbol “Right”, and then I look back across the code for this discussion and I can’t find any declarations or “Right = XXXXX” assignment operations. So clearly the other programmers are using different linked libraries that I don’t have access to (or they forgot that “Right” doesn’t have a declaration!)
If they mean that I think no sentence about morality can have a truth value, that is false. “DaFranker finds it immoral to coat children in burning napalm” is true, with more confidence than I can reasonably express
An error theorist could agree with that. It isn’t really a statement about morality; it is about belief. Consider “Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies”.
That isn’t a true statement about harpies.
See, if the “morality function” input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality.
And it doesn’t matter what the morality function is? Any mapping from input to output will do?
You can’t look at the overall code defining the function and give it a truth-value. That’s just completely meaningless.
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
I agree somewhat. To use another code analogy, here I’ve stumbled upon the symbol “Right”, and then I look back across the code for this discussion and I can’t find any declarations or “Right = XXXXX” assignment operations.
It’s worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory “right”. That’s “right” in one context. In this context we want a “right” theory of morality, that is, a theoretically-right theory of the morally-right.
And it doesn’t matter what the morality function is? Any mapping from input to output will do?
Yes.
I have a standard library in my own brain that determines what I think looks like a “good” or “useful” morality function, and I only send morality functions that I’ve approved into my “isMoral?” function. But “isMoral?” can take any properly-formatted function of the right type as input.
And I have no idea yet what it is that makes certain morality functions look “good” or “useful” to me. Sometimes, to try and clear things up, I try to recurse “isMoral?” on different parameters.
e.g.: “isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)” would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.
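Extending the toy sketch from before (“looksGood” and “judgeAdoption” are purely hypothetical names of mine, and the placeholder body is not a claim about what my brain actually computes):

    -- looksGood stands in for whatever opaque standard library in my brain
    -- approves candidate morality functions; I don't know what it computes.
    looksGood :: MoralityFn -> Bool
    looksGood _ = True  -- placeholder

    -- Only approved functions are ever passed to isMoral.
    screen :: [MoralityFn] -> [MoralityFn]
    screen = filter looksGood

    -- The recursion from the example: my default function judging the act of
    -- adopting the new function's verdict on burning babies.
    judgeAdoption :: MoralityFn -> MoralityFn -> WorldState -> Action -> Bool
    judgeAdoption defaultF newF w a =
      isMoral defaultF w (Action ("adopt verdict: " ++ show (isMoral newF w a)))

Note that in the typed version the inner call returns a Bool, which has to be wrapped back up as an Action before the outer function can judge it; the one-liner above glosses over that.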
An error theorist could agree with that. It isn’t really a statement about morality; it is about belief. Consider “Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies”. That isn’t a true statement about harpies.
I’m not sure what you mean by “it isn’t really a statement about morality; it is about belief.”
Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. “I consider it immoral to coat children in napalm” certainly sounds like a statement about my morality though.
“isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False” would be a good way to put it.
It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of “better” here are inside the source code of DaFranker_IdealMoralFunction, and I don’t have access to that source code (it’s probably not even written yet).
Also note that “isMoral? MoralIntuition w a” =/= “isMoral? [MoralFunctionsInBrain] w a” =/= “isMoral? DominantMoralFunctionInBrain w a” =/= “isMoral? CurrentMaxMoralFunctionInBrain w a” =/= “isMoral? IdealMoralFunction w a”.
In other words, when one thinks of whether or not to coat a child in burning napalm, many functions are executed in the brain, and some of them may disagree on the betterness of some details of the situation. One of those functions usually takes the lead and becomes what the person actually does when faced with that situation (this dominance is dynamically computed at runtime, so at each evaluation the result may be different if, for instance, one’s moral intuitions have changed the internal power balance within the brain). One could in theory make up a function that represents the pareto-optimal compromise of all those functions. And all of this is reviewed in very synthesized form by the conscious mind to generate a Moral Intuition. All of which is very different from what would happen if the conscious mind could read the source code for the set of moral functions in the brain and edit things to be the way it prefers, recursively, towards a unique ideal moral function.
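A rough sketch of that runtime story, still in the same toy vocabulary (the weights and the weighted vote are invented simplifications for illustration, not a claim about how brains actually combine these):

    import Data.List (maximumBy)
    import Data.Ord  (comparing)

    type Weight = Double  -- hypothetical "internal power" of a moral function

    -- Which function takes the lead is recomputed at every evaluation,
    -- because the weights drift as intuitions change.
    dominant :: [(Weight, MoralityFn)] -> MoralityFn
    dominant = snd . maximumBy (comparing fst)

    -- The felt Moral Intuition, crudely synthesized here as a weighted vote.
    moralIntuition :: [(Weight, MoralityFn)] -> MoralityFn
    moralIntuition fns w a =
      sum [ wt | (wt, f) <- fns, f w a ] > sum (map fst fns) / 2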
So is it meaningless that
some simulations do (not) correctly model the simulated system
some commercial software does (not) fulfil a real-world business requirement
some algorithms do (not) correctly compute mathematical functions
some games are (not) entertaining
some trading software does (not) return a profit
Not quite, but those are different questions. Is the trading software itself “true” or “false”? No. Is my approximate model of how the trading software works “true” or “false”? No.
Is it “true” or “false” that my approximate model of how the trading software works is better than competing alternatives? Yes, it is true (or false). Is it “true” or “false” that the trading software returns a profit? Yes, it is.
See, there’s an element of context that lets us ask true/false questions about things. “Politics is true” is meaningless. “Politics is the most efficient method of managing a society” is certainly not meaningless, and with more formal definitions of “efficient” and “managing” one could even produce experimental tests to determine by observations whether that is true or false.
However, when one says “utilitarianism is true”, I just don’t know what observations to make. “utilitarianism accurately models DaFranker’s ideal moral function” is much better—I can compare the two, I can try to refine what is meant by “utilitarianism” here exactly, and I could in principle determine whether this is true or false.
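The comparison itself is straightforward to sketch; the hard part is producing the two functions and a representative set of test cases. A minimal version, reusing the toy types from earlier:

    -- Fraction of sampled cases on which two morality functions agree.
    -- Assumes a non-empty list of test cases.
    agreementRate :: MoralityFn -> MoralityFn -> [(WorldState, Action)] -> Double
    agreementRate f g cases =
      fromIntegral (length [ () | (w, a) <- cases, f w a == g w a ])
        / fromIntegral (length cases)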
“as per utilitarianism’s claim, what is morally best is to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” sounds like it also makes sense. But then you run into a snag while trying to evaluate the truth-value of this. What is “morally best” here? According to what principle? It seems this “morally best” depends on the reader, or myself, or some other point of reference.
We could decide that this “morally best” means that it is the optimal compromise between all of our morality functions, the optimal way to resolve conflicts of interest with the least total loss in utility and highest total gain.
We could assign a truth-value to that: compute all possible forms of social agreement about morality, all possible rule systems, and if the above utilitarian claim is among the pareto-optimal choices on the game payoff matrix, then the statement is true; if it is strictly dominated by some other outcome, then it is false. Of course, actually running this computation would require solving all kinds of problems and getting various sorts of information that I don’t even know how to find ways to solve or get. And it might require a Halting Oracle or some form of hypercomputer.
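The dominance check at the end is actually the easy part; here is a minimal sketch under the stipulated definition, with each option represented as a vector of utilities, one per agent (enumerating “all possible rule systems” to feed into it is where the hypercomputer comes in):

    type Payoffs = [Double]

    -- x strictly dominates y: at least as good for everyone, strictly better for someone.
    strictlyDominates :: Payoffs -> Payoffs -> Bool
    strictlyDominates x y = and (zipWith (>=) x y) && or (zipWith (>) x y)

    -- An option is pareto-optimal iff no alternative strictly dominates it.
    paretoOptimal :: [Payoffs] -> Payoffs -> Bool
    paretoOptimal options candidate = not (any (`strictlyDominates` candidate) options)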
At any rate, I don’t think “as per utilitarianism’s claim, it is pareto-optimal across all humans to maximize the sum of x where each x is a measure u() of each agent’s ideal morality function” is what you meant by “utilitarianism is true”.
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values.
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
When they are both trying to give accounts of what it would mean for something to be “right”, it seems this question becomes pretty silly.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is.
The question of what is right is also about the most important question there is.
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
My main point is that I haven’t the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That’s why I was asking you, since you seem to know.
I am not assuming they have to be implemented mathematically. And I thought your problem was that you didn’t have a procedure for identifying correct theories of morality?
By comparing them to abstract formulas, which don’t have truth values … as opposed to equations, which do, and to applied maths, which does, and theories, which do...
I’ll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this.
I have no idea why you would say that. Belief in objective morality is debatable but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
I never said belief in “objective morality” was silly. I said that trying to decide whether to use U or D by asking “which one of these is the right way to resolve conflicts of interest?” when accepting one or the other necessarily changes variables in what you mean by the word ‘right’ and also, maybe even, the word ‘resolve’, sounds silly.
I said that trying to decide whether to use U or D by asking “which one of these is the right way to resolve conflicts of interest?” when accepting one or the other necessarily changes variables in what you mean by the word ‘right’ and also, maybe even, the word ‘resolve’, sounds silly.
That would be the case if “right way” meant “morally-right way”. But metaethical theories aren’t compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.
That would be the case if “right way” meant “morally-right way”.
That’s just how I understand that word. ‘Right for me to do’ and ‘moral for me to do’ refer to the same things, to me. What differs in your understanding of the terms?
If metaethics were just obviously unsolvable, someone would have noticed.
Remind me what it would look like for metaethics to be solved?
That’s just how I understand that word. ‘Right for me to do’ and ‘moral for me to do’ refer to the same things, to me. What differs in your understanding of the terms?
e.g. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn’t morally-right.
Remind me what it would look like for metaethics to be solved?
Unsolved-at-time-T doesn’t mean unsolvable. Ask Andrew Wiles.
e.g. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn’t morally-right
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn’t refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
Unsolved-at-time-T doesn’t mean unsolvable. Ask Andrew Wiles.
I’m no good at math, but it’s my understanding that there was an idea of what it would look like for someone to solve Fermat’s Problem even before someone actually did so. I’m skeptical that ‘solving metaethics’ is similar in this respect.
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit), which doesn’t refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error.
You seem to have interpreted that the wrong way round. The point was that there are different and incompatible notions of “right”. Hence “the right theory of what is right to do” is not circular, so long as the two “rights” mean different things. Which they do (theoretical correctness and moral obligation, respectively).
I’m no good at math, but it’s my understanding that there was an idea of what it would look like for someone to solve Fermat’s Problem even before someone actually did so. I’m skeptical that ‘solving metaethics’ is similar in this respect.
No one knows what a good explanation looks like? But then why even bother with things like CEV, if we can’t say what they are for?