What a great question! Traditionally, the operations performed on a god are:
asking for goods and services
grounding morality on
ascribing mysterious feelings and special mind states to the influence of
giving thanks for blessings of existence
basing the feelings of security and purpose in life on
invoking as a semantic stopsign for metaphysical questions
None of these, except the last one, could be performed on a god-as-a-mathematical-like-entity, I think.
Actually, I think grounding morality can be performed on a god-as-a-mathematical-like-entity if you wanted to. For certain settings of God you even get interesting and neat properties, which can be pretty useful (in a sense similar to this) if FAI is not near or possible and you question moral progress.
You can also use God to avoid certain kinds of blackmail and do other neat superrational tricks. Who knows, it may even be the best implementation for this that we can currently build on some human brains.
Though in practice the reason we have Jesus is so we can ask “What would Jesus do?”, which is easier to answer than “What would the ideal rational agent with unlimited computational resources do?”.
’Course, Jesus says “Be ye therefore perfect, even as your Father which is in heaven is perfect.”; I think we still have a moral obligation to figure out the theoretical foundations of justification for perfect agents.
In Stoicism, we call this type of person a sage. It is actually a very practical concept to make use of. During before-sleep meditation, I’ll play back my entire day in fast-forward mentally, but alongside me I imagine a semi-transparent sage-me and I “watch” as our two paths diverge (with the sage-me living a perfectly virtuous life and me falling far short).
Interesting; I am annoyed and relieved that no Stoic seems to have nominated any particular historical person as a sage.
I don’t think I could pull off that kind of meditation, due to my having too much structural uncertainty about ethics and meta-ethics. What’s that Borges quote? “I have known that thing the Greeks knew not—uncertainty.”
BTW, random LW people, here is the SEP entry on Stoicism.
I notice that, like LessWrong, the Stoics are big on Logos and instrumental rationality and related ethics, but their (meta-)physics and theology strike me as fuzzy and underdeveloped.
For this, there’d have to be a well-defined God, provably unique up to isomorphism.
Why so? How well-defined? I find it useful to base normative epistemic arguments off of the existence of Chaitin’s omega, even though there isn’t a unique omega and even though we barely know any bits of any of them. Similarly, one could base moral arguments off of just the knowledge that a normative standard exists against which moral agents could be compared, or by which they could in theory be judged; postulating such a standard is itself a non-trivial meta-ethical position.
I’m not sure exactly what point you wish to illustrate with the Chaitin’s omega example. Yes, its value depends on the TM coding. But when a specific one is chosen, the value is unique.
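(For reference, a sketch of the definition at issue, with U standing for whatever prefix-free universal machine one fixes; that choice of U is the “TM coding” being discussed:)

```latex
% Chaitin's halting probability relative to a fixed prefix-free universal machine U:
% a sum over all programs p that halt when run on U, weighted by program length.
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

So each choice of U yields a different but perfectly definite real number in (0, 1); the non-uniqueness lives entirely in the choice of U, not in the value once U is fixed.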
Well, I can certainly ask the number two for goods and services, so if that’s a useful operation to perform, there ya go.
My chances of receiving those goods and services won’t increase if I do so, but that’s something else again.
Similarly, I can ascribe mysterious feelings and special mind states to the influence of a mathematical structure, thank it for the blessings of existence, base my feelings of security and purpose in life on it, and (as you note) invoke it to avert metaphysical questions.
I admit, I don’t quite understand how to ground morality on a mathematical structure, but then I don’t quite understand how to ground morality on a traditional god, either. (I recognize that many people claim to do this.)
I’ve never quite understood how grounding morality on a traditional god is supposed to work.
Well then, you could also multiply gods by constants and add them together, producing a vector space over a divine basis.
Grounding morality works straightforwardly, I think: God said thou must not kill, love thy neighbour, etc.
Well, yes, I understand that various commands, preferences, etc. are ascribed to gods, and that followers of those gods attempt to obey those commands and satisfy those preferences.
I’ve just never understood what morality has to do with any of that.
I mean, sure, presumably a suitably knowledgeable (let alone omniscient) god is capable of giving moral commands, in that it would know what the moral things to do are, in the same sense that it is capable of telling me what stocks to purchase in order to maximize my earnings, or how most efficiently to breed cows. But to conclude that therefore wealth, morality, or cow-breeding is grounded on god (in a way that poverty, immorality, and cow-genocide, for example, are not) has always seemed odd to me.
(Divine command theory, where you obey God because He’s God as such (and not because what He commands is good), is not the most popular way to tie God into your meta-ethics, and it has various semantic problems. In better-justified meta-ethics, God is useful as a necessary final cause of existence, but it’s not immediately derivable what properties He has that make Him a justified final cause, nor how we as creatures should orient our actions towards Him—these are matters of ethics that are somewhat decoupled from “grounding” morality in God in a higher-level sense. God is used in such meta-ethics in a way similar to how an oracle machine is used in theoretical computer science; that is, He’s an important part of a larger interconnected framework. One can’t evaluate theistic meta-ethics without knowing what the other parts are.)
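(A small aside on the oracle-machine analogy: in computability theory an oracle is a black box the surrounding algorithm may query but cannot compute itself, and the interesting content lies in the algorithm built around it. Here is a minimal sketch of that usage pattern, purely my own illustration with hypothetical names:)

```python
# Minimal sketch of the oracle-machine usage pattern (illustrative only).
# The oracle is a black box the algorithm queries but could not compute itself;
# all names here are hypothetical.

from typing import Callable

# A (purely hypothetical) halting oracle: answers "does program p halt on input x?"
HaltingOracle = Callable[[str, str], bool]

def any_pair_halts(programs: list[str], inputs: list[str],
                   oracle: HaltingOracle) -> bool:
    """Decide, relative to the oracle, whether any (program, input) pair halts.

    Everything here is ordinary, fully specified computation except the call to
    `oracle`; the oracle is an essential component of the larger framework
    without being derivable from within it.
    """
    return any(oracle(p, x) for p in programs for x in inputs)
```

As I read the analogy, the point is that you evaluate the whole reduction rather than the black box in isolation, which matches “one can’t evaluate theistic meta-ethics without knowing what the other parts are.”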
Yeah, the better-justified version you describe strikes me as, if not necessarily better justified, at least more intelligible.
That said, now that I think about it a bit more, I’m enough of a consequentialist to have serious difficulty thinking straight about what it even means for a choice to be moral in the presence of a force capable of, in practical terms, divorcing my actions from their consequences. (Of course, not every theistic theory posits such a force, and it is possible to be in that position in a nontheistic context as well.)
I might quibble about your use of “popular” above, though, unless you really do mean it advisedly.
That is, it seems likely to me that Divine command theory is indeed the most popular approach, in the same sense that the most popular theory of ballistics predicts that when I drop a rock as I walk down the sidewalk, it will hit the ground a step or two behind me even though no halfway serious student of ballistics would predict any such thing. (Modulo extreme winds, anyway.)
But I’d love to be wrong about that.
I don’t know what meta-ethics are held by the Christian masses—does it actually come up very often?—but Catholic doctrine tends strongly towards Thomism, which isn’t a divine command theory, and Catholicism is the largest sect of Christianity. I suspect that most Catholics, upon considering the issue, would be dimly aware that divine command theory isn’t quite right. I don’t think that my “average Catholic” friend has ever considered meta-ethics in enough detail to distinguish between divine command theory and some alternative meta-ethical theory. After all, in all theistic meta-ethics morality stems from God in some sense; it’s just the exact way in which it does so that is contentious. The sort of distinctions that would need to be made are, I believe, quite beyond the philosophical competencies of your average Christian.
I think it goes something like this: (1) God created morality and cow-breeding, and (2) put the knowledge of it into humans [or, alternatively, the knowledge of morality was the result of eating the apple and knowing Good and Evil, I’m not sure], and (3) one of the important points of morality is that humans should have free will, and so (since God is moral) they do, and thus (4) they are free to practise immorality and cow-genocide.
If game-theoretic principles (like Nash equilibrium) are mathematical structures and contractarianism (such as Gauthier’s ethical theory) is true, then mathematical structures “ground morality”.
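(To make “game-theoretic principles are mathematical structures” a bit more concrete, here is a minimal sketch, using standard textbook Prisoner’s Dilemma payoffs rather than anything from this thread: a Nash equilibrium is just an action profile from which no player can profitably deviate unilaterally, a purely mathematical property of the payoff table.)

```python
# Minimal sketch (illustrative only): pure-strategy Nash equilibria of a
# two-player game, using standard textbook Prisoner's Dilemma payoffs.

from itertools import product

ACTIONS = ("cooperate", "defect")

# PAYOFFS[(row_action, col_action)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pure_nash_equilibria(payoffs, actions):
    """Return action profiles where neither player gains by switching unilaterally."""
    equilibria = []
    for row, col in product(actions, actions):
        row_payoff, col_payoff = payoffs[(row, col)]
        row_best = all(payoffs[(alt, col)][0] <= row_payoff for alt in actions)
        col_best = all(payoffs[(row, alt)][1] <= col_payoff for alt in actions)
        if row_best and col_best:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(PAYOFFS, ACTIONS))  # [('defect', 'defect')]
```

The lone equilibrium is mutual defection even though mutual cooperation pays both players more; as I understand it, that gap is exactly what Gauthier’s contractarianism tries to bridge by arguing that constrained (moral) maximizers do better than straightforward ones.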
Morality consists of courses of action to achieve a goal or goals, and the goal or goals themselves. Game theory, evolutionary biology, and other areas of study can help choose courses of action, and they can explain why we have the goals we have, but they can’t explain why we “ought” to have a given goal or goals. If you believe that a god created everything except itself, but including morality, then said god presumably can ground morality simply by virtue of having created it.
Yeah, that is the dominant view, but Gauthier actually attempts to answer the question “why be moral?” (not only the question of “what is moral?”) using game-theoretic concepts. In short, his answer is that being moral is rational. I don’t remember whether or not he tries to answer the question “why be rational?”; I haven’t read Morals by Agreement in years.
There are (at least) two meanings of “why ought we be moral”:
“Why should an entity without goals choose to follow goals”, or, more generally, “Why should an entity without goals choose [anything]”,
and, “Why should an entity with a top-level goal of X discard this in favor of a top-level goal of Y.”
I can imagine answers to the second question (it could be that explicitly replacing X with Y results in achieving X better than if you don’t; this is one driver of extremism in many areas), but it seems clear that the first question admits of no attack.
An entity without goals would not be reading Gauthier’s book.
Well, look at things like TDT/UDT for starters.
Though TDT and UDT weren’t designed to be moral as such; it just turns out that non-self-defeating behavior seems to necessitate some degree of something like morality, largely because self-ness is a slippery idea.