I honestly have a difficult time understanding the people (such as your “AI alignment researchers and other LWers, Moral philosophers”) who actually believe in Morality with a capital M. I believe they are misguided at best, potentially dangerous at worst.
I hadn’t heard of the Status Game book you quote, but for a long time now it’s seemed obvious to me that there is no objective true Morality: it’s purely a cultural construct, and mostly a status game. Any deep reading of history, cultures, and religions leads one to this conclusion.
Humans have complex values, and that is all.
We humans cooperate and compete to optimize the universe according to those values, as we always have, as our posthuman descendants will, even without fully understanding them.
I think you are misunderstanding what Wei_Dai meant by the “AI alignment researchers and other LWers, Moral philosophers” perspective on morality. It’s not about capital letters or the “objectivity” of our morality. It’s about that exact fact that humans have complex values and whether we can understand them and translate them into one course of action according to which we are going to optimize the universe.
Basically, as I understand it, the difference is between people who try to resolve the conflicts between their different values and generally think of them as an approximation of some coherent utility function, and those who don’t.
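A toy sketch of what “resolving conflicts into one coherent utility function” can mean in the simplest case: scalarize several competing values with explicit weights and pick the action that maximizes the combined score. All the action names, value dimensions, and weights below are made up for illustration.

```python
# Hypothetical actions scored (0..1) along several conflicting values.
actions = {
    "fund_parks":    {"fairness": 0.9, "prosperity": 0.3, "liberty": 0.6},
    "cut_taxes":     {"fairness": 0.2, "prosperity": 0.8, "liberty": 0.9},
    "build_transit": {"fairness": 0.7, "prosperity": 0.6, "liberty": 0.4},
}

# How much the agent cares about each value. Resolving conflicts means
# committing to some such trade-off, whether or not it is made explicit.
weights = {"fairness": 0.5, "prosperity": 0.3, "liberty": 0.2}

def utility(scores):
    """Weighted sum of value scores: one number per action."""
    return sum(weights[v] * s for v, s in scores.items())

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # the action with the highest combined utility
```

Of course, real human values resist this kind of clean linear aggregation; the point of the sketch is only to show what the “approximation of a coherent utility function” stance commits one to in miniature.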
It’s about that exact fact that humans have complex values and whether we can understand them and translate them into one course of action according to which we are going to optimize the universe.
If we agree humans have complex subjective values, then optimizing group decisions (for a mix of agents with different utility functions) is firmly a question for economic mechanism design—which is already a reasonably mature field.
A problem here, however, is the Myerson–Satterthwaite theorem, which implies that no trading mechanism between privately informed parties can be simultaneously efficient, individually rational, and budget-balanced. This suggests that auction runners who want to enable clean and helpful auctions for others risk being hurt when they express and pursue their own true preferences, or, if they take no such risks, become bad auctioneers for others.
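A minimal simulation of where that “the auctioneer gets hurt” intuition comes from, using the standard VCG (pivot) mechanism for bilateral trade. Under VCG, truthfulness and efficiency are preserved by having the buyer pay the seller’s cost while the seller receives the buyer’s value, so whenever a trade happens the broker must cover the gap out of pocket. The uniform value distributions below are an arbitrary modeling assumption.

```python
import random

def vcg_bilateral_trade(buyer_value, seller_cost):
    """VCG (pivot) mechanism for one buyer and one seller.
    Trade happens iff it is efficient (buyer values the good above the
    seller's cost). To keep both sides truthful, the buyer pays the
    seller's reported cost and the seller receives the buyer's reported
    value, so the broker absorbs the difference as a deficit."""
    if buyer_value > seller_cost:
        buyer_pays = seller_cost
        seller_gets = buyer_value
        broker_deficit = seller_gets - buyer_pays  # always >= 0
        return True, broker_deficit
    return False, 0.0

random.seed(0)
total_deficit = 0.0
trades = 0
for _ in range(10_000):
    b = random.uniform(0, 1)  # buyer's private value (assumed uniform)
    s = random.uniform(0, 1)  # seller's private cost (assumed uniform)
    traded, deficit = vcg_bilateral_trade(b, s)
    trades += traded
    total_deficit += deficit

print(f"trades: {trades}, average broker deficit per trade: "
      f"{total_deficit / trades:.3f}")
```

With uniform values the average deficit per completed trade comes out near 1/3: the broker systematically loses money to keep the mechanism honest and efficient, which is the Myerson–Satterthwaite tension in its simplest form.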
The thing that seems like it might just be True here is that Good Governance requires personal sacrifice by leaders, which I mostly don’t expect from normal human leaders unless those leaders are motivated by, essentially, “altruistic” “moral sentiment”.
It could be that I’m misunderstanding some part of the economics or the anthropology or some such?
But it looks to me like if someone says there is no such thing as moral sentiment, that implies they themselves do not have such sentiments, and so perhaps those specific people should not be given power, authority, or respect in social processes that are voluntary, universal, benevolent, and theoretically coherent.
The reasonableness of this conclusion goes some way toward explaining to me why there is so much “social signaling”, and also why so much of this signaling is fake garbage transmitted into the social environment by power-hungry psychos.
Well, that’s one way to do it. With its own terrible consequences, but let’s not focus on them for now.
What’s more important is that this solution is very general, while all human values belong to the same cluster. So there may be a more preferable, more human-specific solution to the problem.