I am confused about what you are trying to say. People at MIRI suspect smart decision theories will look at the source code of players, and so aren’t purely consequentialist in that sense.
The “folk decision theory” that people use in their day-to-day lives echoes the “smart decision theories” above, because we think about ‘the kind of person’ someone is. That seems sensible.
Could you clarify what you are getting at? Do you think we should be purely consequentialist? It’s probably a mistake to ignore certain steelmen of virtue ethics if you care about doing “the right thing.”
People at MIRI suspect smart decision theories will look at the source code of players, and so aren’t purely consequentialist in that sense.
First, I appreciate your original analogy between virtue ethics and source code. I would like to understand it better, since it looks to me like any normative ethics requires “looking into the source code”, though pure consequentialism can also be analyzed as a black box. I assume that what you mean is that virtue ethics requires deeper analysis than deontology (because the rules are simple and easy to check against?), and than consequentialism (because one can avoid opening the black box altogether?). Or am I misinterpreting what you said?
Do you think we should be purely consequentialist? It’s probably a mistake to ignore certain steelmen of virtue ethics if you care about doing “the right thing.”
Well, no, I am not prescriptive in the OP, only descriptive. And I agree (and have mentioned here several times) that virtue ethics and deontological rules are in essence precomputed patterns that provide approximate shortcuts to full, unbounded consequentialism in a wide variety of situations. Of course, they often become a case of lost purposes when people elevate them from the level of shortcuts to the complete description of shouldness.
I don’t know what “the ultimate decision theory” is, but I suspect this decision theory will contain both consequentialist and virtue ethical elements. It will be “consequentialist” in the trivial sense of picking the best alternative. It will be “virtue ethical” in the sense that it will in general do different things depending on what it can infer about other players based on their source code. In this sense I don’t think virtue ethics is a hack approximation to consequentialism, I think it is an orthogonal idea.
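The source-code-conditioned behavior described above can be illustrated with a toy one-shot prisoner’s dilemma in Python. Everything here is invented for illustration (the payoff numbers, the bot names, and the use of a function’s name as a stand-in for its full source text); it is a cartoon of the idea, not anyone’s actual formalism:

```python
# Payoffs for a one-shot prisoner's dilemma: (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def clique_bot(own_src, opp_src):
    """Cooperate only with an exact copy of itself.

    The decision depends on what can be inferred from the other
    player's source, not just on outcomes considered in isolation.
    """
    return "C" if opp_src == own_src else "D"

def defect_bot(own_src, opp_src):
    """Pure one-shot outcome-maximizer: ignores the opponent, always defects."""
    return "D"

def play(a, b):
    """Run two bots against each other, giving each the other's 'source'.

    Here a bot's 'source' is just its function name, standing in for
    the real source text.
    """
    move_a = a(a.__name__, b.__name__)
    move_b = b(b.__name__, a.__name__)
    return PAYOFFS[(move_a, move_b)]

print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation
print(play(clique_bot, defect_bot))  # (1, 1): mutual defection
```

The point of the sketch is that `clique_bot` is “virtue ethical” in the sense used above: its choice is a function of the opponent’s source, so it cooperates with a copy of itself while still protecting itself against an unconditional defector.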
That said, I am still confused by what you are trying to say!
I think we are using different definitions. I think of ethics first as applied to choosing one’s own decisions, and only second as a tool to analyze and predict the decisions of others.
If I were to program a decision bot, I would certainly employ a mixture of algorithms. Some of them would have a model of what it means to, say, be fair (a virtue) and generate possible actions based on that. Others would restrict possible actions based on a deontological rule such as “thou shalt not kill” (cf. Asimov’s laws). Yet others would search the space of outcomes of possible actions and pick the ones most in line with the optimization goal, if there is one. Different routines are likely to output different sets of actions, so they are not orthogonal.
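A minimal sketch of such a mixed decision bot, in Python. The situation, action names, and utility numbers are all made up for illustration; the point is only that the three kinds of routine can disagree:

```python
# A toy decision bot mixing the three kinds of routine described above.
# The situation, actions, and utility numbers are all invented.
situation = {
    "actions": {"share", "hoard", "steal"},
    "fair": {"share"},             # what a fairness model endorses
    "forbidden": {"steal"},        # what a deontic rule prohibits
    "utility": {"share": 2, "hoard": 3, "steal": 5},
}

def virtue_routine(s):
    """Generate candidate actions from a model of a virtue (here, fairness)."""
    return s["actions"] & s["fair"]

def deontic_routine(s):
    """Restrict the action space by a hard rule, Asimov-style."""
    return s["actions"] - s["forbidden"]

def consequentialist_routine(s, candidates):
    """Among the candidates, pick the action with the best outcome."""
    return max(candidates, key=lambda a: s["utility"][a])

print(sorted(virtue_routine(situation)))                          # ['share']
print(sorted(deontic_routine(situation)))                         # ['hoard', 'share']
print(consequentialist_routine(situation, situation["actions"]))  # steal
# Three routines, three different answers: they constrain one another
# rather than varying along independent axes.
```

The virtue routine endorses only `share`, the deontic routine permits `share` and `hoard`, and the unconstrained consequentialist routine picks `steal`, so the outputs conflict rather than being independent.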
“Orthogonal” means you can be a virtue ethicist without a commitment on consequentialism, and vice versa.
A virtue ethicist who is not a consequentialist is an old-school Greek ethicist.
A consequentialist who is not a virtue ethicist is a maximizer of the modern variety.
A virtue ethicist + consequentialist is someone who tries to work out ethics in multiplayer situations where some players are dicks and others are not. So defection/cooperation ought to depend on the ‘ethical character’ of the opponent, in some sense.
You can, but it’s pretty clear that in the examples given there is a tension between these approaches.