I also thought the robot’s answer missed the point quite badly … because it reduced the ought all the way down to an is, or rather a bunch of isses.
If what one ought to do reduces to what one would do, there are potentially eight billion answers to what one ought to do. That works for a decision theory, because decision theory only needs to guide individual actions, but not for an ethical theory, because ethical theories need to identify bad people who need punishment, and no one is bad by their own values. Predictive accuracy is irrelevant to both; it’s a different ball game.
There’s a consistent theme in rationalist writing on ethics where the idea that everyone has basically the same values, or “brain algorithms”, is just assumed … but it needs to be based on evidence as much as anything else. The Rationalist approach isn’t some deep insight; it is quite flawed.
Reducing ethical normativity isn’t bad, but doing it in a way that leads to sweeping subjectivism is bad. If you accept subjectivism, you miss better answers.
In the first comic, the engineer’s answer is also a plausible philosophical answer. The “philosopher” is assuming that identity has to be based on intrinsic properties; the “engineer” isn’t. But there is actually no philosophical fact that identity should be intrinsic. In Bensinger’s “When is a Mind Me”, he takes the “philosopher’s” perspective.
In the third comic, the philosopher is technically correct. You can’t achieve certainty from a finite chain of observations, and you can only make a finite chain. Modern empiricists have admitted this, giving up on certainty.
I also thought the robot’s answer missed the point quite badly … because it reduced the ought all the way down to an is, or rather a bunch of isses.
If you dismiss any reduction of ought to is, you are essentially dogmatically certain that Hume’s guillotine is true. Is that your stance?
If what one ought to do reduces to what one would do
Not to what one would. Your ethical module may not be directly connected to the behavioral one, so your decisions are based on other considerations, like desires unrelated to ethics. This doesn’t change the fact that what you ought to do is the output (or a certain generalization of multiple outputs) of the ethical module, which is a computation taking place in the real world and which can be observed.
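To make the module distinction concrete, here is a toy sketch (entirely my own illustration; the action names, scores, and weights are invented) of how the “ought” output and the “would” output can come apart:

```python
# Toy illustration only: the action names, scores, and weights are all
# made up. The point is just that the "ethical module" and the
# "behavioral module" can give different verdicts.

ACTIONS = ["keep_promise", "break_promise"]

def ethical_module(action: str) -> float:
    """Hypothetical moral evaluation of an action (higher = more approved)."""
    moral_scores = {"keep_promise": 0.9, "break_promise": 0.1}
    return moral_scores[action]

def behavioral_module(action: str, convenience: float) -> float:
    """Hypothetical all-things-considered score that also weighs
    non-ethical desires (here, sheer convenience)."""
    return 0.4 * ethical_module(action) + 0.6 * convenience

# What one ought to do: read it off the ethical module.
ought = max(ACTIONS, key=ethical_module)

# What one would do: the behavioral module, swayed by convenience.
convenience = {"keep_promise": 0.2, "break_promise": 0.95}
would = max(ACTIONS, key=lambda a: behavioral_module(a, convenience[a]))

print(ought, would)  # keep_promise break_promise -- the two come apart
```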
there are potentially eight billion answers to what one ought to do.
Potentially, but not actually. Once again, when you look, it turns out that people’s individual ethical views are not *that* different. That said, there is still room for disagreement, and how exactly we aggregate individual ethical preferences into morality is still up for debate. But that is the next question, with a somewhat similar direction for a solution.
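For what it’s worth, here is one purely illustrative aggregation rule (simple averaging over invented numbers). It is not *the* answer to the aggregation question, just an example of the kind of further question it poses:

```python
# Purely illustrative: one of many possible aggregation rules (simple
# averaging) over invented individual moral evaluations. Nothing here
# is a settled answer to the aggregation question.

individual_views = {
    "alice":   {"lying": 0.10, "charity": 0.90},
    "bob":     {"lying": 0.20, "charity": 0.80},
    "charlie": {"lying": 0.30, "charity": 0.95},
}

def aggregate(views: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average each action's approval across individuals."""
    actions = next(iter(views.values())).keys()
    return {a: sum(v[a] for v in views.values()) / len(views) for a in actions}

print(aggregate(individual_views))
# {'lying': 0.2, 'charity': 0.883...} -- broad agreement despite individual variation
```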
There’s a consistent theme in rationalist writing on ethics where the idea that everyone has basically the same values, or “brain algorithms”, is just assumed … but it needs to be based on evidence as much as anything else.
Not basically the same, but somewhat similar. And it’s not just assumed, it’s quite observable: human ethical disagreements are mostly about edge cases. What is your objective claim here: that human values are not correlated at all?
Reducing ethical normativity isn’t bad, but doing it in a way that leads to sweeping subjectivism is bad. If you accept subjectivism, you miss better answers.
I think calling it subjectivism is very misleading. The whole subjective/objective duality is quite horrible; I’ll be dedicating a post to it at some point. It’s social constructivism about morality, which is rooted in our other knowledge about game theory and evolution.
In the first comic, the engineer’s answer is also a plausible philosophical answer.
Yes, this is exactly my point. A lot of things that are treated as “applied, missing-the-point” answers are in fact legitimately philosophically potent. At the very least, we should be paying much more attention to them.
In the third comic, the philosopher is technically correct. You can’t achieve certainty from a finite chain of observations, and you can only make a finite chain. Modern empiricists have admitted this, giving up on certainty.
Therefore it’s not just “by looking” but “pretty much by looking”. I completely agree about the necessity of abandoning the notion of certainty. If you want to give philosophers some credit for this, I agree. The irony of the joke stays the same: when the question is refined so that the problematic notion of “certainty” is removed, the naive answer turns out to be basically true.
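A minimal sketch of the “pretty much” part, using a toy Bayesian update with arbitrary likelihoods: finitely many confirming observations drive the probability arbitrarily close to 1 without ever reaching it.

```python
# Toy Bayesian updating with arbitrary likelihoods: each confirming
# observation pushes the probability toward 1, but a finite number of
# observations never reaches it.

def update(prior: float, p_obs_if_true: float = 0.9, p_obs_if_false: float = 0.3) -> float:
    """One Bayes update on a single confirming observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.5
for _ in range(20):          # twenty observations, a finite chain
    belief = update(belief)

print(belief)                # extremely close to 1 after 20 observations
print(belief < 1.0)          # True: "pretty much by looking", never certainty
```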