I think there are multiple moral worldviews that are rational and based on some values. Likely the whole continuum.
The thing is that we have values that conflict in edge cases, and those conflicts need to be taken into account and resolved when building a worldview as a whole. You can resolve them in many ways. Some resolutions might be simple, like “always prefer X”; some might be more complex, like “under such-and-such circumstances or preconditions prefer X over Y, under some other preconditions prefer Z over Y, under some other …”. The resolution might also be threshold-based, where you try to measure the levels of the things at stake and weigh them mathematically or quasi-mathematically.
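To make the contrast concrete, here is a rough, purely illustrative sketch of those three styles of resolution; the value names, weights, and thresholds are all made up:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    honesty_at_stake: float   # how strongly value X (say, honesty) is affected, 0..1
    kindness_at_stake: float  # how strongly value Y (say, kindness) is affected, 0..1

def blanket_rule(s: Situation) -> str:
    # "Always prefer X": honesty wins no matter what.
    return "X"

def conditional_rule(s: Situation) -> str:
    # "Under such-and-such preconditions prefer X over Y, otherwise prefer Y."
    return "X" if s.honesty_at_stake > 0.5 else "Y"

def weighted_rule(s: Situation, w_x: float = 0.6, w_y: float = 0.4) -> str:
    # Quasi-mathematical weighing: compare the weighted stakes.
    return "X" if w_x * s.honesty_at_stake >= w_y * s.kindness_at_stake else "Y"

print(conditional_rule(Situation(0.2, 0.9)))  # -> "Y"
print(weighted_rule(Situation(0.8, 0.9)))     # -> "X"
```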
At the most basic level, it is about how you weigh the values against each other (which is often hard, as we often do not have good measures), and also about how important it is to you to be right and exact versus being more efficient and quick, sparing more of your mental energy, capacity, or time for things other than devising an exact worldview.
If your values are not simple (which is often the case for humans) and often collide with each other, complex worldviews have the advantage of coming closer to applying your values consistently across different situations. On the other hand, simple worldviews have the advantage of being easy and fast to follow, and they are technically internally consistent, even if they do not always feel right. You don’t need as much thinking beforehand, or on the spot when you need to decide.
Now, you can reasonably prefer some rational middle ground: a worldview that isn’t as simple as basic utilitarianism or ethical egoism, but also not as complex as thinking through every possible moral dilemma and every possible decision to work out how to weigh and apply your values in each of them. It might be threshold-based and/or patchwork-based, where values are built in such a way that different ones carry different weights in different subspaces of the whole space of moral situations. You may actually want to zero out some values in some subspaces, to simplify and to avoid taking in components that are already too small or that would incentivize focusing on unimportant progress.
In practical terms, to give an example: you may be utilitarian across a broad range of circumstances, but in any circumstance where it would take relatively high effort to achieve a very small reduction in total suffering or increase in total happiness, you might zero out that factor and fall back to choosing in accordance with what is better for yourself (ethical egoism).
BTW, I believe this is also a way to devise value systems for AI: have them purposely take a value into account only when the change it makes to the total value function across the available decisions is not too small. If it is very small, the AI should not care; it should not factor in that minuscule change at all. On the meta-level, this too rests on another value: valuing one’s own time and energy so as to have a sensible impact.
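To make that concrete, here is a rough, purely illustrative sketch of the mechanism from the last two paragraphs; the option names, numbers, and the epsilon threshold are made up:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    delta_total_happiness: float  # estimated change in total happiness if chosen
    own_benefit: float            # how good the option is for the agent itself
    effort: float                 # cost to the agent of choosing it

EPSILON = 0.01  # below this spread, the utilitarian component is zeroed out

def choose(options: list[Option]) -> Option:
    # How much the utilitarian component can actually vary across the options.
    spread = (max(o.delta_total_happiness for o in options)
              - min(o.delta_total_happiness for o in options))
    if spread < EPSILON:
        # The stake in total happiness is minuscule: zero that factor out and
        # fall back to what is better for the agent itself (ethical egoism).
        return max(options, key=lambda o: o.own_benefit - o.effort)
    # Otherwise act as a utilitarian, net of the effort involved.
    return max(options, key=lambda o: o.delta_total_happiness - o.effort)

opts = [
    Option("volunteer all weekend", delta_total_happiness=0.004, own_benefit=-0.2, effort=0.5),
    Option("rest at home",          delta_total_happiness=0.000, own_benefit=0.6,  effort=0.0),
]
print(choose(opts).name)  # -> "rest at home": the 0.004 gain is below the threshold
```

The epsilon here plays the role of the meta-level value mentioned above: below it, the agent simply stops spending effort on that component.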
Yes, I know this comment is a bit off-topic from the article. What is important for the topic is that there are people, me included, who hold consequentialist, quasi-utilitarian beliefs, but who don’t see why we would want strict value-maximizing (even if that value is total happiness), or why we would want to replace ourselves with entities that are such maximizers.
Also, I don’t value complexity reduction, so I don’t value systems that maximize happiness and reduce the world to simpler forms, where situations in which other values matter simply don’t arise. On the contrary, I prefer preserving complexity and the world’s capacity to be interesting.