This is something I wrote in my (now defunct) blog a while back. It probably isn’t entirely appropriate as either a comment or a top level post here but I want to share it with you anyway, because I think that ‘value-as-profundity’ as I describe below shares much of the spirit of virtue ethics, but has higher aspirations insofar as it isn’t restricted to consideration of one’s own virtue, or even virtue in general.
About two years ago I had a ‘revelation’: something that’s completely changed the way I think about life, the universe and everything.
This one concerns ethics. I cannot remember whether it was the cause or the effect of my reading of Thus Spoke Zarathustra.
Hitherto I had been some kind of utilitarian: The purest essence of wrongness is causing suffering to a sentient being, and the amount of wrongness increases with the amount of suffering. Something similar is true concerning virtue and happiness, though I realized even then that one has to be very careful in how ‘happiness’ is formulated. After all, we don’t want to end up concluding that synthesizing Huxley’s drug “soma” is humanity’s highest ethical goal. If pressed to refine my concept of happiness, I had two avenues open: (i) Try to prise apart “animal happiness”—a meaningless and capricious flood of neurochemicals—from a higher “rational happiness” which can only be derived from recognition of truth or beauty; or (ii) Retreat to the view that “in any case, morality is just a bunch of intuitions that helped our ancestors to survive. There’s no reason to assume that our moral intuitions are a ‘window’ onto any larger and more fundamental domain of moral truth.”
(Actually, I still regard a weaker version of (ii) as the ‘ultimate truth of the matter’: On the one hand, it’s not hard to believe that in any community of competing intelligent agents, more similar to each other than different, who have evolved by natural selection, moral precepts such as ‘the golden rule’ are almost guaranteed to arise. On the other, it remains the case that the spectrum of ‘ethical dilemmas’ that could reasonably arise in our evolutionary history is narrow, and it is easy for ethicists to devise strange situations that escape its confines. I see no reason at all to expect that the principles by which we evaluate the morality of real-world decisions can be refined and systematised to give verdicts on all possible decisions.
To summarise: There may be an ‘objective moral truth’, but it’s more likely to be narrow and ‘wrinkly’ than it is to be complete and systematic. Any single system of ethics will almost certainly yield the ‘wrong’ verdict in some cases.)
My revelation came out of a reflection on the nature of tourism, and the life-destroying shallowness of having, as one’s highest aspirations, the desire to go and look at things—e.g. to go “swimming with dolphins”, travel into space, climb Everest, do a bungee jump, and so on.
The world is creaking under the footsteps, tyres and jet engines of people going sight-seeing. They’re going to the most beautiful places in the world, being awed and amazed by what they see, then going home again having contributed nothing but a few more puffs of carbon dioxide.
I saw that the world isn’t here for our amusement. Utilitarianism is blind to the value of the waterfall that no-one has yet discovered.
A giraffe stands majestic among the trees. It is a thing of beauty and fascination, in its remarkable anatomy, its complex behaviour and its evolutionary history.
Now the universe contains rational beings, and when one of them first stumbles upon the giraffe, it stops to take in the beauty of what it sees. The giraffe inspires in it a curiosity to understand how such a thing might have come to be.
Where in this picture is the moral value? Utilitarianism would place it in the person’s feeling of joy upon seeing the giraffe, and thus the value of the giraffe is proportional to the number of its spectators. On the other hand, I would put the value principally in the giraffe itself, irrespective of who observes it. However, this isn’t the whole story because the watcher of the giraffe is herself a wondrous animal, and seeing the giraffe may make the watcher just a little bit more wondrous: She may learn something from it.
Generalizing from this: I believe moral value is inherent in those systems and entities that we describe as ‘fascinating’, ‘richly structured’ and ‘beautiful’. A snappy way of characterising this view is “value-as-profundity”. On the other hand, I regard pain and pleasure as having no value at all in themselves.
In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful—their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features, closing off its possibilities. Note that feelings of joy usually accompany activities I classify as ‘good’ (e.g. learning, teaching, creating things, improving fitness) and conversely, pain and suffering tend to accompany damage and degradation. However, in those situations where value-as-profundity diverges from utilitarian value, notice that our moral intuitions tend to favour the former. For instance:
Drug abuse: Taking drugs such as heroin produces feelings of euphoria but only at the cost of degrading and constraining our future behaviour, and damaging our bodies. It is the erosion of profundity that makes heroin abuse wrong, not the withdrawal symptoms, or the fact that the addict’s behaviour tends to make others in his community less happy. Both of the latter are incidental: we can hypothetically imagine a drug with no withdrawal symptoms, taken by an addict who is all alone in a post-apocalyptic world, and we are still dismayed by the degradation of behaviour that drug addiction produces (just as we would be dismayed by a giraffe with brain damage, irrespective of whether the giraffe felt happy).
The truth hurts: We accept that there are situations where the best way to help someone is to criticise them in a way that we know they will find upsetting. We do this because we want our friend to grow into a better (more profound) version of herself, which cannot happen until she sees her flaws as flaws rather than lovable idiosyncrasies. On the utilitarian view, the rightness of this harsh criticism cannot be accounted for except in respect of its remote consequences—the greater happiness of our improved friend and of those with whom she interacts—yet there is no necessary reason why the end result of successful self-improvement must be increased happiness, and if it is not, the initial upset forces the utilitarian to say that our actions were immoral. Surely it is preferable for our ethical theory to place value in the improvements themselves rather than in their contingent psychological effects.
Nature red in tooth and claw: Consider the long and eventful story of life on earth. Consider that before the arrival of humankind, almost all animals spent almost all of their lives perched on the edge, struggling against starvation, predators and disease. In a state of nature, suffering is far more prevalent than happiness. Yet suppose we were given a planet like the young earth, and that we knew life could evolve there with a degree of richness comparable to our own, but that the probability of technological, language-using creatures like us evolving is very remote. Sadly, this planet lies in a solar system on a collision course with a black hole, and may be swallowed up before life even appears. Suppose it is within our power to ‘deflect’ the solar system away from the black hole—should we do so? On the utilitarian view, to save the planet would be to bring a vast amount of unnecessary suffering into being, and (almost certainly) a relatively tiny quantity of joy. However, saving the planet increases the profundity and beauty of the universe, and this is surely in line with our ethical intuitions. N.B. Here I am directly contradicting Greg Egan in his answer to question six in the Dust Theory FAQ.