This idea (utilitarianism) is old, and fraught with problems. Firstly, there is the question of what the correct thing to optimize really is. Should one optimize total happiness or average happiness? Or would it make more sense, for example, to maximize the happiness of the most unhappy person in the population: a max-min problem, i.e. a “worst case” optimization procedure? (Note that this is, in essence, the difference between prioritizing “human rights” and “total happiness”, which do not always go hand in hand.) And even once these three objectives are on the table, there is a whole spectrum of weighted optimization problems sitting between the worst case and the average case, as the sketch below illustrates. Who chooses which one is best and most fair? Is everybody’s happiness weighted equally? Or are some people more deserving of happiness than others? Does everybody have an equal capacity for happiness? Does a larger population equate to more happiness in total? How does time factor into the equation? Do you maximize happiness now? Or do you put effort into building a perfect society now, for the sake of greater happiness to come?
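To make that spectrum concrete, here is a minimal sketch in Python. It is purely my own illustration (the name social_welfare, the parameter lam, and the toy numbers are all made up): a convex combination is one simple way to interpolate between the worst-case and average-case objectives, and the interpolation parameter is exactly the kind of value judgment the formalism cannot make for you.

```python
import statistics

def social_welfare(utilities, lam):
    # lam = 1.0 recovers pure maximin (the "worst case" objective);
    # lam = 0.0 recovers the plain average; values in between give
    # the weighted spectrum mentioned above. The choice of lam is
    # itself the unresolved moral question.
    worst = min(utilities)
    avg = statistics.mean(utilities)
    return lam * worst + (1 - lam) * avg

population = [3.0, 5.0, 9.0]           # toy happiness scores
print(sum(population))                 # total happiness: 17.0
print(social_welfare(population, 0.0)) # average case: ~5.67
print(social_welfare(population, 1.0)) # worst case (maximin): 3.0
print(social_welfare(population, 0.5)) # one point on the spectrum: ~4.33
```

Note that each objective can rank the same two societies differently, so “just optimize happiness” underdetermines the answer before a single measurement is made.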
Not to mention the obvious problem of utility itself. Let’s be charitable and assume that utility means something and can be measured, which is already a leap of faith. But then ask yourself: why assume utility is one-dimensional? And if utility were many-dimensional, how would one trade off its different dimensions? Is it more important to minimize suffering than to increase happiness, and are the two really numerical values lying on the same scale? And what if we found a pleasure center in the brain which produces “utility”? Would it be better for us to discard our corporeal bodies, and all the rest of these silly and irrational “goals”, “dreams” and “aspirations”, in favor of forever stimulating that part of the brain for a little more meaningless satisfaction?
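For illustration only, suppose utility really were a vector. The standard move is some scalarization, say a hypothetical linear weighting (the function scalarize and the numbers below are my invention, not anybody’s actual theory), and the verdict flips with the weights:

```python
def scalarize(utility_vector, weights):
    # Hypothetical linear scalarization: collapse a many-dimensional
    # utility into a single number. The weights are not given by the
    # theory; they are a moral choice smuggled in from outside.
    return sum(w * u for w, u in zip(weights, utility_vector))

# The same state of the world, scored as (happiness, suffering):
state = [4.0, -6.0]
print(scalarize(state, [1.0, 1.0]))  # happiness and suffering on one scale: -2.0
print(scalarize(state, [1.0, 2.0]))  # suffering weighted double: -8.0
```

Whether that state is net good or catastrophically bad depends entirely on weights the theory itself never supplies.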
But what I really want to get at, and here I start to get preachy, is that existential meaning is not the same as happiness. The human condition has the capacity to be deeply satisfied in suffering, for example, or to feel deeply dissatisfied when the world appears optimal. And there are deeply embedded ideas which feel right, such as “fairness” (that everybody should be treated equally) or “justice” (that each good deed should have its reward, and each bad deed its punishment, not to be confused with deterrence), yet they do not seem to have a place in a blind optimizer such as this. Not to mention the countless other things we don’t even understand: love, anger, hatred, wrath, aesthetic preference. What place do these have in a utilitarian society? And sure, you could be an ardent behaviorist who thinks ideas like “justice” and “meaning” are just silly superstitious constructs, better discarded alongside such antiquated concepts as “emotions” and “morality”, but I would like to persuade you that there is something more to life than just maximizing happiness.
You correctly point out problems with classical utilitarianism; nonetheless, downvoted for equating utilitarianism in general with classical utilitarianism in particular, and for being irrelevant to the comment it was replying to. And a few other things.