There is plenty wrong with the nature of existence from a human or a humane perspective. The focus on society, or other people, is partly because so much of human existence is now spent interacting with other human beings (or even with fictions and media created by human beings), and inhabiting environments and circumstances created and managed by human beings, and also because society collectively wields powers which could in principle relieve so much of what any given individual suffers.
But as you say, the existence and nature of humans derive from the nonhuman; and the nonhuman also directly forces itself upon the human in many ways, from natural catastrophe—I think of the recent earthquake in Morocco—to numerous individual causes of death.
Across the Mediterranean from Morocco, there was an earlier famous earthquake: the 1755 Lisbon earthquake. That earthquake played a role in the history of your question; it prompted Voltaire’s satirical attack on Leibniz, who had expounded the philosophy that this is “the best of all possible worlds”.
But it’s worth understanding what Leibniz was on about. For Leibniz, the question arose in the form of a perennial problem of theology, the “problem of evil”. In the modern intellectual milieu, atheism is more common than not, and the debate is more likely to be about whether life is good, not whether God is good. However, in the era before Darwin, it was mostly taken for granted that there must be a First Cause, a supernatural being with agency and choice, which people wanted to regard as good, and so there was anguish and fear about how to view that being’s apparent responsibility for the evil in the world.
“Theodicy” is the word that Leibniz coined, for a philosophy which tries to resolve the problem of evil in this context. (I thank T.L. for many discussions of the problem from this perspective.) Wikipedia says:
Leibniz distinguishes three forms of evil: moral, physical, and metaphysical. Moral evil is sin, physical evil is pain, and metaphysical evil is limitation. God permits moral and physical evil for the sake of greater goods, and metaphysical evil (i.e., limitation) is unavoidable since any created universe must necessarily fall short of God’s absolute perfection.
I think this taxonomy of forms of evil is useful; and the concept that this is the best of all possible worlds, while not one that I endorse, is also useful to know about—since “possible worlds” (another idea essentially deriving from Leibniz) is so much a part of the current discussion. Many replies to your question are framed in terms of whether the nature of the universe could have been different, or was likely to be different. Even in the absence of a notion of God, the idea that this is already as good as it gets continues to play a role in this naturalistic theodicy.
One part of the naturalistic theodical debate is about whether it makes logical sense to blame the universe for anything. But another part turns the discussion back on human psychology, and makes it into a debate about the attitude that one should have to life. Here, something from Adrian Berry’s futurist book The Next Ten Thousand Years stuck with me: an opening passage contrasting the philosophies of Seneca and Francis Bacon. Seneca here stands for stoicism, Bacon for solving problems through invention. Seneca is described as treating all forms of suffering as an opportunity to develop a tougher, nobler character, whereas Bacon goes about making life better through medicine, civil engineering, and so forth.
This Seneca-vs-Bacon contrast is especially consequential now, in the age of transhumanism and AI, when one can think about curing the ageing process itself, or otherwise transforming the human condition in any number of ways, and ultimately even transforming the universe itself. Incidentally, stoicism is not the only “un-Baconian” existential response—despair, decadent hedonism, and humility are among the other possibilities. The point is that in an age of transhuman technologies, the problem of evil becomes an instrumental problem rather than just a philosophical one. It’s not just “why is the world like this?”, but also “can we make it otherwise, and which alternative should we choose?”
Though if the truly blackpilled AI doomers are correct, and AI is both beyond control (“alignment”) and beyond stopping, then the era of humanism and transhumanism, the brief Baconian window of time in which it became possible to remake the world in human-friendly fashion, is already passing, and we are once again in the grip of titanic forces beyond human control or understanding.
Hello Mitchell_Porter,
Thanks for the contrast and the history behind this issue. To transcend suffering, or to work around it… I might take a look at that, to see if they had a fruitful conversation about it.
Hm, it is of course possible to argue that relinquishing control could somehow benefit the greater whole—but how would you strike a balance between the optimism of transhumanism and the gloom of the AI doomers?
The optimism about AI capabilities might not be overestimated, but why the focus on creating a beyond-human technological “solution” to a human problem? Can’t we just deal with our own shit, and once we have eventually figured out what to do, then look at these issues? Dabbling in this in the current societal and human climate seems like an extreme option, similar to nuclear weapons, only possibly much worse… maybe that is a view that blackpilled AI doomers hold?
There seems to be quite a gap between these two stances, and I wonder what the disagreement essentially comes down to. Do you know?
Kindly,
Caerulea-Lawrence