Anyone have a logical solution to exactly why we should act altruistically? I know it makes sense evolutionarily through game theory and statistics, but human decision making is still controlled by emotions, and it’s still most advantageous for an individual actor to follow their own self-interest to a degree in a social community. I know how altruistic actors develop, but not why unconstrained intelligences should choose to do so.
Logic simply preserves truth. You can arrive at a valid conclusion that one should act altruistically if you start from some specific premises, and can’t if you start from some other premises.
What are the premises you start from?
I guess most arguments would need to start from Cogito, ergo sum to make much sense, and you couldn’t do much of anything without accepting that our observations of the world exist. But is there a generally accepted set of premises that can determine what one’s actions should be without stating those actions outright?
What is this “should” thingy you are talking about? Do you by chance have some definition of “shouldness” or are you open to suggestions?
Yes, I’m open to any framework that describes altruism as something other than an evolutionary process.
I don’t think you got the question.
You see, if we define “shouldness” as the optimization of human values, then it does indeed logically follow that people should act altruistically:
People should do what they should
Should = Optimization of human values
People should do what optimizes human values
Altruism ∈ Human Values
People should do altruism
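If you want that spelled out mechanically, here is a toy Lean sketch; every name in it is stipulated for illustration, which is the whole point: the conclusion is bought entirely by the definitional premises.

```lean
-- Toy formalization of the syllogism above.
-- Every declaration here is a stipulated assumption, not a standard definition.
axiom Action : Type
axiom altruism : Action
axiom OptimizesHumanValues : Action → Prop
axiom Should : Action → Prop

-- Premise: “should” just means “optimizes human values”.
axiom should_def : ∀ a : Action, Should a ↔ OptimizesHumanValues a

-- Premise: altruism ∈ human values, i.e. doing it optimizes them.
axiom altruism_in_values : OptimizesHumanValues altruism

-- Conclusion: people should do altruism.
theorem should_do_altruism : Should altruism :=
  (should_def altruism).mpr altruism_in_values
```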
Is that what you were looking for?
By “should” I mean any currently accepted model that you can derive altruism from; the only one I know of so far is evolution, or something that stems from it.
Anyone have a logical solution to exactly why we should act altruistically?

“Logical … should” sounds like a type error, setting things up for a contradiction. While there are adherents of moral naturalism, I doubt there are many moral naturalists around here. Even given moral naturalism, I believe it would still be true that any amount of intelligence can coexist with any goals. So no, there is no reason why unconstrained intelligences should be altruistic, or even be the sort of thing that “altruism” could meaningfully be asserted or denied of them.
I know it makes sense evolutionarily through game theory and statistics, but human decision making is still controlled by emotions

...which came about through evolution, so what work is the “but” doing? The urge to do good for others is what the game theory feels like from inside.
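For concreteness, here is a minimal sketch of that game-theoretic intuition: an Axelrod-style round-robin of the iterated prisoner’s dilemma. The payoff numbers and the strategy roster are illustrative assumptions, but the familiar pattern falls out: once enough of the pool conditions cooperation on past behavior, unconditional defection stops paying.

```python
# Minimal Axelrod-style round-robin of the iterated prisoner's dilemma.
# Payoff values and the strategy roster are illustrative assumptions.
PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_cooperate(me, them):
    return "C"

def always_defect(me, them):
    return "D"

def tit_for_tat(me, them):
    # Cooperate first, then mirror the opponent's previous move.
    return them[-1] if them else "C"

def grim_trigger(me, them):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in them else "C"

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a  # strat_a's total across all rounds

strategies = {
    "always_cooperate": always_cooperate,
    "always_defect": always_defect,
    "tit_for_tat": tit_for_tat,
    "grim_trigger": grim_trigger,
}
totals = {
    name: sum(play_match(strat, other) for other in strategies.values())
    for name, strat in strategies.items()
}
print(totals)
# The reciprocators (tit_for_tat, grim_trigger) outscore always_defect
# here, even though defection wins any single encounter with a cooperator.
```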
it’s still most advantageous for an individual actor to follow their own self-interest to a degree in a social community.

Each knows their own needs and desires better than anyone else, so it’s primarily up to each person to ensure their own are fulfilled. Ensuring this often involves working with others. We do things for each other that we may individually prosper.
So, what type of altruism are you asking about? I expect Peter Singer would dismiss reciprocal altruism as weak sauce, a pale and perverted imitation of what he preaches. The EA variety inspired by Singer? Utilitarianism that values everyone equally with oneself, and feels another’s pain as intensely as one’s own? Saintliness that values everyone else above a self that counts for nothing? There’s a long spectrum there, with people inhabiting all parts of it.
My confusion about this subject is that, without moral naturalism, moral philosophy can seemingly be derived from a psychological or sociological basis, which strikes me as a much better model for producing results than philosophical argument.
No, nobody has a logical solution to that (though there have been many claimed solutions), and almost certainly none is possible.
Self-interest is often aligned with expanding your self-boundaries to include others.