Logic simply preserves truth. You can arrive at a valid conclusion that one should act altruistically if you start from some specific premises, and can’t if you start from some other premises.
I guess most arguments would need to start from Cogito, ergo sum to make much sense, and you couldn’t do much of anything without accepting that our observations of the world exist. But is there a set of premises that is generally accepted that can determine what one’s actions should be without stating them outright?
By “should” I mean any currently accepted model that you can derive altruism from, of which the only one I know of so far is evolution, or something that stems from evolution.
What are the premises you start from?
What is this “should” thingy you are talking about? Do you by chance have some definition of “shouldness” or are you open to suggestions?
Yes, I’m open to any framework that describes altruism in a way other than an evolutionary process.
I don’t think you got the question.
You see, if we define “shouldness” as optimization of human values, then it does indeed logically follow that people should act altruistically:
People should do what they should
Should = Optimization of human values
People should do what optimizes human values
Altruism ∈ Human Values
People should do altruism
Is it what you were looking for?
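For what it’s worth, the syllogism above can be made fully explicit. Here is a hypothetical formalization in Lean 4; all the names (`Should`, `OptimizesHumanValues`, `altruism`) are illustrative stand-ins, and the two premises are taken as axioms rather than argued for:

```lean
-- Hypothetical formalization of the syllogism; names are illustrative.
axiom Action : Type
axiom Should : Action → Prop                  -- "people should do a"
axiom OptimizesHumanValues : Action → Prop

-- Premise: Should = optimization of human values
axiom should_def : ∀ a : Action, Should a ↔ OptimizesHumanValues a

-- Premise: Altruism ∈ Human Values (i.e., altruism optimizes human values)
axiom altruism : Action
axiom altruism_optimizes : OptimizesHumanValues altruism

-- Conclusion: people should do altruism
theorem should_altruism : Should altruism :=
  (should_def altruism).mpr altruism_optimizes
```

The point the formalization makes visible is that all the normative force lives in the two axioms; the logic itself, as said above, only preserves truth from premises to conclusion.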