Some thoughts on reconciling physical determinism with morality —
The brains of agents are where those agents’ actions are calculated. Although agents are physically determined, their decision processes can be arbitrarily computationally complex, so there is no general shortcut to predict their actions with physics-level accuracy. If you want to predict what agent Alice does in situation X, you have to actually put Alice in situation X and observe. (This differentiates agents from things like billiard balls, which are computationally tractable and can be predicted using simple physics equations.)
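To make the contrast concrete, here is a toy Python sketch. The Collatz-style loop and the cooperate/defect output are stand-ins invented purely for illustration, not a model of any real brain; the point is only that one system has a closed-form shortcut and the other, as far as anyone knows, must actually be run:

```python
# Billiard ball: computationally tractable. Its position at any time t
# falls out of a closed-form equation; no step-by-step simulation needed.
def billiard_position(x0: float, v: float, t: float) -> float:
    return x0 + v * t  # constant velocity on a frictionless table

# Toy "agent": its decision is the output of an arbitrary computation.
# No closed form is known; to learn what it does with input x,
# you have to actually run it on x and watch.
def toy_agent_decision(x: int) -> str:
    n, steps = x, 0
    while n != 1:  # Collatz-style iteration: no known shortcut
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return "cooperate" if steps % 2 == 0 else "defect"

print(billiard_position(0.0, 2.0, t=10.0))  # 20.0, computed instantly
print(toy_agent_decision(27))               # only knowable by running it
```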
And yet, one input to an agent’s decision process is its prediction of other agents’ responses to the actions the agent is considering. Since agents are hard to predict, a lot of computation has been spent on doing this! And although Alice cannot in general and with physics-level accuracy predict Bob’s responses to her actions, there are a lot of common regularities in the pattern of agents’ responses to other agents’ actions.
Some of these regularities have to do with things like “this agent supports or opposes that agent’s actions” or “these agents join together to support or oppose that agent’s actions” or “this agent alters the incentive structure under which another agent decides its actions” or “this group of agents are cooperating on achieving a common goal” or “this agent aims to stop that agent from existing, while that agent aims to keep existing” and other relatively compactly-describable sorts of things.
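A minimal sketch of one such regularity, the incentive-altering one. The payoff numbers are made up, and `alice_choice` is a hypothetical stand-in for a far messier decision process; what matters is the shape of the pattern:

```python
def alice_choice(payoffs: dict[str, float]) -> str:
    # Alice picks whichever action pays best under the current incentives.
    return max(payoffs, key=payoffs.get)

payoffs = {"steal": 5.0, "trade": 3.0}
print(alice_choice(payoffs))   # -> "steal"

# Bob cannot simulate Alice atom-by-atom, but he can reshape her
# incentives and predict her response at this higher level:
payoffs["steal"] -= 10.0       # Bob attaches a penalty to stealing
print(alice_choice(payoffs))   # -> "trade"
```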
Even though “Alice wants to live” is not a physics-level description of Alice, it is still useful for predicting Alice’s actions at a more abstract level. Alice is not made of wanting-to-live particles, but Alice reliably refrains from jumping off cliffs or picking fights with tigers; instead she cooperates with other agents towards common goals of supporting one another’s continued living, and so on.
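Here is a hedged sketch of that predictive use. The survival odds below are invented numbers, but they show how a one-line abstraction ("assume Alice acts to keep living") predicts her choice without any particle-level description of her:

```python
# Hypothetical survival probabilities -- illustrative numbers only.
SURVIVAL_ODDS = {
    "jump off a cliff": 0.01,
    "pick a fight with a tiger": 0.05,
    "trade food with Bob": 0.99,
    "help build a shelter": 0.98,
}

def predict_alice(options: list[str]) -> str:
    # The whole abstraction: assume Alice acts to keep living.
    return max(options, key=lambda a: SURVIVAL_ODDS[a])

print(predict_alice(["jump off a cliff", "trade food with Bob"]))
# -> "trade food with Bob": a non-physics model that still predicts well
```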
And things like morality make sense at that level, describing regularities in inter-agent behavior at a much higher level than physical determinism, much as an operating system’s scheduler operates at a much higher level than logic gates.
Things like morality, such as economics, describe behaviour. Morality itself, however, is normative.
It should not come as a surprise that reductionism doesn’t require you to abandon all high-level concepts.