Since reading Wei Dai’s comment, I’ve been thinking about what my coordination strategy implicitly is for various radically different types of life, for example:
modern-day LLMs
modern-day LLM agent scaffolds
6-month-from-now LLM agent scaffolds
grass
trees
redwoods
ants
tigers
chimpanzees
chickens
pigs
dogs
cats
And to what degree I really expect the multiverse to have norms that stop massively powerful things from just steamrolling dramatically less powerful things. Coordination norms make obvious sense between things that are within a few orders of magnitude of each other in power, and, maybe, that can communicate or meaningfully change strategy in response to each other.
I find the “Boundaries are Schelling” argument somewhat compelling, along with “you should respect the boundaries of things at least somewhat less agentic and somewhat more agentic than you”, but I’d need a more actively compelling reason to think it applied to grass.
(In the acausal multiverse, it seems like there’s some filter of “you have to be able to model the acausal economy in order to show up at the table in the first place.”)
My answer to Wei Dai’s “I make and kill and [steal?] from many AIs per day, how are those supposed to fit into this schema” is “well, for this particular definition of ‘steal’, this doesn’t really make it unpredictably costly for them to defend their resource boundaries because they don’t have resource boundaries or resource rights at all atm”. But there are nearby worlds where that’s a harder question.
“well, for this particular definition of ‘steal’, this doesn’t really make it unpredictably costly for them to defend their resource boundaries because they don’t have resource boundaries or resource rights at all atm”.
Yes, this is an important point which I didn’t get into that deeply in my reply.
I’d need a more actively compelling reason to think it applied to grass.
Question: By “applied to grass”, do you mean “applied at all to grass”, or “applied as much to grass as to humans”, or something else?
I’m asking because I agree boundary protection norms apply less strongly to protecting grass than to protecting humans (both terrestrially and cosmically), but conflating small numbers with zero is quite fraught in terms of the moral implications as things scale up. Even saying “I round the value of grass to zero for attentional cost reasons” is different from saying “there is literally zero moral value in protecting grass”, and they take about the same number of words.
Mmm. That is reasonably compelling to me.
Even saying “I round the value of grass to zero for attentional cost reasons” is different from saying “there is literally zero moral value in protecting grass”, and they take about the same number of words.
Nice. Apropos, I’ve found words like “almost” or “approximately” are useful for saying something has relatively low moral worth without the fraught implication that the worth is literally zero. (Equalling precisely zero is a rare event with strong logical consequences.)
E.g.:
“grass has almost no moral value”, versus just “grass has no moral value”
“grass boundaries are nearly worthless”, versus just “grass boundaries are worthless”.
My sense is that people don’t acknowledge these caveats out of fear that someone will try to force them to debate the magnitude of the near-zero value of something like grass. I think the key is to feel/be secure and ready to say, if they try to force a debate, “Sorry, I don’t want to debate the magnitude; it’s positive and near zero,” and then just move on from the topic.
The problem with “grass has almost no moral value” is not that you then need to argue about the magnitude of that moral value. It’s that, no matter what that magnitude is, there will be some point where enough grass becomes the most important thing morally. If you want to believe that a human always outweighs grass morally, and you believe that moral comparisons are even possible, you must believe that grass has zero moral value, or your beliefs contradict themselves.
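To make the arithmetic behind this explicit (a sketch; the symbols $\varepsilon$, $H$, and $v$ are mine, not anything defined in the thread): suppose each unit of grass has moral value $\varepsilon > 0$, a human has moral value $H$, and value aggregates linearly across units. Then

$$v(n) = n\varepsilon, \qquad \text{so } v(n) > H \text{ whenever } n > H/\varepsilon.$$

Some finite quantity of grass outweighs the human, and under linear aggregation the only way to block this conclusion is to set $\varepsilon = 0$.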
You seem to be assuming the value of grass aggregates without bound as a function of the amount of grass. Why wouldn’t there be diminishing marginal value to grass, as the amount of grass increased?
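For concreteness (a sketch; this particular bounded function is an illustrative assumption, not something proposed in the thread): an aggregation rule such as

$$v(n) = B\left(1 - e^{-n/k}\right), \qquad 0 < B < H,$$

gives positive but diminishing marginal value as the amount of grass $n$ grows, and satisfies $v(n) < B < H$ for every $n$, so no quantity of grass would ever outweigh a human.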
Because that’s a variable value principle: the same move has also been proposed as a solution to the Repugnant Conclusion, and it doesn’t work.