I’m generally a fan of pursuing this sort of moral realism of the ideals, but I want to point out one very hazardous amoral hole in the world that I don’t think it will ever be able to bridge over for us, lest anyone assume otherwise and fall into the hole by being lax and building unaligned AGI because they think it will be kinder than it will be. (I don’t say this lightly: confidently assuming kindness that we won’t get, out of overextended faith in moral realism, and thus adopting catastrophically bad alignment strategies, is a pattern I see shockingly often in abstract thinkers I have known. It’s a very real thing.)
There seems to be a rule that inevitable power differentials actually have to be allowed to play out.
Interestingly, it only seems to apply to inevitable power differentials; it doesn’t seem to apply to situations where power differentials emerge by happenstance (for instance, differentials favoring whichever tribe unknowingly took up residence on copper-rich geographies before anyone knew about smelting). In those situations, FDT agents might choose to essentially redistribute: to consummate an old insurance policy against ending up on the bad end of colonization, to swap land, to send metal tools, to generally treat the less fortunate tribes equitably, to share their power. They certainly will if their utility function gives diminishing returns to power, and wealth in humans often seems to work that way, maybe relating to gains from trade or something. (When the utility function gives increasing returns, on the other hand… well, let’s not talk about that.)
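To make the diminishing-returns point concrete, here is a minimal sketch with made-up numbers (the log utility and the payoffs are illustrative assumptions, nothing more): with a concave utility over power, committing ex ante to share beats letting the copper lottery play out.

```python
import math

# Toy model of the insurance policy: two tribes face a 50/50 lottery over
# who ends up on the copper-rich land. Numbers and the log utility are
# illustrative assumptions only.

def u(power):
    # Concave utility: diminishing returns to power.
    return math.log(power)

rich, poor = 100.0, 10.0      # payoffs if the differential just plays out
shared = (rich + poor) / 2    # payoff to each if both commit to redistribute

# Expected utility, evaluated before anyone knows who gets the copper:
no_insurance = 0.5 * u(rich) + 0.5 * u(poor)   # ~3.45
with_insurance = u(shared)                     # ~4.01

print(no_insurance < with_insurance)  # True: under concave u, the insurance policy wins
```

With a convex (increasing-returns) utility the inequality flips, which is the other case mentioned above.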
But the insurance policy can’t apply in every situation. Consider: it seems obviously wrong to extend moral equity to, for example, a hypothetical or fictional species that can’t possibly emerge naturally, which you’d then have to abiogenerate. And this seems to apply to descendants of non-fictional extinct species too. You have to accept that a species that evolved to strongly select itself for, say, over-exploiting its environment to irrecoverable degrees, and starved, may once have existed, but its descendants don’t exist now and couldn’t have, so you don’t owe them anything now.
It’s obvious with chosen differentials (true neartermists, for instance, choose to continuously sell their power, because power over the future is less valuable to them than flourishing in the present). But I don’t really know how to draw the line as crisply as we need it to be drawn, between accidental and inevitable differentials. I’ll keep thinking about it.