Maybe you’re just jokingly pointing out that there’s an apparent tension in the sentiment, which is fine.
But someone strong-downvoted my above comment, which suggests that at least one person thinks I have said something that is bad or shouldn’t be said?
Is it the inclusion of animal rights (btw, I should have said rights for sentient AIs too)? Or would people react the same way if I pointed out that an interpretation of a democratic process where every person alive at the Singularity gets one planet to themselves if they want it wouldn’t be ideal, because some sadists could choose to create new sentient minds just so they can torture them? I’m just saying, “can we please prevent that?” (Or, maybe, if that were this sadistic person’s genuine greatest wish, could we at least compromise around it somehow, so that the minds only appear to be sentient but aren’t? And maybe, if it’s absolutely necessary, once a year, on the sadist’s birthday, a handful of the minds actually become sentient for a few hours, but only for levels of torment comparable to a strong headache, and nothing anywhere close to mind-breaking torture.)
Liberty is not the only moral dimension that matters at a global scope; there’s also care/harm prevention at the very least. So we shouldn’t be surprised if we get a weird result when we try to optimize “do the most liberty thing” without paying any attention at all to care/harm prevention.
That said, if someone insisted on seeing it that way, I certainly wouldn’t object to people who actually save the lightcone (not that I’m one of them, and not that I think we are currently on track to get much control over outcomes anyway—unfortunately I’m not encouraged by Dario Amodei repeatedly strawmanning opposing arguments) getting some kind of benefit or extra perk out of it if they really want that. If someone brings about a utopia-worthy future via a well-crafted process with democratic spirit, that’s awesome, and for all I care, if they want to add some idiosyncratic thing, like that we should use the color green a lot in the future or whatever, they should get it, because it’s nice of them not to have gone (more) into control mode on everyone else when they had the chance. (Of course, in reality I object to the idea that “let’s respect animal rights” is at all like imposing extra bits of the color green on people. In our current world, not harming animals is quite hard because of the way things are set up and where we get food from, but in a future world, people may not even need food anymore, and if they do still need it, one could create it artificially. But more importantly, it’s not in the spirit of “liberty” to use it to impose on someone else’s freedom.)
Taking a step back, I wonder if people really care about the moral object level here (like, would they actually pay a lot of their precious resources for the difference between my democratic proposal with added safeguards and their own 100% democratic proposal?), or whether this is more about taking down people who seem to have strong moral commitments, maybe out of an inner impulse to take down virtue signallers. Maybe I just don’t empathize enough with people whose moral foundations are very different from mine, but to me, it’s strange to be very invested in the maximal democraticness of a process but then not care much about the prospect of torture of innocents. Why have moral motivation and involvement for one but not the other?
Sure, maybe you could ask: why do you (Lukas) care about only liberty and harm prevention, but not about, say, authority or purity (other moral foundations according to Haidt)? Well, I genuinely think that authority and purity are more “narrow-scope,” more “personal” moral concerns that people can have for themselves and their smaller communities. In a utopia, I would want anyone who cares about these things to have them in their local surroundings, but it would be too imposing to put them on everyone and everything. By contrast, the logic of harm prevention works the other way, because it’s a concern that every moral patient benefits from.
That makes sense. I was never assuming a context where having to bargain for everything is the default, so the coalition doesn’t have to be fair to everyone; it isn’t really a “coalition” at all. Rather, most people would be given stuff for free because the group that builds aligned AI has democracy as one of its values.
Sure, it’s not 100% for free, because there are certain expectations, and the public can put pressure on companies that appear to be planning things that are unilateral and selfish. Legally, I would hope companies are at least bound to the values in their country’s constitution. More importantly, morally, it would be quite bad not to share what you have and try to make things nice for everyone (worldwide), with constraints/safeguards. Still, as I’ve said, I think it would be really strange and irresponsible if someone thought that a group or coalition that brought about a Singularity that actually goes well somehow owes a share of influence to every person on the planet without any vetting or safeguards.