Thank you for reposting this here.
My personal opinion: this text is crazy. So many words about the risk of building a “country of geniuses”, but he never once questions the assumption that it should be built by a company for commercial purposes (with him as CEO, of course). Never once mentions the option of building this thing publicly owned and under democratic control.
Do you feel good about current democratic institutions in the US making wise choices, or confident they will make wiser choices than Dario Amodei?
No, and even if the US was in better shape, I wouldn’t want one country to control AI. Ideally I’d want ownership and control of AI to be spread among all people everywhere, somehow.
Giving everyone a say could lead to some terrible things because there are a lot of messed up people and messed up ideologies. At a minimum, there should be some safeguards imposed from the top down. For instance, “give everyone a say, but only if their say complies with human and animal rights.” Someone has to make sure those safeguards are in there, so the vision cannot be 100% spread out to everyone.
Still, this is very far from the vision in the essay, which is “AI should be run by for-profit megacorps like mine and I can’t even imagine questioning that”.
What would you do and say if you were in Amodei’s position?
I wouldn’t be in his position. I wouldn’t have made promises to investors that now make de-commercializing AI an impossible path for him.
“Give everyone a say, but make sure my sacred values are given extra say <3”
Maybe you’re just jokingly pointing out that there’s an apparent tension in the sentiment, which is fine.
But someone strong-downvoted my above comment, which suggests that at least one person thinks I have said something that is bad or shouldn’t be said?
Is it the inclusion of animal rights (btw I should have said rights for sentient AIs too)? Or would people react the same way if I pointed out that an interpretation of a democratic process where every person alive at the Singularity gets one planet to themselves if they want it wouldn’t be ideal if it means that some sadists could choose to create new sentient minds just so they can torture them? I’m just saying, “can we please prevent that?” (Or, maybe, if that were this sadistic person’s genuine greatest wish, could we at least compromise around it somehow, so that the minds only appear to be sentient but aren’t, and maybe, if it’s absolutely necessary, once every year, on the sadist’s birthday, a handful of the minds actually become sentient for a few hours, but only for levels of torment that are like a strong headache, and not anything anywhere close to mind-breaking torture?)
Liberty is not the only moral dimension that matters with a global scope; there’s also care/harm prevention at the very least. So we shouldn’t be surprised if we get a weird result when we try to optimize “do the most liberty thing” without paying any attention at all to care/harm prevention.
That said, if someone insisted on seeing it that way, I certainly wouldn’t object to people who actually save the lightcone (not that I’m one of them, and not that I think we are currently on track to get much control over outcomes anyway—unfortunately I’m not encouraged by Dario Amodei repeatedly strawmanning opposing arguments) getting some kind of benefit or extra perk out of it if they really want that. If someone brings about a utopia-worthy future with a well-crafted process in a democratic spirit, that’s awesome, and for all I care, if they want to add some idiosyncratic thing, like that the future should use the color green a lot or whatever, they should get it, because it’s nice of them to not have gone (more) control mode on everyone else when they had the chance. (Of course, in reality I object to the idea that “let’s respect animal rights” is at all like imposing extra bits of the color green on people. In our current world, not harming animals is quite hard because of the way things are set up and where we get food from, but in a future world, people may not even need food anymore, and if they do still need it, one could create it artificially. But more importantly, it’s not in the spirit of “liberty” to use liberty to impose on someone else’s freedom.)
Taking a step back, I wonder if people really care about the moral object level here (like, would they actually pay a lot of their precious resources for the difference between my democratic proposal with added safeguards and their own 100% democratic proposal?), or whether this is more about taking down people who seem to have strong moral commitments, maybe because of an inner impulse to take down virtue signallers? Maybe I just don’t empathize enough with people whose moral foundations are very different from mine, but to me, it’s strange to be very invested in the maximal democraticness of a process, but then care not much about the prospect of torture of innocents. Why have moral motivation and involvement for one but not the other?
Sure, maybe you could ask, why do you (Lukas) care about only liberty and harm prevention, but not about, say, authority or purity (other moral foundations according to Haidt)? Well, I genuinely think that authority and purity are more “narrow-scope” and more “personal” moral concerns that people can have for themselves and their smaller communities. In a utopia I would want anyone who cares about these things to get them in their local surroundings, but it would be too imposing to put them on everyone and everything. By contrast, the logic of harm prevention works the other way, because it’s a concern that every moral patient benefits from.
I think you’re reading more into what I said than is there. I don’t want people torturing sentient minds, I would endorse forcibly preventing everyone from doing that anywhere in the universe, and I also didn’t strong-downvote (in fact, didn’t downvote at all) your post.
My point is just that people make what is, in my view, a mistake when they say “let’s optimize for the values of everyone in a coalition, subject to obvious safeguards like no torture.” Because in a fair coalition, those safeguards are something you should have to bargain for.
I think no-torture is a rule a supermajority’d agree with, so it should be very cheap to bargain for. But if people disagreed you’d have to bargain harder.
And if enough people just want torture, the solution is not to pretend like you’re giving them a fair deal: “we’ll include you in a democratic process that determines the values the AI optimizes for! (but no torture, sorry!)”.
It’s telling them “No, I think your values are garbage, and making the world nice to you costs me so much that I’d rather spend my efforts trying to lock you out of the coalition entirely.”
That makes sense. I was never assuming a context where having to bargain for anything is the default, so the coalition doesn’t have to be fair to everyone. It isn’t really a “coalition” at all: most people would just be given stuff for free, because the group that builds aligned AI has democracy as one of its values.
Sure, it’s not 100% for free because there are certain expectations, and the public can put pressure on companies that appear to be planning things that are unilateral and selfish. Legally, I would hope companies are at least bound to the values in their country’s constitution. More importantly, morally, it would be quite bad to not share what you have and try to make things nice for everyone (worldwide), with constraints/safeguards. Still, as I’ve said, I think it would be really strange and irresponsible if someone thought that a group or coalition that brought about a Singularity that actually goes well somehow owes a share of influence to every person on the planet without any vetting or safeguards.
Why couldn’t a democratic system of ownership and control implement those safeguards bottom up?
You’re right that you could vote on whether to have any safeguards (and on their contents, if yes) instead of installing them top-down. But then who is it that frames the matter that way (safeguards getting voted on first, before everyone gets some resources/influence allocated, versus just starting with the second part without the safeguards)? Who sets up the voting mechanism (e.g., if there’s disagreement, is it just majority wins, or should there be some Archipelago-style split in case a significant minority wants things some other way)?
My point is that terms like “democratic” (or “libertarian,” for the Archipelago vision) are under-defined. To specify processes that capture the spirit behind these terms as ideals, we have to make some judgment calls. You might think that having democratic ideals also means everyone voting democratically on all these judgment calls, but I don’t think that this changes the dynamic, because there’s an infinite regress: you need certain judgment calls for that, too.
And at this point I feel like asking: if we have to lock in some decisions anyway to get any democratic process off the ground, we may as well pick a setup top-down where the most terrible outcomes (involuntary torture) are less likely to happen for “accidental” reasons that weren’t even necessarily “the will of the people.” Sure, maybe you could have a phase where you gather inputs and objections to the initial setup, and vote on changes if there’s a concrete counterproposal that gains enough traction via legitimate channels. Still, I’d very much want to start by setting a well-thought-out default top-down rather than leaving everything up to chance.
It’s not “more democratic” to leave the process underspecified. If you just put 8 billion people in a chat forum with hardly any rules and hook it up to the AGI sovereign that controls the future, it’ll get really messy, and the result, whatever it is, may not reflect “the will of the people” any better than if we had started out with something already more guided and structured.
UN control. A Baruch plan for AGI.
Read the sections related to defense from economic concentration of power. For example, we had Amodei claim the following:
Fifth, while all the above private actions can be helpful, ultimately a macroeconomic problem this large will require government intervention (italics mine—S.K.). The natural policy response to an enormous economic pie coupled with high inequality (due to a lack of jobs, or poorly paid jobs, for many) is progressive taxation. The tax could be general or could be targeted against AI companies in particular. Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on basic moral grounds, but I can also make a pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of it: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.
I’ve read the text. What the text is talking about (taxation, philanthropy, Carnegie foundation whatever) is a million miles away from what I’m talking about (“building this thing publicly owned and under democratic control”).
Could you suggest a strategy which Amodei could use so that the ASI is created and publicly owned under democratic control, as you hope? Amodei would be unlikely to sell the idea to ~any investors except for governments. Additionally, Anthropic wrote into Claude’s Constitution clauses like these:
We’re especially concerned about the use of AI to help individual humans or small groups gain unprecedented and illegitimate forms of concentrated power. In order to avoid this, Claude should generally try to preserve functioning societal structures, democratic institutions (italics mine—S.K.), and human oversight mechanisms, and to avoid taking actions that would concentrate power inappropriately or undermine checks and balances.
or these:
The current hard constraints on Claude’s behavior are as follows. Claude should never: <...>
Engage or assist any individual or group with an attempt to seize unprecedented and illegitimate degrees of absolute societal, military, or economic control (italics mine—S.K.);