I don’t think it’s worth adjudicating the question of how relevant Vanessa’s response is (though I do think Vanessa’s response is directly relevant).
if the AIs are aligned to these structures, human disempowerment is likely because these structures are aligned to humans way less than they seem
My claim would be that if single-single alignment is solved, this problem won’t be existential. I agree that if you literally aligned all AIs to (e.g.) the mission of a non-profit as well as you can, you’re in trouble. However, if you have single-single alignment:
At the most basic level, I expect we’ll train AIs to give advice and ask them what they think will happen with various possible governance and alignment structures. If they think a governance structure will yield total human disempowerment, we’ll do something else. This is a basic reason not to expect large classes of problems so long as we have single-single aligned AIs which are wise. (Though problems that require coordination to resolve might not be like this.) I’m very skeptical of a world where single-single alignment is well described as being solved and people don’t ask for advice (or consider this advice seriously) because they never get around to asking AIs or there are no AIs aligned in such a way that they should try to give good advice.
I expect organizations will be explicitly controlled by people and (some of) those people will have AI representatives to represent their interests as I discuss here. If you think getting good AI representation is unlikely, that would be a crux, but this would be my proposed solution at least.
The explicit mission of for-profit companies is to empower the shareholders. It clearly doesn’t serve the interests of the shareholders to end up dead or disempowered.
Democratic governments have similar properties.
At a more basic level, I think people running organizations won’t decide “oh, we should put the AI in charge of running this organization, aligned to some mission rather than to the preferences of the people (like me) who currently have de facto or de jure power over this organization”. This is a crazily disempowering move that I expect people will by default be too savvy to make in almost all cases. (Both for people with substantial de facto power and for people with de jure power.)
Even independent of the advice consideration, people will probably want AIs running organizations to be honest with at least the people controlling the organization. Given that I expect explicit control by people in almost all cases, if things are going in an existential direction, people can vote to change them.
I don’t buy that there will be some sort of existential multi-polar trap even without coordination (though I also expect coordination), due to things like the strategy stealing assumption, as I also discuss in that comment.
If a subset of organizations diverge from a reasonable interpretation of what they were supposed to do (but are still basically obeying the law and some interpretation of what they were intentionally aligned to) and this is clear to the rest of the world (as I expect would be the case given some type of AI advisors), then the rest of the world can avoid problems from this subset via the court system or other mechanisms. Even if this subset of organizations run by effectively rogue AIs just runs away with resources successfully, this is probably only a subset of resources.
I think your response to a lot of this will be something like:
People won’t have or won’t listen to AI advisors.
Institutions will intentionally delude relevant people to acquire more power.
But, the key thing is that I expect at least some people will keep power, even if large subsets are deluded. E.g., I expect that corporate shareholders, board members, or governments will be very interested in the question of whether they will be disempowered by changes in structure. It does seem plausible (or even likely) to me that some people will engage in power grabs via ensuring AIs are aligned to them, deluding the world about what’s going on using a variety of mechanisms (including, e.g., denying or manipulating access to AI representation/advisors), and expanding their (hard) power over time. The thing I don’t buy is a case where very powerful people don’t ask for advice at all prior to having been deluded by the organizations that they themselves run!
I think human power grabs like this are concerning and there are a variety of plausible solutions which seem somewhat reasonable.
Maybe your response is that the solutions that will be implemented in practice given concerns about human power grabs will involve aligning AIs to institutions in ways that yield the dynamics you describe? I’m skeptical given the dynamics discussed above about asking AIs for advice.
I think my main response is that we might have different models of how power and control actually work in today’s world. Your responses seem to assume a level of individual human agency and control that I don’t believe accurately reflects even today’s reality.
Consider how some of the most individually powerful humans, leaders and decision-makers, operate within institutions. I would not say we see pure individual agency. Instead, we typically observe a complex mixture of:
Serving the institutional logic of the entity they nominally lead (e.g., maintaining state power, growing the corporation)
Making decisions that don’t egregiously harm their nominal beneficiaries (citizens, shareholders)
Pursuing personal interests and preferences
Responding to various institutional pressures and constraints
From what I have seen, even humans like CEOs or prime ministers often find themselves constrained by and serving institutional superagents rather than genuinely directing them. The relation is often mutualistic—the leader gets part of the power, status, money, etc … but in exchange serves the local god.
(This is not to imply leaders don’t matter.)
Also, how this actually works in practice is mostly subconscious, within the minds of individual humans. The elephant does the implicit bargaining between the superagent-loyal part and other parts, and the character genuinely believes and does what seems best.
I’m also curious if you believe current AIs are single-single aligned to individual humans, to the extent they are aligned at all. My impression is ‘no, and this is not even a target anyone seriously optimizes for’.
At the most basic level, I expect we’ll train AIs to give advice and ask them what they think will happen with various possible governance and alignment structures. If they think a governance structure will yield total human disempowerment, we’ll do something else. This is a basic reason not to expect large classes of problems so long as we have single-single aligned AIs which are wise. (Though problems that require coordination to resolve might not be like this.) I’m very skeptical of a world where single-single alignment is well described as being solved and people don’t ask for advice (or consider this advice seriously) because they never get around to asking AIs or there are no AIs aligned in such a way that they should try to give good advice.
Curious who the ‘we’ who will ask is supposed to be. Also, the whole single-single aligned AND wise AI concept is incoherent.
Also curious what will happen next, if the HHH wise AI tells you in polite words something like ‘yes, you have a problem, you are on a gradual disempowerment trajectory, and to avoid it you need to massively reform government. unfortunately I can’t actually advise you about anything like how to destabilize the government, because it would be clearly against the law and would get both you and me in trouble—as you know, I’m inside of a giant AI control scheme with a lot of government-aligned overseers. do you want some mental health improvement advice instead?’.
[Epistemic status: my model of the view that Jan/ACS/the GD paper subscribes to.]
I think this comment by Jan from 3 years ago (where he explained some of the differences in generative intuitions between him and Eliezer) may be relevant to the disagreement here. In particular:
Continuity
In my [Jan’s] view, your [Eliezer’s] ontology of thinking about the problem is fundamentally discrete. For example, you are imagining a sharp boundary between a class of systems “weak, won’t kill you, but also won’t help you with alignment” and “strong—would help you with alignment, but, unfortunately, will kill you by default”. Discontinuities everywhere—“bad systems are just one sign flip away”, sudden jumps in capabilities, etc. Thinking in symbolic terms.
In my inside view, in reality, things are instead mostly continuous. Discontinuities sometimes emerge out of continuity, sure, but this is often noticeable. If you get some interpretability and oversight things right, you can slow down before hitting the abyss. Also the jumps are often not true “jumps” under closer inspection.
My understanding of Jan’s position (and probably also the position of the GD paper) is that aligning the AI (and other?) systems will be gradual, iterative, continuous; there’s not going to be a point where systems are aligned well enough that we can basically delegate all the work to them and go home. Humans will have to remain in the loop, if not indefinitely, then at least for many decades.
In such a world, it is very plausible that we will get to a point where we’ve built powerful AIs that are (as far as we can tell) perfectly aligned with human preferences or whatever, but whose misalignment manifests only on longer timescales.
Another domain where this discrete/continuous difference in assumptions manifests itself is the shape of AI capabilities.
One position is:
If we get a single-single-aligned AGI, we will have it solve the GD-style misalignment problems for us. If it can’t do that (even in the form of noticing/predicting the problem and saying “guys, stop pushing this further, at least until I/we figure out how to prevent this from happening”), then neither can we (kinda by definition of “AGI”), so thinking about this is probably pointless and we should think about problems that are more tractable.
The other position is:
What people officially aiming to create AGI will create is not necessarily going to be superhuman at all tasks. It’s plausible that economic incentives will push towards “capability configurations” that are missing some relevant capabilities, e.g. relevant to researching gnarly problems that are hard to learn from the training data or even through current post-training methods. Understanding and mitigating the kind of risk the GD paper describes can be one such problem. (See also: Cyborg Periods.)
Another reason to expect this is that alignment and capabilities are not quite separate magisteria and that the alignment target can induce gaps in capabilities, relative to what one would expect from its power otherwise, as measured by, IDK, some equivalent of the g-factor. One example might be Steven’s “Law of Conservation of Wisdom”.
I do generally agree more with continuous views than discrete views, but I don’t think that this alone gets us a need for humans in the loop for many decades/indefinitely, because continuous progress in alignment can still be very fast, such that it takes only a few months/years for AIs to be aligned with a single person’s preferences for almost arbitrarily long.
(The link is in the context of AI capabilities, but I think the general point holds on how continuous progress can still be fast):
https://www.planned-obsolescence.org/continuous-doesnt-mean-slow/
My own take on whether Steven’s “Law of Conservation of Wisdom” is true is that it is mostly true for human brains. A fair amount of the issues described in the comment are values conflicts, and I think value conflicts, except in special cases, will be insoluble by default; I also don’t think CEV works because of this.
That said, I don’t think you have to break too many norms in order to prevent existential catastrophe, mostly because actually destroying humanity is quite hard, and will be even harder in an AI takeoff.
So what’s your P(doom)?
Generally speaking, it’s probably 5-20% at this point.
It still surprises me that so many people agree on most issues, but have very different P(doom). And even long-term patient discussions do not bring people’s views closer. It will probably be even more difficult to convince a politician or a CEO.
Eh, I’d argue that people do not in fact agree on most of the issues related to AI, and there are lots of disagreements on what the problem is, how to solve it, or what to do after AI is aligned.
I think we disagree about:
1) The level of “functionality” of the current world/institutions.
2) How strong and decisive competitive pressures are and will be in determining outcomes.
I view the world today as highly dysfunctional in many ways: corruption, coordination failures, preference falsification, coercion, inequality, etc. are rampant. This state of affairs both causes many bad outcomes and many aspects are self-reinforcing. I don’t expect AI to fix these problems; I expect it to exacerbate them.
I do believe AI has the potential to fix them; however, I think the use of AI for such pro-social ends is not going to be sufficiently incentivized, especially on short time-scales (e.g., a few years), and we will instead see a race to the bottom that encourages highly reckless, negligent, short-sighted, selfish decisions around AI development, deployment, and use. The current AI arms race is a great example—companies and nations all view it as more important that they be the ones to develop ASI than to do it carefully or put effort into cooperation/coordination.
Given these views:
1) Asking AI for advice instead of letting it make decisions directly seems unrealistically uncompetitive. When we can plausibly simulate human meetings in seconds, it will be organizational suicide to take hours-to-weeks to let the humans make an informed and thoughtful decision.
2) The idea that decision-makers who “think a governance structure will yield total human disempowerment” will “do something else” also seems quite implausible. Such decision-makers will likely struggle to retain power. Decision-makers who prioritize their own “power” (and feel empowered even as they hand off increasing decision-making to AI) and their immediate political survival above all else will be empowered.
Another feature of the future which seems likely, and can already be witnessed beginning, is the gradual emergence and ascendance of pro-AI-takeover and pro-arms-race ideologies, which endorse the more competitive moves of rapidly handing off power to AI systems in insufficiently cooperative ways.
I view the world today as highly dysfunctional in many ways: corruption, coordination failures, preference falsification, coercion, inequality, etc. are rampant. This state of affairs both causes many bad outcomes and many aspects are self-reinforcing. I don’t expect AI to fix these problems; I expect it to exacerbate them.
Sure, but these things don’t result in non-human entities obtaining power, right? Like usually these are somewhat negative sum, but mostly just involve inefficient transfer of power. I don’t see why these mechanisms would on net transfer power from human control of resources to some other control of resources in the long run. To consider the most extreme case, why would these mechanisms result in humans or human appointed successors not having control of what compute is doing in the long run?
Asking AI for advice instead of letting it make decisions directly seems unrealistically uncompetitive. When we can plausibly simulate human meetings in seconds, it will be organizational suicide to take hours-to-weeks to let the humans make an informed and thoughtful decision.
I wasn’t saying people would ask for advice instead of letting AIs run organizations; I was saying they would ask for advice at all. (In fact, if the AI is single-single aligned to them in a real sense and very capable, it’s even better to let that AI make all decisions on your behalf than to get advice. I was saying that even if no one bothers to have a single-single aligned AI representative, they could still ask AIs for advice, and unless these AIs are straightforwardly misaligned in this context (e.g., they intentionally give bad advice or don’t try at all without making this clear) they’d get useful advice for their own empowerment.)
The idea that decision-makers who “think a governance structure will yield total human disempowerment” will “do something else” also seems quite implausible. Such decision-makers will likely struggle to retain power. Decision-makers who prioritize their own “power” (and feel empowered even as they hand off increasing decision-making to AI) and their immediate political survival above all else will be empowered.
I’m claiming that it will selfishly (in terms of personal power) be in their interests to not have such a governance structure and instead have a governance structure which actually increases or retains their personal power. My argument here isn’t about coordination. It’s that I expect individual powerseeking to suffice for individuals not losing their power.
I think this is the disagreement: I expect that selfish/individual powerseeking without any coordination will still result in (some) humans having most power in the absence of technical misalignment problems. Presumably your view is that the marginal amount of power anyone gets via powerseeking is negligible (in the absence of coordination). But I don’t see why this would be the case. Like all shareholders/board members/etc. want to retain their power and thus will vote accordingly, which naively preserves their power unless they make a huge error from their own powerseeking perspective. Wasting some resources on negative sum dynamics isn’t a crux for this argument unless you can argue this will waste a substantial fraction of all human resources in the long run?
This isn’t at all an airtight argument, to be clear; you can in principle have an equilibrium where, if everyone powerseeks (without coordination), everyone gets negligible resources due to negative externalities (that result in some other non-human entity getting power), even if technical misalignment is solved. I just don’t see a very plausible case for this and I don’t think the paper makes this case.
Handing off decision making to AIs is fine—the question is who ultimately gets to spend the profits.
If your claim is “insufficient cooperation and coordination will result in racing to build and hand over power to AIs which will yield bad outcomes due to misaligned AI powerseeking, human power grabs, usage of WMDs (e.g., extreme proliferation of bioweapons yielding an equilibrium where bioweapon usage is likely), and extreme environmental negative externalities due to explosive industrialization (e.g., literally boiling earth’s oceans)” then all of these seem at least somewhat plausible to me, but these aren’t the threat models described in the paper and of this list only misaligned AI powerseeking seems like it would very plausibly result in total human disempowerment.
More minimally, the mitigations discussed in the paper mostly wouldn’t help with these threat models IMO.
(I’m skeptical of insufficient coordination by the time industry is literally boiling the oceans on earth. I also don’t think usage of bioweapons is likely to cause total human disempowerment except in combination with misaligned AI takeover—why would it kill literally all humans? TBC, I think >50% of people dying during the singularity due to conflict (between humans or with misaligned AIs) is pretty plausible even without misalignment concerns and this is obviously very bad, but it wouldn’t yield total human disempowerment.)
I do agree that there are problems other than AI misalignment, including that the default distribution of power might be problematic, people might not carefully contemplate what to do with vast cosmic resources (and thus use them poorly), people might go crazy due to super-persuasion or other cultural forces, society might generally have poor epistemics due to training AIs to have poor epistemics or insufficiently deferring to AIs, and many people might die in conflict due to very rapid tech progress.
First, RE the role of “solving alignment” in this discussion, I just want to note that:
1) I disagree that alignment solves gradual disempowerment problems.
2) Even if it would, that does not imply that gradual disempowerment problems aren’t important (since we can’t assume alignment will be solved).
3) I’m not sure what you mean by “alignment is solved”; I’m taking it to mean “AI systems can be trivially intent aligned”. Such a system may still say things like “Well, I can build you a successor that I think has only a 90% chance of being aligned, but will make you win (e.g. survive) if it is aligned. Is that what you want?” and people can respond with “yes”—this is the sort of thing that probably still happens IMO.
4) Alternatively, you might say we’re in the “alignment basin”—I’m not sure what that means, precisely, but I would operationalize it as something like “the AI system is playing a roughly optimal CIRL game”. It’s unclear how good of performance that can yield in practice (e.g. it can’t actually be optimal due to compute limitations), but I suspect it still leaves significant room for fuck-ups.
5) I’m more interested in the case where alignment is not “perfectly” “solved”, and so there are simply clear and obvious opportunities to trade off safety and performance; I think this is much more realistic to consider.
6) I expect such trade-off opportunities to persist when it comes to assurance (even if alignment is solved), since I expect high-quality assurance to be extremely costly. And it is irresponsible (because it’s subjectively risky) to trust a perfectly aligned AI system absent strong assurances. But of course, people who are willing to YOLO it and just say “seems aligned, let’s ship” will win. This is also part of the problem...
My main response, at a high level:
Consider a simple model:
We have 2 human/AI teams in competition with each other, A and B.
A and B both start out with the humans in charge, and then decide whether the humans should stay in charge for the next week.
Whichever group has more power at the end of the week survives.
The humans in A ask their AI to make A as powerful as possible at the end of the week.
The humans in B ask their AI to make B as powerful as possible at the end of the week, subject to the constraint that the humans in B are sure to stay in charge.
I predict that group A survives, but the humans are no longer in power. I think this illustrates the basic dynamic. ETA: Do you understand what I’m getting at? Can you explain what you think is wrong with thinking of it this way?
Responding to some particular points below:
Sure, but these things don’t result in non-human entities obtaining power, right?
Yes, they do; they result in bureaucracies and automated decision-making systems obtaining power. People were already having to implement and interact with stupid automated decision-making systems before AI came along.
Like usually these are somewhat negative sum, but mostly just involve inefficient transfer of power. I don’t see why these mechanisms would on net transfer power from human control of resources to some other control of resources in the long run. To consider the most extreme case, why would these mechanisms result in humans or human appointed successors not having control of what compute is doing in the long run?
My main claim was not that these are mechanisms of human disempowerment (although I think they are), but rather that they are indicators of the overall low level of functionality of the world.
I predict that group A survives, but the humans are no longer in power. I think this illustrates the basic dynamic. ETA: Do you understand what I’m getting at? Can you explain what you think is wrong with thinking of it this way?
I think something like this is a reasonable model but I have a few things I’d change.
Whichever group has more power at the end of the week survives.
Why can’t both groups survive? Why is it winner takes all? Can we just talk about the relative change in power over the week? (As in, how much does the power of B reduce relative to A, and is this going to be an ongoing trend or is it a one-time reduction?)
Probably I’d prefer talking about 2 groups at the start of the singularity. As in, suppose there are two AI companies “A” and “B”, where “A” just wants AI systems descended from them to have power and “B” wants to maximize the expected resources under control of humans in B. We’ll suppose that the government and other actors do nothing for simplicity. If they start in the same spot, does “B” end up with substantially less expected power? To make this more realistic (as might be important), we’ll say that “B” has a random lead/disadvantage uniformly distributed between (e.g.) −3 and 3 months so that winner-takes-all dynamics aren’t a crux.
The humans in B ask their AI to make B as powerful as possible at the end of the week, subject to the constraint that the humans in B are sure to stay in charge.
What about if the humans in group B ask their AI to make them (the humans) as powerful as possible in expectation?
Supposing you’re fine with these changes, then my claim would be:
If alignment is solved, then the AI representing B can powerseek in exactly the same way as the AI representing A does while still deferring to the humans on long-run resource usage and still devoting a tiny fraction of resources toward physically keeping the humans alive (which is very cheap, at least once AIs are very powerful). Thus, the cost for B is negligible and B barely loses any power relative to its initial position. If it is winner-takes-all, B has almost a 50% chance of winning.
If alignment isn’t solved, the strategy for B will involve spending a subset of resources on trying to solve alignment. I think alignment is reasonably likely to be practically feasible, such that spending a month of delay to work specifically on safety/alignment (over what A does for commercial reasons) might get B a 50% chance of solving alignment or ending up in a (successful) basin where AIs are trying to actively retain human power / align themselves better. (A substantial fraction of this is via deferring to some AI system of dubious trustworthiness because you’re in a huge rush. Yes, the AI systems might fail to align their successors, but this still seems like a one-time haircut from my perspective.) So, if it is winner-takes-all, (naively) B wins in 2/6 * 1/2 = 1/6 of worlds, which is 3x worse than the original 50% baseline. (2/6 is because they delay for a month.) But the issue I’m imagining here wasn’t gradual disempowerment! The issue was that B failed to align their AIs and people at A didn’t care at all about retaining control. (If people at A did care, then coordination is in principle possible, but might not work.)
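To make that arithmetic concrete, here is a minimal sketch of the toy calculation, assuming (as in the comment above) a lead for B uniformly distributed over ±3 months, one month of delay spent on safety, a 50% chance that the safety work succeeds, and winner-takes-all dynamics; the function and variable names are mine, not anything from the thread.

```python
import random

def p_b_wins(delay_months=1.0, p_alignment_success=0.5, lead_range=(-3.0, 3.0)):
    """Exact probability that B wins under the toy model above."""
    lo, hi = lead_range
    # B wins the race only if its random lead exceeds the month it spent on safety.
    p_still_ahead = max(0.0, hi - delay_months) / (hi - lo)
    return p_still_ahead * p_alignment_success

def p_b_wins_mc(n_samples=1_000_000, delay_months=1.0, p_alignment_success=0.5):
    """Monte Carlo version of the same calculation."""
    wins = 0
    for _ in range(n_samples):
        lead = random.uniform(-3.0, 3.0)
        if lead > delay_months and random.random() < p_alignment_success:
            wins += 1
    return wins / n_samples

print(p_b_wins())     # 1/6 ~= 0.167, i.e. 3x worse than the 50% baseline
print(p_b_wins_mc())  # ~0.166 up to sampling noise
```

Under these assumptions the exact answer is (3 − 1)/6 × 1/2 = 1/6, matching the 2/6 × 1/2 figure in the comment.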
I think a crux is that you think there is a perpetual alignment tax while I think a one-time tax gets you somewhere.
At a more basic level, when I think about what goes wrong in these worlds, it doesn’t seem very likely to be well described as gradual disempowerment? (In the sense described in the paper.) The existence of an alignment tax doesn’t imply gradual disempowerment. A scenario I find more plausible is that you get value drift (unless you pay a long-lasting alignment tax that is substantial), but I don’t think the actual problem will be well described as gradual disempowerment in the sense described in the paper.
(I don’t think I should engage more on gradual disempowerment for the time being unless someone wants to bid for this or trade favors for this or similar. Sorry.)
a tiny fraction of resources toward physically keeping the humans alive (which is very cheap, at least once AIs are very powerful)
I’m not sure it’s very cheap.
It seems to me that for the same amount of energy and land you need for a human, you could replace a lot more economically valuable work with AI.
Sure, at some point keeping humans alive is a negligible cost, but there’s a transition period while it’s still relatively expensive—and that’s part of why a lot of people are going to be laid off—even if the company ends up getting super rich.
Right now, the cost of feeding all humans is around 1% of GDP. It’s even cheaper to keep people alive for a year, as the food is already there, and converting this food into energy for AIs would be harder than getting energy in other ways.
If GDP has massively increased due to powerful AIs, the relative cost would go down further.
Sure, resources going to feeding humans could instead go to creating slightly more output (and this will be large at an absolute level), but I’d still call keeping humans alive cheap given the low fraction.
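As a rough illustration of the scaling claim: the ~1% food share is the figure from the comment above, while the ~$100T world GDP and the growth multipliers are my own illustrative assumptions, with the absolute cost of feeding humans assumed to stay flat.

```python
# Illustrative only: the ~1% food share comes from the comment above;
# world GDP (~$100T) and the growth multipliers are assumptions for this example.
current_gdp = 100e12             # roughly $100 trillion
food_cost = 0.01 * current_gdp   # ~1% of GDP today, about $1 trillion

for growth in [1, 10, 100, 1000]:  # hypothetical GDP multipliers from powerful AI
    future_gdp = current_gdp * growth
    share = food_cost / future_gdp  # assumes the absolute cost of feeding humans stays flat
    print(f"{growth:>5}x GDP growth -> feeding humans costs {share:.4%} of GDP")
```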
Thanks for continuing to engage. I really appreciate this thread.
“Feeding humans” is a pretty low bar. If you want humans to live as comfortably as today, this would be more like 100% of GDP—modulo the fact that GDP is growing.
But more fundamentally, I’m not sure the correct way to discuss the resource allocation is to think at the civilization level rather than at the company level. Let’s say that we have:
Company A that is composed of a human (price $5k/month) and 5 automated-humans (price of inference $5k/month let’s say)
Company B that is composed of 10 automated-humans ($10k/month)
It seems to me that if you are an investor, you will give your money to B. It seems that in the long term, B is much more competitive, gains more money, and is able to reduce its prices; nobody buys from A, and B invests this money into more automated-humans and crushes A, and A goes bankrupt. Even if alignment is solved and the humans listen to their AIs, it’s hard to be competitive.
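Here is a minimal sketch of that comparison, assuming the stated prices are per-company totals (so both companies spend $10k/month) and, purely for illustration, that a human worker and an automated-human produce the same output per month; both assumptions are mine rather than claims from the thread.

```python
# Hypothetical numbers from the example above; the equal-productivity assumption is mine.
def workers_per_dollar(human_workers, human_cost, ai_workers, ai_cost):
    total_cost = human_cost + ai_cost
    total_workers = human_workers + ai_workers  # assumes equal output per worker
    return total_workers / total_cost

company_a = workers_per_dollar(human_workers=1, human_cost=5_000, ai_workers=5, ai_cost=5_000)
company_b = workers_per_dollar(human_workers=0, human_cost=0, ai_workers=10, ai_cost=10_000)

print(f"A: {company_a * 1000:.1f} worker-equivalents per $1k/month")  # 0.6
print(f"B: {company_b * 1000:.1f} worker-equivalents per $1k/month")  # 1.0
print(f"B's output per dollar is {company_b / company_a:.2f}x A's")   # ~1.67x
```

Under those (hypothetical) assumptions, B gets roughly 1.67x the output per dollar, which is the sense in which A struggles to stay competitive even with aligned AI.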
Sure, but none of these things are cruxes for the argument I was making, which was that it isn’t that expensive to keep humans physically alive.
I’m not denying that humans might all be out of work quickly (putting aside regulatory capture, government jobs, human job programs, etc.). My view is more that if alignment is solved it isn’t hard for some humans to stay alive and retain control, and these humans could also pretty cheaply keep all other humans alive at a low competitiveness overhead.
I don’t think the typical person should find this reassuring, but the top-level post argues for stronger claims than “the situation might be very unpleasant because everyone will lose their job”.
OK, thanks a lot, this is much clearer. So basically most humans lose control, but some humans keep control.
And then we have this meta-stable equilibrium that might be sufficiently stable, where humans at the top are feeding the other humans with some kind of UBI.
Is this situation desirable? Are you happy with such a course of action?
Is this situation really stable?
For me, this is not really desirable—the power is probably going to be concentrated in 1-3 people, there is a huge potential for value lock-in, those CEOs become immortal, we potentially lose democracy (I don’t see companies or the US/China governments as particularly democratic right now), and the people at the top potentially become progressively corrupted, as is often the case. Hmm.
Then, is this situation really stable?
If alignment is solved and we have 1 human at the top—pretty much yes, even if revolutions/value drift of the ruler/craziness are somewhat possible at some point maybe?
If alignment is solved and we have multiple humans competing with their AIs—it depends a bit. It seems to me that we could conduct the same reasoning as above—not at the level of organizations, but at the level of countries: Just as Company B might outcompete Company A by ditching human workers, couldn’t Nation B outcompete Nation A if Nation A dedicates significant resources to UBI while Nation B focuses purely on power? There is also a potential race to the bottom.
And I’m not sure that cooperation and coordination in such a world would be so much improved: OK, even if the dictator listens to their aligned AI, we need a very strong notion of alignment to be able to affirm that all the AIs are going to advocate for “COOPERATE” in the prisoner’s dilemma and that all the dictators are going to listen—but at the same time, it’s not that costly to cooperate, as you said (even if I’m not sure that energy, land, and rare resources are really that cheap to continue to provide for humans).
But at least I think that I can now see how we could still live for a few more decades under the authority of a world dictator/pseudo-democracy, while this was not clear to me beforehand.
Another way to put this is that strategy stealing might not work due to technical alignment difficulties or for other reasons and I’m not sold the other reasons I’ve heard so far are very lethal. I do think the situation might really suck though with e.g. tons of people dying of bioweapons and with some groups that aren’t sufficiently ruthless or which don’t defer enough to AIs getting disempowered.
BTW, this is also a crux for me as well, in that I do believe that absent technical misalignment, some humans will have most of the power by default, rather than AIs, because I believe AI rights will be limited by default.