It seems to me you are using the word “alignment” as a boolean, whereas I’m using it to refer to either a scalar (“how aligned is the system?”) or a process (“the system has been aligned, i.e., has undergone a process of increasing its alignment”). I prefer the scalar/process usage, because it seems to me that people who do alignment research (including yourself) are going to produce ways of increasing the “alignment scalar”, rather than ways of guaranteeing the “perfect alignment” boolean. (I sometimes use “misaligned” as a boolean due to it being easier for people to agree on what is “misaligned” than what is “aligned”.) In general, I think it’s very unsafe to pretend numbers that are very close to 1 are exactly 1, because e.g., 1^(10^6) = 1 whereas 0.9999^(10^6) very much isn’t 1, and the way you use the word “aligned” seems unsafe to me in this way.
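(For concreteness, here is that arithmetic worked out; the exponent 10^6 is just the illustrative number from the sentence above, standing in for a long run of compounding steps:

$$1^{10^6} = 1, \qquad 0.9999^{10^6} = e^{10^6 \ln(0.9999)} \approx e^{-100} \approx 3.7 \times 10^{-44}.$$

So "very close to 1" and "exactly 1" behave completely differently once the exponent is large.)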
(Perhaps you believe in some kind of basin of convergence around perfect alignment that causes sufficiently-well-aligned systems to converge on perfect alignment, in which case it might make sense to use “aligned” to mean “inside the convergence basin of perfect alignment”. However, I’m both dubious of the width of that basin, and dubious that its definition is adequately social-context-independent [e.g., independent of the bargaining stances of other stakeholders], so I’m back to not really believing in a useful boolean notion of alignment, only scalar alignment.)
I’m fine with talking about alignment as a scalar (I think we both agree that it’s even messier than a single scalar). But I’m saying:
The individual systems in your story could do something different that would be much better for their principals, and they are aware of that fact, but they don’t care. That is to say, they are very misaligned.
The story is risky precisely to the extent that these systems are misaligned.
In any case, I agree profit maximization is not a perfectly aligned goal for a company; however, it is a myopically pursued goal in a tragedy of the commons resulting from a failure to agree (as you point out) on something better to do (e.g., reducing competitive pressures to maximize profits).
The systems in your story aren’t maximizing profit in the form of real resources delivered to shareholders (the normal conception of “profit”). Whatever kind of “profit maximization” they are doing does not seem even approximately or myopically aligned with shareholders.
I don’t think the most obvious “something better to do” is to reduce competitive pressures, it’s just to actually benefit shareholders. And indeed the main mystery about your story is why the shareholders get so screwed by the systems that they are delegating to, and how to reconcile that with your view that single-single alignment is going to be a solved problem because of the incentives to solve it.
Yes, it seems this is a good thing to home in on. As I envision the scenario, the automated CEO is highly aligned to the point of keeping the Board locally happy with its decisions conditional on the competitive environment, but not perfectly aligned [...] I’m not sure whether to say “aligned” or “misaligned” in your boolean-alignment-parlance.
I think this system is misaligned. Keeping me locally happy with your decisions while drifting further and further from what I really want is a paradigm example of being misaligned, and e.g. it’s what would happen if you made zero progress on alignment and deployed existing ML systems in the context you are describing. If I take your stuff and don’t give it back when you ask, and the only way to avoid this is to check in every day in a way that prevents me from acting quickly in the world, then I’m misaligned. If I do good things only when you can check while understanding that my actions lead to your death, then I’m misaligned. These aren’t complicated or borderline cases, they are central examples of what we are trying to avert with alignment research.
(I definitely agree that an aligned system isn’t automatically successful at bargaining.)
These aren’t complicated or borderline cases, they are central examples of what we are trying to avert with alignment research.
I’m wondering if the disagreement over the centrality of this example is downstream from a disagreement about how easy the “alignment check-ins” that Critch talks about are. If they are the sort of thing that can be done successfully in a couple of days by a single team of humans, then I share Critch’s intuition that the system in question starts off only slightly misaligned. By contrast, if they require a significant proportion of the human time and effort that was put into originally training the system, then I am much more sympathetic to the idea that what’s being described is a central example of misalignment.
My (unsubstantiated) guess is that Paul pictures alignment check-ins becoming much harder (i.e. closer to the latter case mentioned above) as capabilities increase? Whereas maybe Critch thinks that they remain fairly easy in terms of number of humans and time taken, but that over time even this becomes economically uncompetitive.
Perhaps this is a crux in this debate: If you think the ‘agent-agnostic perspective’ is useful, you also think a relatively steady state of ‘AI Safety via Constant Vigilance’ is possible. This would be a situation where systems that aren’t significantly inner misaligned (otherwise they’d have no incentive to care about governing systems, feedback or other incentives) but are somewhat outer misaligned (so they are honestly and accurately aiming to maximise some complicated measure of profitability or approval, not directly aiming to do what we want them to do), can be kept in check by reducing competitive pressures, building the right institutions and monitoring systems, and ensuring we have a high degree of oversight.
Paul thinks that it’s basically always easier to just go in and fix the original cause of the misalignment, while Andrew thinks that there are at least some circumstances where it’s more realistic to build better oversight and institutions to reduce said competitive pressures, and the agent-agnostic perspective is useful for the latter of these projects, which is why he endorses it.
I think that this scenario of Safety via Constant Vigilance is worth investigating—I take Paul’s later failure story to be a counterexample to such a thing being possible, as it’s a case where this solution is attempted and works for a little while before catastrophically failing. This also means that the practical difference between the RAAP 1a-d failure stories and Paul’s story just comes down to whether there is an ‘out’ in the form of safety by vigilance.