Are there any high p(doom) orgs who are focused on the following:
Pick an alignment “plan” from a frontier lab (or org like AISI)
Demonstrate how the plan breaks or doesn’t work
Present this clearly and legibly for policymakers
Seems like this is a good way for people to deploy technical talent in a way which is tractable. There are a lot of people who are smart but not alignment-solving levels of smart who are currently not really able to help.
AI companies don’t actually have specific plans, they mostly just hope that they’ll be able to iterate. (See Sam Bowman’s bumper post for an articulation of a plan like this.) I think this is a reasonable approach in principle: this is how progress happens in a lot of fields. For example, the AI companies don’t have plans for all kinds of problems that will arise with their capabilities research in the next few years, they just hope to figure it out as they get there. But the lack of specific proposals makes it harder to demonstrate particular flaws.
A lot of my concerns about alignment proposals are that when AIs are sufficiently smart, the plan won’t work anymore. But in many cases, the plan does actually work fine right now at ensuring particular alignment properties. (Most obviously, right now, AIs are so bad at reasoning about training processes that scheming isn’t that much of an active concern.) So you can’t directly demonstrate that current plans will fail later without making analogies to future systems; and observers (reasonably enough) are less persuaded by evidence that requires you to assess the analogousness of a setup.
(Psst: a lot of AISI’s work is this, and they have sufficient independence and expertise credentials to be quite credible; this doesn’t go for all of their work, some of which is indeed ‘try for a better plan’)
(There are projects that stress-test the assumptions behind AGI labs’ plans, of course, but I don’t think anyone is (1) deliberately picking at the plans AGI labs claim, in a basically adversarial manner, (2) optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers. Explicitly setting those priorities might be useful.)
optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers.
People who do research like this are definitely optimizing for legibility to policymakers (always at least a bit, and usually a lot).
One problem is that if AI researchers think your work is misleading/scientifically suspect, they get annoyed at you and tell people that your research sucks and you’re a dishonest ideologue. This is IMO often a healthy immune response, though it’s a bummer when you think that the researchers are wrong and your work is fine. So I think it’s pretty costly to give up on convincingness to AI researchers.
“Not optimized to be convincing to AI researchers” ≠ “looks like fraud”. “Optimized to be convincing to policymakers” might involve research that clearly demonstrates some property of AIs/ML models which is basic knowledge for capability researchers (and for which they already came up with rationalizations why it’s totally fine) but isn’t well-known outside specialist circles.
E. g., the basic example is the fact that ML models are black boxes trained by an autonomous process which we don’t understand, instead of manually coded symbolic programs. This isn’t as well-known outside ML communities as one might think, and non-specialists are frequently shocked when they properly understand that fact.
What kind of “research” would demonstrate that ML models are not the same as manually coded programs? Why not just link to the Wikipedia article for “machine learning”?
Are there any high p(doom) orgs who are focused on the following:
Pick an alignment “plan” from a frontier lab (or org like AISI)
Demonstrate how the plan breaks or doesn’t work
Present this clearly and legibly for policymakers
Seems like this is a good way for people to deploy technical talent in a way which is tractable. There are a lot of people who are smart but not alignment-solving levels of smart who are currently not really able to help.
I’d say that work like our Alignment Faking in Large Language Models paper (and the model organisms/alignment stress-testing field more generally) is pretty similar to this (including the “present this clearly to policymakers” part).
A few issues:
AI companies don’t actually have specific plans, they mostly just hope that they’ll be able to iterate. (See Sam Bowman’s bumper post for an articulation of a plan like this.) I think this is a reasonable approach in principle: this is how progress happens in a lot of fields. For example, the AI companies don’t have plans for all kinds of problems that will arise with their capabilities research in the next few years, they just hope to figure it out as they get there. But the lack of specific proposals makes it harder to demonstrate particular flaws.
A lot of my concerns about alignment proposals are that when AIs are sufficiently smart, the plan won’t work anymore. But in many cases, the plan does actually work fine right now at ensuring particular alignment properties. (Most obviously, right now, AIs are so bad at reasoning about training processes that scheming isn’t that much of an active concern.) So you can’t directly demonstrate that current plans will fail later without making analogies to future systems; and observers (reasonably enough) are less persuaded by evidence that requires you to assess the analogousness of a setup.
(Psst: a lot of AISI’s work is this, and they have sufficient independence and expertise credentials to be quite credible; this doesn’t go for all of their work, some of which is indeed ‘try for a better plan’)
That seems like a pretty good idea!
(There are projects that stress-test the assumptions behind AGI labs’ plans, of course, but I don’t think anyone is (1) deliberately picking at the plans AGI labs claim, in a basically adversarial manner, (2) optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers. Explicitly setting those priorities might be useful.)
People who do research like this are definitely optimizing for legibility to policymakers (always at least a bit, and usually a lot).
One problem is that if AI researchers think your work is misleading/scientifically suspect, they get annoyed at you and tell people that your research sucks and you’re a dishonest ideologue. This is IMO often a healthy immune response, though it’s a bummer when you think that the researchers are wrong and your work is fine. So I think it’s pretty costly to give up on convincingness to AI researchers.
“Not optimized to be convincing to AI researchers” ≠ “looks like fraud”. “Optimized to be convincing to policymakers” might involve research that clearly demonstrates some property of AIs/ML models which is basic knowledge for capability researchers (and for which they already came up with rationalizations why it’s totally fine) but isn’t well-known outside specialist circles.
E. g., the basic example is the fact that ML models are black boxes trained by an autonomous process which we don’t understand, instead of manually coded symbolic programs. This isn’t as well-known outside ML communities as one might think, and non-specialists are frequently shocked when they properly understand that fact.
What kind of “research” would demonstrate that ML models are not the same as manually coded programs? Why not just link to the Wikipedia article for “machine learning”?
AI Plans does this
yes. AI Plans