I think about AI alignment; send help.
James Payor
I really should have something short to say, that turns the whole argument on its head, given how clear-cut it seems to me. I don’t have that yet, but I do have some rambly things to say.
I basically don’t think overhangs are a good way to think about things, because the bridge that connects an “overhang” to an outcome like “bad AI” seems flimsy to me. I would like to see a fuller explication some time from OpenAI (or a suitable steelman!) that can be critiqued. But here are some of my thoughts.
The usual argument that leads from “overhang” to “we all die” ends with some imaginary other actor scaling up their methods with abandon, killing us all because scaling isn’t hard and they aren’t cautious. This is then used to justify scaling up your own method with abandon, hoping that we’re not about to collectively fall off a cliff.
For one thing, the hype and work being done now are making this problem a lot worse at all future timesteps. There was (and still is) a lot that people need to figure out regarding effectively using lots of compute. (For instance: architectures that can be scaled up, training methods and hyperparameters, efficient compute kernels, putting together datacenters and interconnect, data, etc etc.) Every chipmaker these days has started working on things with a lot of memory right next to a lot of compute with a tonne of bandwidth, tailored to these large models. These are barriers-to-entry that it would have been better to leave in place, if one were concerned with rapid capability gains. And just publishing fewer things and giving out fewer hints would have helped.
Another thing: I would take the whole argument as being made more in good faith if I saw attempts being made to scale up anything other than capabilities at high speed, or signs that made it seem at all likely that “alignment” might be on track. Examples:
A single alignment result that was supported by a lot of OpenAI staff. (Compare and contrast the support that the alignment team’s projects get to what a main training run gets.)
Any focus on trying to claw cognition back out of the giant inscrutable floating-point numbers, into a domain easier to understand, rather than pouring more power into the systems that get much harder to inspect as you scale them. (Failure to do this suggests OpenAI and others are mostly just doing what they know how to do, rather than grappling with navigating us toward better AI foundations.)
Any success in understanding how shallow vs deep the thinking of the LLMs is, in the sense of “how long a chain of thoughts/inferences can it make as it composes dialogue”, and how this changes with scale. (Since the whole “LLMs are safer” thing relies on their thinking being coupled to the text they output; otherwise you’re back in giant inscrutable RL agent territory)
The delta between “intelligence embedded somewhere in the system” and “intelligence we can make use of” looking smaller than it does. (Since if our AI gets to use more of its intelligence than we can, and this gets worse as we scale, this looks pretty bad for the “use our AI to tame the AI before it’s too late” plan.)
Also I can’t make this point precisely, but I think there’s something like: capabilities progress just leaves more digital fissile material lying around the place, especially when published and hyped. And if you don’t want “fast takeoff”, you want less fissile material lying around, lest it get assembled into something dangerous.
Finally, to more directly talk about LLMs, my crux for whether they’re “safer” than some hypothetical alternative is about how much of the LLM “thinking” is closely bound to the text being read/written. My current read is that they’re doing something more like free-form thinking inside, which tries to concentrate mass on the right prediction. As we scale that up, I worry that any “strange competence” we see emerging is more due to the LLM having something like a mind inside, and less due to it having accrued more patterns.
As usual, the part that seems bonkers crazy is where they claim the best thing they can do is keep making every scrap of capabilities progress they can. Keep making AI as smart as possible, as fast as possible.
“This margin is too small to contain our elegant but unintuitive reasoning for why”. Grump. Let’s please have a real discussion about this some time.
(Edit: others have made this point already, but anyhow)
My main objection to this angle: self-improvements do not necessarily look like “design a successor AI to be in charge”. They can look more like “acquire better world models”, “spin up more copies”, “build better processors”, “train lots of narrow AI to act as fingers”, etc.
I don’t expect an AI mind to have trouble finding lots of pathways like these (that tractably improve abilities without risking a misalignment catastrophe) that take it well above human level, given the chance.
Is the following an accurate summary?
The agent is built to have a “utility function” input that the humans can change over time, and a probability distribution over what the humans will ask for at different time steps, and maximizes according to a combination of the utility functions it anticipates across time steps?
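If it helps, here’s the rough formula I have in mind (my own paraphrase and notation, not necessarily yours): the agent picks a policy \(\pi\) to maximize

\[
\mathbb{E}_{U_0, U_1, \ldots}\!\left[\sum_t U_t\!\left(s_t^{\pi}\right)\right],
\]

where \(U_t\) is the utility function it expects the humans to have specified by step \(t\) (drawn from its distribution over future requests), and \(s_t^{\pi}\) is the state reached at step \(t\) under \(\pi\).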
If that’s correct, here are some places this conflicts with my intuition about how things should be done:
I feel awkward about the randomness being treated as essential. I’d rather be able to do something other than randomness in order to get my mild optimization, and something feels unstable/non-compositional about needing randomness in place for your evaluations… (Not that I have an alternative that springs to mind!)
I also feel like “worst case” is perhaps problematic, since it’s bringing maximization in, and you’re then needing to rely on your convex set being some kind of smooth in order to get good outcomes. If I have a distribution over potential utility functions, and quantilize for the worst 10% of possibilities, does that do the same sort of work that “worst case” is doing for mild optimization?
Can I check that I follow how you recover quantilization?
Are you evaluating distributions over actions, and caring about the worst-case expectation of that distribution?
If so, proposing a particular action is evaluated badly? (Since there’s a utility function in your set that spikes downward at that action.)
But proposing a range of actions to randomize amongst can be assessed to have decent worst-case expected utility, since particular downward spikes get smoothed over, and you can rely on your knowledge of “in-distribution” behaviour?
Edited to add: fwiw it seems awesome to see quantilization formalized as popping out of an adversarial robustness setup! I haven’t seen something like this before, and didn’t notice if the infrabayes tools were building to these kinds of results. I’m very much wanting to understand why this works in my own native-ontology-pieces.
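To check that I’m tracking the mechanism, here’s a toy numerical version of the “spikes get smoothed over” picture. This is my own construction, not from the post: the spike model, the 10% cutoff, and all the names in it are assumptions, just to illustrate why a spread-out distribution can beat the argmax on worst-case expected utility.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 100
base_u = rng.normal(size=n_actions)  # proxy utility for each action
spike = 10.0                         # size of a possible hidden downside

# A (toy) set of candidate "true" utility functions: the proxy utility,
# minus a large downward spike at any one action (one candidate per action).
candidate_us = base_u[None, :] - spike * np.eye(n_actions)

def worst_case_value(action_dist):
    """Worst-case expected utility of a distribution over actions,
    minimizing over the candidate utility functions."""
    return (candidate_us @ action_dist).min()

# (a) Maximizer: all probability mass on the argmax of the proxy utility.
maximizer = np.zeros(n_actions)
maximizer[np.argmax(base_u)] = 1.0

# (b) Quantilizer-ish: uniform over the top 10% of actions by proxy utility.
top_k = max(1, int(0.10 * n_actions))
quantilizer = np.zeros(n_actions)
quantilizer[np.argsort(base_u)[-top_k:]] = 1.0 / top_k

print("maximizer worst case:  ", worst_case_value(maximizer))
print("quantilizer worst case:", worst_case_value(quantilizer))
```

In this toy setup the uniform-over-top-10% distribution wins on worst-case value precisely because any single downward spike gets diluted across the distribution, which is the behaviour I was gesturing at above.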
I want to say that I agree the transformer circuits work is great, and that I like it, and am glad I had the opportunity to read it! I still expect it was pretty harmful to publish.
Nerdsniping goes both ways: you also inspire things like the Hyena work trying to improve architectures based on components of what transformers can do.
I think indiscriminate hype and trying to do work that will be broadly attention-grabbing falls on the wrong side, likely doing net harm. Because capabilities improvements seem empirically easier than understanding them, and there’s a lot more attention/people/incentives for capabilities.
I think there are more targeted things that would be better for getting more good work to happen. Like research workshops or unconferences, where you choose who to invite, or building community with more aligned folk who are looking for interesting and alignment-relevant research directions. This would come with way less potential harm imo as a recruitment strategy.
Hm, I should also ask if you’ve seen the results of current work and think it’s evidence that we get more understandable models, more so than we get more capable models?
I think the issue is that when you get more understandable base components, and someone builds an AGI out of those, you still don’t understand the AGI.
That research is surely helpful though if it’s being used to make better-understood things, rather than enabling folk to make worse-understood more-powerful things.
I think moving in the direction of “insights are shared with groups the researcher trusts” should broadly help with this.
I’m perhaps misusing “publish” here, to refer to “putting stuff on the internet” and “raising awareness of the work through company Twitter” and etc.
I mostly meant to say that, as I see it, too many things that shouldn’t be published are being published, and the net effect looks plausibly terrible with little upside (though not much has happened yet in either direction).
The transformer circuits work strikes me this way, as does a bunch of other work.
Also, I’m grateful to know your read! I’m broadly interested to hear this and other raw viewpoints, to get a sense of how things look to other people.
I mostly do just mean “keeping it within a single research group” in the absence of better ideas. And I don’t have a better answer, especially not for independent folk or small orgs.
I wonder if we need an arxiv or LessWrong clone where you whitelist who you want to discuss your work with. And some scheme for helping independents find each other, or find existing groups they trust. Maybe with some “I won’t use this for capabilities work without the permission of the authors” legal docs as well.
This isn’t something I can visualize working, but maybe it has components of an answer.
I don’t think that the interp team is a part of Anthropic just because they might help with a capabilities edge; seems clear they’d love the agenda to succeed in a way that leaves neural nets no smarter but much better understood. But I’m sure that it’s part of the calculus that this kind of fundamental research is also worth supporting because of potential capability edges. (Especially given the importance of stuff like figuring out the right scaling laws in the competition with OpenAI.)
(Fwiw I don’t take issue with this sort of thing, provided the relationship isn’t exploitative. Like if the people doing the interp work have some power/social capital, and reason to expect derived capabilities to be used responsibly.)
There’s definitely a whole question about what sorts of things you can do with LLMs and how dangerous they are and whatnot.
This post isn’t about that though, and I’d rather not discuss that here. Could you instead ask this in a top level post or question? I’d be happy to discuss there.
To throw in my two cents, I think it’s clear that whole classes of “mechanistic interpretability” work are about better understanding architectures in ways that, if the research is successful, make it easier to improve their capabilities.
And I think this points strongly against publishing this stuff, especially if the goal is to “make this whole field more prestigious real quick”. Insofar as the prestige is coming from folks who work on AI capabilities, that’s drinking from a poisoned well (since they’ll grant the most prestige to the work that helps them accelerate).
One relevant point I don’t see discussed is that interpretability research is trying to buy us “slack”, but capabilities research consumes available “slack” as fuel until none is left.
What do I mean by this? Sometimes we do some work and are left with more understanding and grounding about what our neural nets are doing. The repeated pattern then seems to be that this helps someone design a better architecture or scale things up, until we’re left with a new more complicated network. Maybe because you helped them figure out a key detail about gradient flow in a deep network, or let them quantize the network better so they can run things faster, or whatnot.
Idk how to point at this thing properly, my examples aren’t great. I think I did a better job talking about this over here on twitter recently, if anyone is interested.
But anyhow I support folks doing their research without broadcasting their ideas to people who are trying to do capabilities work. It seems nice to me if there was mostly research closure. And I think I broadly see people overestimating the benefits of publishing their work relative to keeping it within a local cluster.
Thinking about maximization and corrigibility
“We are not currently training GPT-5. We’re working on doing more things with GPT-4.” – Sam Altman at MIT
Count me surprised if they’re not working on GPT-5. I wonder what’s going on with this?
I saw rumors that this is because they’re waiting on supercomputer improvements (H100s?), but I would have expected at least early work like establishing their GPT-5 scaling laws and whatnot. In which case perhaps they’re working on it, just haven’t started what is considered the main training run?
I’m interested to know if Sam said any other relevant details in that talk, if anyone knows.
Seems right, oops! A5 is here saying that if any part of my preference ordering is flat it had better stay flat!
I think I can repair my counterexample, but it looks like you’ve already found your own.
No on Q4? I think Alex’s counterexample applies to Q4 as well.
(EDIT: Scott points out I’m wrong here, Alex’s counterexample doesn’t apply, and mine violates A5.)
In particular I think A4 and A5 don’t imply anything about the rate of change as we move between lotteries, so we can have movements too sharp to be concave. We only have quasi-concavity.
My version of the counterexample: you have two outcomes x and y, we prefer any lottery with p(x) ≤ 1/2 equally, and we otherwise prefer higher p(x).
If you give me a corresponding u, it must satisfy u(½x + ½y) = u(y) (and u(x) > u(y)), but concavity demands that u(½x + ½y) ≥ ½u(x) + ½u(y), which in this case means u(y) ≥ u(x), a contradiction.
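Spelling the clash out as one chain (same x, y, u):

\[
u(y) \;=\; u\!\left(\tfrac{1}{2}x + \tfrac{1}{2}y\right) \;\ge\; \tfrac{1}{2}u(x) + \tfrac{1}{2}u(y) \;>\; u(y),
\]

where the equality comes from the flat region \(p(x) \le \tfrac{1}{2}\), the first inequality is concavity, and the strict inequality uses \(u(x) > u(y)\).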
yep!
I kinda reject the energy of the hypothetical? But I can speak to some things I wish I saw OpenAI doing:
Having some internal sense amongst employees about whether they’re doing something “good” given the stakes, like Google’s old “don’t be evil” thing. Have a culture of thinking carefully about things and managers taking considerations seriously, rather than something more like management trying to extract as much engineering as quickly as possible without “drama” getting in the way.
(Perhaps they already have a culture like this! I haven’t worked there. But my prediction is that it is not, and the org has a more “extractive” relationship to its employees. I think that this is bad, causes working toward danger, and exacerbates bad outcomes.)
To the extent that they’re trying to have the best AGI tech in order to provide “leadership” of humanity and AI, I want to see them be less shady / marketing / spreading confusion about the stakes.
They worked to pervert the term “alignment” to be about whether you can extract more value from their LLMs, and to distract from the idea that we might make digital minds that are copyable and improvable, while also large and hard to control. (While pushing directly on AGI designs that have the “large and hard to control” property, which I guess they’re denying is a mistake, but anyhow.)
I would like to see fewer things perverted/distracted/confused; it’s, according-to-me, entirely possible for them to state more clearly what the end of all this is, and to be more explicit about how they’re trying to lead the effort.
Reconcile with Anthropic. There is no reason, speaking on humanity’s behalf, to risk two different trajectories of giant LLMs built with subtly different technology, while dividing up the safety know-how amidst both organizations.
Furthermore, I think OpenAI kind-of stole/appropriated the scaling idea from the Anthropic founders, who left when they lost a political battle about the direction of the org. I suspect it was a huge fuck-you when OpenAI tried to spread this secret to the world, and continued to grow their org around it, while ousting the originators. If my model is at-all-accurate, I don’t like it, and OpenAI should look to regain “good standing” by acknowledging this (perhaps just privately), and looking to cooperate.
Idk, maybe it’s now legally impossible/untenable for the orgs to work together, given the investors or something? Or given mutual assumption of bad-faith? But in any case this seems really shitty.
I also mentioned some other things in this comment.