LessWrong dev & admin as of July 5th, 2022.
RobertM
Ah, yep, I read it at the time; this has just been on my mind lately, and sometimes it bears repeating the obvious.
This was a post written as part of the LessOnline puzzle hunt. It should’ve been unlisted; it seems that, by accident, it wasn’t.
I have a pretty huge amount of uncertainty about the distribution of how hypothetical future paradigms score on those (and other) dimensions, but there does seem to be room for it to be worse, yeah.
ETA: (To be clear, something that looks relevantly like today’s LLMs while still having superhuman scientific R&D capabilities seems quite scary and I think if we find ourselves there in, say, 5 years, then we’re pretty fucked. I don’t want anyone to think that I’m particularly optimistic about the current paradigm’s safety properties.)
If your model says that LLMs are unlikely to scale up to ASI, this is not sufficient for low p(doom). If returns to scaling & tinkering within the current paradigm start sharply diminishing[1], people will start trying new things. Some of them will eventually work.
1. ^ Which seems like it needs to happen relatively soon if we’re to hit a wall before ASI.
This comment doesn’t seem to be responding to the contents of the post at all, nor does it seem to understand very basic elements of the relevant worldview it’s trying to argue against (i.e. “which are the countries you would probably least want to be in control of AGI”; no, it doesn’t matter which country ends up building an ASI, because the end result is the same).
It also tries to leverage arguments that depend on assumptions not shared by MIRI (such as that research on stronger models is likely to produce enough useful output to avert x-risk, or that x-risk is necessarily downstream of LLMs).
But surely “saying nearly nothing” ranks among the worst-possible options for being seen as a “systemic cooperator”?
Unfortunately, it looks like non-disparagement clauses aren’t unheard of in general releases:
Release Agreements commonly include a “non-disparagement” clause – in which the employee agrees not to disparage “the Company.”
The release had a very broad definition of the company (including officers, directors, shareholders, etc.), but a fairly reasonable scope of the claims I was releasing. So far, so good. But then it included a general non-disparagement provision, which basically said I couldn’t say anything bad about the company, which, by itself, is also fairly typical and reasonable.
Given the way the contract is worded it might be worth checking whether executing your own “general release” (without a non-disparagement agreement in it) would be sufficient, but I’m not a lawyer and maybe you need the counterparty to agree to it for it to count.
And as a matter of industry practice, this is of course an extremely non-standard requirement for retaining vested equity (or equity-like instruments), whereas it’s pretty common when receiving an additional severance package. (Though even in those cases I haven’t heard of any such non-disparagement agreement that was itself covered by a non-disclosure agreement… but would I have?)
This seems to be arguing that the big labs are doing some obviously-inefficient R&D in terms of advancing capabilities, and that government intervention risks accidentally redirecting them towards much more effective R&D directions. I am skeptical.
If such training runs are not dangerous then the AI safety group loses credibility.
It could give a false sense of security when a different arch requiring much less training appears and is much more dangerous than the largest LLM.
It removes the chance to learn alignment and safety details from such large LLMs.
I’m not here for credibility. (Also, this seems like it only happens, if it happens, after the pause ends. Seems fine.)
I’m generally unconvinced by arguments of the form “don’t do [otherwise good thing x]; it might cause people to let their guard down and get hurt by [bad thing y]” that don’t explain why they aren’t a fully-general counterargument.
If you think LLMs are hitting a wall and aren’t likely to ever lead to dangerous capabilities then I don’t know why you expect to learn anything particularly useful from the much larger LLMs that we don’t have yet, but not from those we do have now.
This seems non-responsive to arguments already in my post:
If we institute a pause, we should expect to see (counterfactually) reduced R&D investment in improving hardware capabilities, reduced investment in scaling hardware production, reduced hardware production, reduced investment in research, reduced investment in supporting infrastructure, and fewer people entering the field.
We ran into a hardware shortage during a period of time where there was no pause, which is evidence that the hardware manufacturer was behaving conservatively. If they’re behaving conservatively during a boom period like this, it’s not crazy to think they might be even more conservative in terms of novel R&D investment & ramping up manufacturing capacity if they suddenly saw dramatically reduced demand from their largest customers.
For example, suppose we pause now for 3 years and during that time NVIDIA releases the RTX 5090, 6090, and 7090, which are produced using TSMC’s 3nm, 2nm, and 10a processes.
This and the rest of your comment seem to have ignored the rest of my post (see: multiple inputs to progress, all of which seem sensitive to “demand” from e.g. AGI labs), so I’m not sure how to respond. Do you think NVIDIA’s planning is totally decoupled from anticipated demand for their products? That seems kind of crazy, but that’s the scenario you seem to be describing. Big labs are just going to continue to increase their willingness-to-spend along a smooth exponential for as long as the pause lasts? What if the pause lasts 10 years?
If you think my model of how inputs to capabilities progress are sensitive to demand for those inputs from AGI labs is wrong, then please argue so directly, or explain how your proposed scenario is compatible with it.
Against “argument from overhang risk”
Yeah, “they’re following their stated release strategy for the reasons they said motivated that strategy” also seems likely to share some responsibility. (I might not think those reasons justify that release strategy, but that’s a different argument.)
Yeah, I agree that it’s too early to call it re: hitting a wall. I also just realized that releasing 4o for free might be some evidence in favor of 4.5/5 dropping soon-ish.
Vaguely feeling like OpenAI might be moving away from the GPT-N+1 release model, for some combination of “political/frog-boiling” reasons and “scaling actually hitting a wall” reasons. Seems relevant to note, since in the worlds where they hadn’t been drip-feeding people incremental releases of slight improvements over the original GPT-4 capabilities, and instead just dropped GPT-5 (and it was as much of an improvement over 4 as 4 was over 3, or close), that might have prompted people to do an explicit orientation step. As it is, I expect less of that kind of orientation to happen. (Though maybe I’m speaking too soon and they will drop GPT-5 on us at some point, and it’ll still manage to be a step-function improvement over whatever the latest GPT-4* model is at that point.)
It’s not obvious to me why training LLMs on synthetic data produced by other LLMs wouldn’t work (up to a point). Under the model where LLMs are gradient-descending their way into learning algorithms that predict tokens that are generated by various expressions of causal structure in the universe, tokens produced by other LLMs don’t seem redundant with respect to the data used to train those LLMs. LLMs seem pretty different from most other things in the universe, including the data used to train them! It would surprise me if the algorithms that LLMs developed to predict non-LLM tokens were perfectly suited for predicting other LLM tokens “for free”.
EDIT: looks like habryka got there earlier and I didn’t see it.
https://www.lesswrong.com/posts/zXJfH7oZ62Xojnrqs/#sLay9Tv65zeXaQzR4
Intercom is indeed hidden on mobile (since it’d be pretty intrusive at that screen size).
Ah, does look like Zach beat me to the punch :)
I’m also still moderately confused, though I’m not that confused about labs not speaking up—if you’re playing politics, then not throwing the PM under the bus seems like a reasonable thing to do. Maybe there’s a way to thread the needle of truthfully rebutting the accusations without calling the PM out, but idk. Seems like it’d be difficult if you weren’t either writing your own press release or working with a very friendly journalist.
I hadn’t, but I just did and nothing in the article seems to be responsive to what I wrote.
Amusingly, not a single news source I found reporting on the subject has managed to link to the “plan” that the involved parties (countries, companies, etc) agreed to.
Nothing in that summary affirmatively indicates that companies agreed to submit their future models to pre-deployment testing by the UK AISI. One might even say that it seems carefully worded to avoid explicitly pinning the companies down like that.
EDIT: I believe I’ve found the “plan” that Politico (and other news sources) managed to fail to link to, maybe because it doesn’t seem to contain any affirmative commitments by the named companies to submit future models to pre-deployment testing by UK AISI.
I’ve seen a lot of takes (on Twitter) recently suggesting that OpenAI and Anthropic (and maybe some other companies) violated commitments they made to the UK’s AISI about granting them access for e.g. predeployment testing of frontier models. Is there any concrete evidence about what commitment was made, if any? The only thing I’ve seen so far is a pretty ambiguous statement by Rishi Sunak, who might have had some incentive to claim more success than was warranted at the time. If people are going to breathe down the necks of AGI labs about keeping to their commitments, they should be careful to only do it for commitments they’ve actually made, lest they weaken the relevant incentives. (This is not meant to endorse AGI labs behaving in ways which cause strategic ambiguity about what commitments they’ve made; that is also bad.)
The WSJ article says the following:
I don’t think it’s fair to say that claim 5 was knowably, obviously false at the time it was made, based on this. The above two paragraphs really sound like “Sam Altman was fired from YCombinator”. Now, it’s possible that the journalist who wrote this was engaging in selective quotation and the non-quoted sections are deliberately misleading. This is compatible with PG’s recent clarification on Twitter. But I think it’d be stranger to read those two paragraphs and then believe that he wasn’t fired, than to believe that he was fired. In isolation, PG’s rejection of the word “fired” because “he agreed immediately” is nonsensical. Agreeing to be fired is still being fired.
I still have substantial uncertainty about what happened here. “The firm’s leaders asked him to resign” is a pretty straightforward claim about reality written in the journalist’s voice, and I would be somewhat surprised if the journalist knew that Paul & Jessica had (claimed to have) presented Sam with the “choose one” option and decided to describe that as “asked him to resign”. That’s less “trying to give people a misleading impression” and more “lying about an obvious matter of fact”.