My guess is more that we were talking past each other than that his intended claim was false/unrepresentative. I do think it’s true that EAs mostly talk about people doing gain-of-function research as the problem, rather than about the insufficiency of the safeguards; I just think the latter is why the former is a problem.
Adam Scholl
There have been frequent and severe biosafety accidents for decades, many of which occurred at labs which were attempting to follow BSL protocol.
The EA cause area around biorisk is mostly happy to rely on those levels
I disagree—I think nearly all EAs focused on biorisk think gain-of-function research should be banned, since the risk management framework doesn’t work well enough to drive the expected risk below the expected benefit. If our framework for preventing lab accidents worked as well as e.g. our framework for preventing plane accidents, I think few EAs would worry much about GoF.
(Obviously there are non-accidental sources of biorisk too, for which we can hardly blame the safety measures; but I do think the measures work sufficiently poorly that even accident risk alone would justify a major EA cause area).
Man, I can’t believe there are no straightforwardly excited comments so far!
Personally, I think an institution like this is sorely needed, and I’d be thrilled if Lightcone built one. There are remarkably few people in the world who are trying to think carefully about the future, and fewer still who are trying to solve alignment; institutions like this seem like one of the most obvious ways to help them.
Your answer might also be “I, Oliver, will play this role”. My gut take would be excited for you to be like one of three people in this role (with strong co-leads, who are maybe complementary in the sense that they’re strong at some styles of thinking you don’t know exactly how to replicate), and kind of weakly pessimistic about you doing it alone. (It certainly might be that that pessimism is misplaced.)
For what it’s worth, my guess is that your pessimism is misplaced. Oliver certainly isn’t as famous as Bostrom, so I doubt he’d be a similar “beacon.” But I’m not sure a beacon is needed—historically, plenty of successful research institutions (e.g. Bell Labs, IAS, the Royal Society in most eras) weren’t led by their star researchers, and the track record of those that were strikes me as pretty mixed.
Oliver spends most of his time building infrastructure for researchers, and I think he’s become quite good at it. For example, you are reading this comment on (what strikes me as) rather obviously the best-designed forum on the internet; I think the review books LessWrong made are probably the second-best designed books I’ve seen, after those from Stripe Press; and the Lighthaven campus is an exceedingly nice place to work.
Personally, I think Oliver would probably be my literal top choice to head an institution like this.
I ask partly because I personally would be more excited about a version of this that wasn’t ignoring AGI timelines, but I think a version of this that’s not ignoring AGI timelines would probably be quite different from the intellectual spirit/tradition of FHI.
This frame feels a bit off to me. Partly because I don’t think FHI was ignoring timelines, and partly because I think their work has proved quite useful already—mostly by substantially improving our concepts for reasoning about existential risk.
But also, the portfolio of alignment research with maximal expected value need not necessarily perform well in the most likely particular world. One might imagine, for example—and indeed this is my own bet—that the most valuable actions we can take will only actually save us in the subset of worlds in which we have enough time to develop a proper science of alignment.
I agree metrology is cool! But I think units are mostly helpful for engineering insofar as they reflect fundamental laws of nature—see e.g. the metric units—and we don’t have those yet for AI. Until we do, I expect attempts to define them will be vague, high-level descriptions more than deep scientific understanding.
(And I think the former approach has a terrible track record, at least when used to define units of risk or controllability—e.g. BSL levels, which have failed so consistently and catastrophically they’ve induced an EA cause area, and which for some reason AI labs are starting to emulate).
I assumed “anyone” was meant to include OpenAI—do you interpret it as just describing novel entrants? If so I agree that wouldn’t be contradictory, but it seems like a strange interpretation to me in the context of a pitch deck asking investors for a billion dollars.
I agree it’s common for startups to somewhat oversell their products to investors, but I think it goes far beyond “somewhat”—maybe even beyond the bar for criminal fraud, though I’m not sure—to tell investors you’re aiming to soon get “too far ahead for anyone to catch up in subsequent cycles,” if your actual plan is to avoid getting meaningfully ahead at all.
“Diverting money” strikes me as the wrong frame here. Partly because I doubt this actually was the consequence—i.e., I doubt OpenAI etc. had a meaningfully harder time raising capital because of Anthropic’s raise—but also because it leaves out the part where this purported desirable consequence was achieved via (what seems to me like) straightforward deception!
If indeed Dario told investors he hoped to obtain an insurmountable lead soon, while telling Dustin and others that he was committed to not gaining any meaningful lead, then it sure seems like one of those claims was a lie. And by my ethical lights, this seems like a horribly unethical thing to lie about, regardless of whether it somehow caused OpenAI to have less money.
Huh, I’ve also noticed a larger effect from indoors/outdoors than seems reflected by CO2 monitors, and that I seem smarter when it’s windy, but I never thought of this hypothesis; it’s interesting, thanks.
Yeah, seems plausible; but either way it seems worth noting that Dario left Dustin, Evan and Anthropic’s investors with quite different impressions here.
It seems Dario left Dustin Moskovitz with a different impression—that Anthropic had a policy/commitment to not meaningfully advance the frontier:
Interesting, I checked LW/Google for the keyword before writing and didn’t see much, but maybe I missed it; it does seem like a fairly natural riff, e.g. someone wrote a similar post on EA forum a few months later.
I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).
I do have substantial credence (~15%?) on AGI being built by hobbyists/small teams. I definitely think it’s more likely to be built by huge teams with huge computers, like most recent advances. But my guess is that physics permits vastly more algorithmic efficiency than we’ve discovered, and it seems pretty plausible to me—especially in worlds with longer timelines—that some small group might discover enough of it in time.
Nonetheless, I acknowledge that my disagreement with these proposals often comes down to a more fundamental disagreement about the difficulty of alignment, rather than any beliefs about the social response to AI risk.
My guess is that this disagreement (about the difficulty of alignment) also mostly explains the disagreement about humanity’s relative attentiveness/competence. If the recent regulatory moves seem encouraging to you, I can see how that would seem like counterevidence to the claim that governments are unlikely to help much with AI risk.
But personally it doesn’t seem like much counterevidence, because the recent moves haven’t seemed very encouraging. They’ve struck me as slightly encouraging, insofar as they’ve caused me to update slightly upwards that governments might eventually coordinate to entirely halt AI development. But absent the sort of scientific understanding we typically have when deploying high-risk engineering projects—where, e.g., we can answer at least most of the basic questions about how the systems work, and generally understand how to predict in advance what will happen if we do various things—little short of a Stop is going to reduce my alarm much.
AI used to be a science. In the old days (back when AI didn’t work very well), people were attempting to develop a working theory of cognition.
Those scientists didn’t succeed, and those days are behind us.
I claim many of them did succeed, for example:
George Boole invented boolean algebra in order to establish (part of) a working theory of cognition—the book where he introduces it is titled “An Investigation of the Laws of Thought,” and his stated aim was largely to help explain how minds work.[1]
Ramón y Cajal discovered neurons in the course of trying to better understand cognition.[2]
Turing described his research as aimed at figuring out what intelligence is, what it would mean for something to “think,” etc.[3]
Shannon didn’t frame his work this way quite as explicitly, but information theory is useful because it characterizes constraints on the transmission of thoughts/cognition between people, and I think he was clearly generally interested in figuring out what was up with agents/minds—e.g., he spent time trying to design machines to navigate mazes, repair themselves, replicate, etc.
Geoffrey Hinton initially became interested in neural networks because he was trying to figure out how brains worked.
Not all of these scientists thought of themselves as working on AI, of course, but I do think many of the key discoveries which make modern AI possible—boolean algebra, neurons, computers, information theory, neural networks—were developed by people trying to develop theories of cognition.
[1]
The opening paragraph of Boole’s book: “The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; to make that method itself the basis of a general method for the application of the mathematical doctrine of Probabilities; and, finally, to collect from the various elements of truth brought to view in the course of these inquiries some probable intimations concerning the nature and constitution of the human mind.”
[2]
From Cajal’s autobiography: “… the problem attracted us irresistibly. We saw that an exact knowledge of the structure of the brain was of supreme interest for the building up of a rational psychology. To know the brain, we said, is equivalent to ascertaining the material course of thought and will, to discovering the intimate history of life in its perpetual duel with external forces; a history summarized, and in a way engraved in the defensive neuronal coordinations of the reflex, of instinct, and of the association of ideas” (305).
[3]
The opening paragraph of Turing’s paper, Computing Machinery and Intelligence: “I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.”
But it’s not just language any longer either, with image inputs, etc… all else equal I’d prefer a name that emphasized how little we understand how they work (“model” seems to me to connote the opposite), but I don’t have any great suggestions.
I think we must still be missing each other somehow. To reiterate, I’m aware that there is also non-accidental biorisk, for which one can hardly blame the safety measures. But there is substantial accident risk too, since labs often fail to contain pathogens even when they’re trying to.