Erich Grunewald
This has a mighty “It’s Decorative Gourd Season, Motherfuckers” energy.
Kelsey Piper wrote this comment on the EA Forum:
It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here’s what I came away with:
On December 15, Alice states that she’d had very little to eat all day, that she’d repeatedly tried and failed to find a way to order takeout to their location, and tries to ask that people go to Burger King and get her an Impossible Burger which in the linked screenshots they decline to do because they don’t want to get fast food. She asks again about Burger King and is told it’s inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from the restaurant they went to. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that ‘they have some salads’ but nothing else for her. She assures him that it’s fine to not get her anything.
It seems completely reasonable that Alice remembers this as ‘she was barely eating, and no one in the house was willing to go out and get her nonvegan foods’ - after all, the end result of all of those message exchanges was no food being obtained for Alice and her requests for Burger King being repeatedly deflected with ‘we are down to get anything that isn’t fast food’ and ‘we are down to go anywhere within a 12 min drive’ and ‘our only criteria is decent vibe + not fast food’, after which she fails to find a restaurant meeting those (I note, kind of restrictive if not in a highly dense area) criteria and they go somewhere without vegan options and don’t get her anything to eat.
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice’s language throughout emphasizes how she’ll be fine, it’s no big deal, she’s so grateful that they tried (even though they failed and she didn’t get any food out of the 12⁄15 trip, if I understand correctly). I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people. But it doesn’t seem to me that Alice is lying to have experienced this as ‘she had covid, was barely eating, told people she was barely eating, and they declined to pick up Burger King for her because they didn’t want to go to a fast food restaurant, and instead gave her very limiting criteria and went somewhere that didn’t have any options she could eat’.
On December 16th it does look like they successfully purchased food for her.
My big takeaway from these exchanges is not that the Nonlinear team are heartless or insane people, but that this degree of professional and personal entanglement and dependence, in a foreign country, with a young person, is simply a recipe for disaster. Alice’s needs in the 12⁄15 chat logs are acutely not being met. She’s hungry, she’s sick, she conveys that she has barely eaten, she evidently really wants someone to go to BK and get an impossible burger for her, but (speculatively) because of this professional/personal entanglement, she lobbies for this only by asking a few times why they ruled out Burger King, and ultimately doesn’t protest when they instead go somewhere without food she can eat, assuring them it’s completely fine. This is also how I relate to my coworkers, tbh—but luckily, I don’t live with them and exclusively socialize with them and depend on them completely when sick!!
Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress and remembers it as ‘not getting her needs met despite trying quite hard to do so’, and the Nonlinear team remembers that they went out of their way that week to get Alice food—which is based on the logs from the 16th clearly true! But I don’t think I’d call Alice a liar based on reading this, because she did express that she’d barely eaten and request apologetically for them to go somewhere she could get vegan food (with BK the only option she’d been able to find) only for them to refuse BK because of the vibes/inconvenience.
To which Kat Woods replied:
We definitely did not fail to get her food, so I think there has been a misunderstanding—it says in the texts below that Alice told Drew not to worry about getting food because I went and got her mashed potatoes. Ben mentioned the mashed potatoes in the main post, but we forgot to mention it again in our comment—which has been updated
The texts involved on 12/15/21:
I also offered to cook the vegan food we had in the house for her.
I think that there’s a big difference between telling everyone “I didn’t get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!” and “they refused to get me vegan food and I barely ate for 2 days”.
Also, re: “because of this professional/personal entanglement”—at this point, Alice was just a friend traveling with us. There were no professional entanglements.
This is interesting, though I expect it’s an upper bound on Copilot productivity boosts:
Writing an HTTP server is a common, clearly defined task which has lots of examples online.
JavaScript is a popular language (meaning there’s lots of training data for Copilot).
I imagine Copilot is better for building a thing from the ground up, whereas the programming most programmers do most days consists of extending, modifying, and fixing existing stuff, meaning more thinking and reading and less typing. (A sketch of the kind of self-contained task in question follows below.)
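To make concrete what a “common, clearly defined task” looks like here, this is a minimal HTTP server of the kind described. It is my own sketch (in TypeScript/Node rather than plain JavaScript), not code from the study, and the port and response are arbitrary:

```typescript
// Minimal HTTP server sketch (illustrative only, not from the study).
// This is the sort of well-documented boilerplate, with many public
// examples online, that Copilot-style autocomplete handles especially well.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Respond to every request with a small JSON payload.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ method: req.method, path: req.url }));
});

// Port 3000 is an arbitrary choice for this example.
server.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```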
They often do things of the form “leaving out info, knowing this has misleading effects”
On that, here are a few examples of Conjecture leaving out info in what I think is a misleading way.
(Context: Control AI is an advocacy group, launched and run by Conjecture folks, that is opposing RSPs. I do not want to discuss the substance of Control AI’s arguments—nor whether RSPs are in fact good or bad, on which question I don’t have a settled view—but rather what I see as somewhat deceptive rhetoric.)
One, Control AI’s X account features a banner image with a picture of Dario Amodei (“CEO of Anthropic, $2.8 billion raised”) saying, “There’s a one in four chance AI causes human extinction.” That is misleading. What Dario Amodei has said is, “My chance that something goes really quite catastrophically wrong on the scale of human civilisation might be somewhere between 10-25%.” I understand that it is hard to communicate uncertainty in advocacy, but I think it would at least have been more virtuous to use the middle of that range (“one in six chance”), and to refer to “global catastrophe” or something rather than “human extinction”.
Two, Control AI writes that RSPs like Anthropic’s “contain wording allowing companies to opt-out of any safety agreements if they deem that another AI company may beat them in their race to create godlike AI”. I think that, too, is misleading. The closest thing Anthropic’s RSP says is:
However, in a situation of extreme emergency, such as when a clearly bad actor (such as a rogue state) is scaling in so reckless a manner that it is likely to lead to imminent global catastrophe if not stopped (and where AI itself is helpful in such defense), we could envisage a substantial loosening of these restrictions as an emergency response. Such action would only be taken in consultation with governmental authorities, and the compelling case for it would be presented publicly to the extent possible.
Anthropic’s RSP is clearly only meant to permit labs to opt out when any other outcome very likely leads to doom, and for this to be coordinated with the government, with at least some degree of transparency. The scenario is not “DeepMind is beating us to AGI, so we can unilaterally set aside our RSP”, but more like “North Korea is beating us to AGI, so we must cooperatively set aside our RSP”.
Relatedly, Control AI writes that, with RSPs, companies “can decide freely at what point they might be falling behind – and then they alone can choose to ignore the already weak” RSPs. But part of the idea with RSPs is that they are a stepping stone to national or international policy enforced by governments. For example, ARC and Anthropic both explicitly said that they hope RSPs will be turned into standards/regulation prior to the Control AI campaign. (That seems quite plausible to me as a theory of change.) Also, Anthropic commits to only updating its RSP in consultation with its Long-Term Benefit Trust (consisting of five people without any financial interest in Anthropic) -- which may or may not work well, but seems sufficiently different from Anthropic being able to “decide freely” when to ignore its RSP that I think Control AI’s characterisation is misleading. Again, I don’t want to discuss the merits of RSPs, I just think Control AI is misrepresenting Anthropic’s and others’ positions.
Three, Control AI seems to say that Anthropic’s advocacy for RSPs is an instance of safetywashing and regulatory capture. (Connor Leahy: “The primary aim of responsible scaling is to provide a framework which looks like something was done so that politicians can go home and say: ‘We have done something.’ But the actual policy is nothing.” And also: “The AI companies in particular and other organisations around them are trying to capture the summit, lock in a status quo of an unregulated race to disaster.”) I don’t know exactly what Anthropic’s goals are—I would guess that its leadership is driven by a complex mixture of motivations—but I doubt it is so clear-cut as Leahy makes it out to be.
To be clear, I think Conjecture has good intentions, and wants the whole AI thing to go well. I am rooting for its safety work and looking forward to seeing updates on CoEm. And again, I personally do not have a settled view on whether RSPs like Anthropic’s are in fact good or bad, or on whether it is good or bad to advocate for them – it could well be that RSPs turn out to be toothless, and would displace better policy – I only take issue with the rhetoric.
(Disclosure: Open Philanthropy funds the organisation I work for, though the above represents only my views, not my employer’s.)
Israel’s strategy since Hamas took the strip over in 2007 has been to try to contain it and keep it weak through periodic, limited confrontations (the so-called “mowing the lawn” doctrine), and **to try to economically develop the strip in order to give Hamas incentives to avoid confrontation**. While Hamas grew stronger, the general feeling was that the strategy worked and the last 15 years were not that bad.
I am surprised to read the bolded part! What actions have the Israeli government taken to develop Gaza, and did Gaza actually develop economically in that time? (That is not a rhetorical question—I know next to nothing about this.)
Looking quickly at some stats, real GDP per capita seems to have gone up a bit since 2007, but has declined since 2016, and its current figure ($5.6K in 2021) is lower than that of, e.g., Angola, Bangladesh, and Venezuela.
Qualitatively, the blockade seems to have been net negative for Gaza’s economic development. NYT writes:
The Palestinian territory of Gaza has been under a suffocating Israeli blockade, backed by Egypt, since Hamas seized control of the coastal strip in 2007. The blockade restricts the import of goods, including electronic and computer equipment, that could be used to make weapons and prevents most people from leaving the territory.
More than two million Palestinians live in Gaza. The tiny, crowded coastal enclave has a nearly 50 percent unemployment rate, and Gaza’s living conditions, health system and infrastructure have all deteriorated under the blockade.
But that is a news report, so we should take it with a grain of salt.
Well, it’s not like vegans/vegetarians are some tiny minority in EA. Pulling together some data from the 2022 ACX survey, people who identify as EA are about 40% vegan/vegetarian, and about 70% veg-leaning (i.e., vegan, vegetarian, or trying to eat less meat and/or offsetting meat-eating for moral reasons). (That’s conditioning on identifying as an LW rationalist, since anecdotally I think being vegan/vegetarian is somewhat less common among Bay Area EAs, and the ACX sample is likely to skew pretty heavily rationalist, but the results are not that different if you don’t condition.)
ETA: From the 2019 EA survey, 46% of EAs are vegan/vegetarian and 77% veg-leaning.
I think it is reasonable to treat this as a proxy for the state of the evidence, because lots of AI policy people specifically praised it as a good and thoughtful paper on policy.
All four of those AI policy people are coauthors on the paper—that does not seem like good evidence that the paper is widely considered good and thoughtful, and therefore a good proxy (though I think it probably is an ok proxy).
ARC & Open Philanthropy state in a press release “In a sane world, all AGI progress should stop. If we don’t, there’s more than a 10% chance we will all die.”
Could you spell out what you mean by “in a sane world”? I suspect a bunch of people you disagree with do not favor a pause due to various empirical facts about the world (e.g., there being competitors like Meta).
You may be referring to the BIG-bench canary string?
Some possibly relevant data:
As of 2020, anti-government protests in North America rose steadily from 2009 to 2017, when they peaked (at ~7x the 2009 number), and then began to decline (to ~4x the 2009 number in 2019).
Americans’ trust in the US government is very low (only ~20% say they trust the USG to do what’s right most of the time) and has been for over a decade. It seems to have locally peaked at ~50% after 9/11, and then declined to ~15% in 2010, after the financial crisis.
Congressional turnover rates have risen somewhat since the 90s, and are now at about the same level as in the 1970s.
Congress seems to have passed fewer bills each year since at least the mid-1970s (though the number apparently bottomed out in 2011, following the 2010 red wave midterms).
The volume of executive orders seems fairly stable or even declining since WWII.
DSA membership is down to 85K in 2023 from a peak of 95K in 2021. I can’t think of an analogous right-wing group that publishes membership numbers.
Great post!
But let’s back up and get some context first. The year was 1812, and mathematical tables were a thing.
What are mathematical tables, you ask? Imagine that you need to do some trigonometry. What’s `sin(79)`? Well, today you’d just look it up online. 15 years ago you’d probably grab your TI-84 calculator. But in the year 1812, you’d have to consult a mathematical table. Something like this:
They’d use computers to compute all the values and write them down in books. Just not the type of computers you’re probably thinking of. No, they’d use human computers.
Interestingly, humans having to do a lot of calculation manually was also how John Napier discovered the logarithm in the 17th century. The logarithm reduces the task of multiplication to the much faster and less error-prone task of addition. Of course that meant you also needed to get the logarithms of numbers, so it in turn spawned an industry of printed logarithmic tables (which Charles Babbage later tried to disrupt with his Difference Engine).
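To make the trick concrete, here is a small sketch of my own (not from the post): to multiply two numbers with a printed table, you look up the logarithm of each, add the two logarithms, and then look up the antilogarithm of the sum. The specific numbers below are arbitrary.

```typescript
// Multiplication reduced to addition via logarithms, mimicking what a
// table user would do by hand (the numbers are arbitrary examples).
const a = 4137;
const b = 2689;

const logA = Math.log10(a);                 // first "table lookup"
const logB = Math.log10(b);                 // second "table lookup"
const product = Math.pow(10, logA + logB);  // add, then one antilog lookup

console.log(Math.round(product));  // 11124393
console.log(a * b);                // 11124393 (exact check)
```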
I think your analysis makes sense if using a “center” name really should require you to have some amount of eminence or credibility first. I’ve updated a little bit in that direction now, but I still mostly think it’s just synonymous with “institute”, and on that view I don’t care if someone takes a “center” name (any more than if someone takes an “institute” name). It’s just, you know, one of the five or so nouns non-profits and think tanks use in their names (“center”, “institute”, “foundation”, “organization”, “council”, blah).
Or actually, maybe it’s more like I’m less convinced that there’s a common pool of social/political capital that CAIS is now spending from. I think the signed statement has resulted in other AI gov actors now having higher chances of getting things done. I think if the statement had been not very successful, it wouldn’t have harmed those actors’ ability to get things done. (Maybe if it was really botched it would’ve, but then my issue would’ve been with CAIS’s botching the statement, not with their name.)
I guess I also don’t really buy that using “center” spends from this pool (to the extent that there is a pool). What’s the scarce resource it’s using? Policy-makers’ time/attention? Regular people’s time/attention? Or do people only have a fixed amount of respect or credibility to accord various AI safety orgs? I doubt, for example, that other orgs lost out on opportunities to influence people, or inform policy-makers, due to CAIS’s actions. I guess what I’m trying to say is I’m a bit confused about your model!
Btw, in case it matters, the other examples I had in mind were Center for Security and Emerging Technology (CSET) and Centre for the Governance of AI (GovAI).
Here’s what I usually try when I want to get the full text of an academic paper:
Search Sci-Hub. Give it the DOI (e.g. https://doi.org/...) and then, if that doesn’t work, give it a link to the paper’s page at an academic journal (e.g. https://www.sciencedirect.com/science...).
Search Google Scholar. I can often just search the paper’s name, and if I find it, there may be a link to the full paper (HTML or PDF) on the right of the search result. The linked paper is sometimes not the exact version of the paper I am after—for example, it may be a manuscript version instead of the accepted journal version—but in my experience this is usually fine.
Search the web for `"name of paper in quotes" filetype:pdf`. If that fails, search for `"name of paper in quotes"` and look at a few of the results if they seem promising. (Again, I may find a different version of the paper than the one I was looking for, which is usually but not always fine; a small sketch assembling these search queries follows this list.)
Check the paper’s authors’ personal websites for the paper. Many researchers keep an up-to-date list of their papers with links to full versions.
Email an author to politely ask for a copy. Researchers spend a lot of time on their research and are usually happy to learn that somebody is eager to read it.
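For what it’s worth, the first few steps are easy to turn into a small helper that just assembles the relevant search URLs. This is a hypothetical sketch: the Sci-Hub domain changes over time, and the example title and DOI below are placeholders.

```typescript
// Build search URLs for a paper, following the steps above.
// "sci-hub.se" is a placeholder; Sci-Hub mirrors change over time.
function paperSearchUrls(title: string, doi?: string): string[] {
  const quoted = encodeURIComponent(`"${title}"`);
  const urls = [
    `https://scholar.google.com/scholar?q=${quoted}`,            // Google Scholar
    `https://www.google.com/search?q=${quoted}+filetype%3Apdf`,  // PDF-only web search
    `https://www.google.com/search?q=${quoted}`,                 // plain web search
  ];
  if (doi) {
    urls.unshift(`https://sci-hub.se/${doi}`);                   // Sci-Hub, if a DOI is known
  }
  return urls;
}

// Example with placeholder values.
console.log(paperSearchUrls("Some Paper Title", "10.1000/placeholder").join("\n"));
```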
Fwiw, there is also AI governance work that is neither policy nor lab governance, in particular trying to answer broader strategic questions that are relevant to governance, e.g., timelines, whether a pause is desirable, which intermediate goals are valuable to aim for, and how much computing power Chinese actors will have access to. I guess this is sometimes called “AI strategy”, but often the people/orgs working on AI governance also work on AI strategy, and vice versa, and they kind of bleed into each other.
How do you feel about that sort of work relative to the policy work you highlight above?
The only criticism of you and your team in the OP is that you named your team the “Center” for AI Safety, as though you had much history leading safety efforts or had a ton of buy-in from the rest of the field.
Fwiw, I disagree that “center” carries these connotations. To me it’s more like “place where some activity of a certain kind is carried out”, or even just a synonym of “institute”. (I feel the same about the other 5-10 EA-ish “centers/centres” focused on AI x-risk-reduction.) I guess I view these things more as “a center of X” than “the center of X”. Maybe I’m in the minority on this but I’d be kind of surprised if that were the case.
It’s not clear whether that will mean the end of humanity in the sense of the systems we’ve created destroying us. It’s not clear if that’s the case, but it’s certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.
It’s interesting that he seems so in despair over this now. To the extent that he’s worried about existential/catastrophic risks, I wonder if he is unaware of efforts to mitigate those, or if he is aware but thinks they are hopeless (or at least not guaranteed to succeed, which—fair enough). To the extent that he’s more broadly worried about human obsolescence (or anyway something more metaphysical), well, there are people trying to slow/stop AI, and others trying to enhance human capabilities—maybe he’s pessimistic about those efforts, too.
If only we could spread the meme of irresponsible Western powers charging head-first into building AGI without thinking through the consequences and how wise the Chinese regulation is in contrast.
That sort of strategy seems like it could easily backfire, where people only pick up the first part of that statement (“irresponsible Western powers charging head-first into building AGI”) and think “oh, that means we need to speed up”. Or maybe that’s what you mean by “if only”—that it’s hard to spread even weakly nuanced messages?
Yeah, but it’s not clear to me that they needed 8 months of safety research. If they had released it after 12 months, they could still have written that they’d been “evaluating, adversarially testing, and iteratively improving” it for 12 months. So it’s still not clear to me how much they delayed because they had to, versus how much (if at all) they delayed due to the forecasters and/or acceleration considerations.
But this itself is surprising: GPT-4 was “finished training” in August 2022, before ChatGPT was even released! I am unsure of what “finished training” means here—is the released model weight-for-weight identical to the 2022 version? Did they do RLHF since then?
I think “finished training” is the next-token prediction pre-training, and what they did since August is the fine-tuning and the RLHF + other stuff.
In summary, saying “accident” makes it sound like an unpredictable effect, instead of a painfully obvious risk that was not taken seriously enough.
In e.g. aviation, incidents leading to some sort of loss are always called “accidents”, even though we know there’s always some risk of planes crashing—it happens regularly. That doesn’t mean aircraft engineers and manufacturers don’t take the risks seriously enough, they usually take them very seriously indeed. It’s just that “accident” only means something like “unplanned and undesired loss” (IIRC that’s roughly how Nancy Leveson defines it).
Seems to me like (a), (b) and maybe (d) are true for the airplane manufacturing industry, to some degree.
But I’d still guess that flying is safer with substantial regulation than it would be in a counterfactual world without substantial regulation.
That would seem to invalidate your claim that regulation would make AI x-risk worse. Do you disagree with (1), and/or with (2), and/or see some important dissimilarities between AI and flight that make a difference here?