yams
Announcing: MIRI Technical Governance Team Research Fellowship
The answer is somewhat complicated, and I’m not sure ‘know’ is quite the right bar.
contractor verification is a properly hard problem for boring bureaucratic reasons; it’s very hard to know that someone is who they say they are, and it’s very hard to guarantee that you’ll extract the value you’re asking for at scale (‘scalable oversight’ is actually a good model for intuitions here). I have:
1. Been part of surveys for services like the above
2. Been a low-level contractor at various mid-sized startups (incl. OAI in 2020)
3. Managed a team of hundreds of contractors doing tens of thousands of tasks per month (it was really just me and one other person watching them)
4. Thought quite a lot about designing better systems for this (very hard!!!)
5. Noted the lack of especially-convincing client-facing documentation / transparency from e.g. Prolific
The kinds of guarantees I would want here are like “We ourselves verify the identities of contractors to make sure they’re who they say they are. We ourselves include comprehension-testing questions that are formulated to be difficult to cheat alongside every exit survey. etc etc”

Most services they might pay to do things like this are Bad (but they’re B2B and mostly provide a certification/assurance to the end user, so the companies themselves are not incentivized to make sure they’re good).
Feel free to ask more questions; it’s kind of late and I’m tired; this is the quick-babble version.
EDIT: they’re not useless. They’re just worse than we all wish they’d be. To the best of my knowledge, this was a major motivator for Palisade in putting together their own message testing pipeline (an experience which hasn’t been written about yet because uh… I haven’t gotten to it)
Fwiw I don’t take Prolific and similar services to be especially reliable ways to get information about this sort of thing. It’s true that they’re among the best low-medium effort ways to get this information, but the hypothetical at the top of this post implies that they’re 1:1 with natural settings, which is false.
Thanks as always to Zac for continuing to engage on things like this.
Tiny nit for my employer: should probably read “including some* MIRI employees”
*like any org, MIRI is made up of people who have significant disagreements with one another on a wide variety of important matters.
More than once I’ve had it repeated to me that ‘MIRI endorses y’, and tracked the root of the claim to a lack of this kind of qualifier. I know you mean the soft version and don’t take you to be over-claiming; unfortunately, experience has shown it’s worth clarifying, even though for most claims in most contexts I’d take your framing to be sufficiently clear.
I’m struck by how many of your cruxes seem like things that it would actually just be in the hands of the international governing body to control. My guess is that if DARPA has a team of safety researchers, and they go to the international body and say ‘we’re blocked by this set of experiments* that takes a large amount of compute; can we please have more compute?’, then the international body gets some panel of independent researchers to confirm that this is true and that the only solution is more compute for that particular group of researchers, and it commissions a datacenter or something so that the research can continue.
Like, it seems obviously true to me that people (especially in government/military) will continue working on the problem at all, and that access to larger amounts of resources for doing that work is a matter of petitioning the body. It feels like your plan is built around facilitating this kind of carveout, and the MIRI plan is built around treating it as the exception that it is (and prioritizing gaining some centralized control over AI as a field over guaranteeing to-me-implausible rapid progress toward the best possible outcomes).
*which maybe is ‘building automated alignment researchers’, but better specified and less terrifying
“AIs with reliable 1-month time horizons will basically not be time-horizon-limited in any way that humans aren’t”
In this statement, are you thinking about time horizons as operationalized / investigated in the METR paper, or are you thinking about the True Time Horizon?
A group of researchers has released the Longitudinal Expert AI Panel, soliciting and collating forecasts regarding AI progress, adoption, and regulation from a large pool of both experts and non-experts.
Didn’t disagree vote myself, but I think there’s a linguistic pattern of ‘just asking questions’ that is used to signal disagreement while also evading interrogation yourself. At first glance, your comment may be reading that way to others, who then hastily smash the disagree button to signal disagreement with the position they think you’re implying (even though you were really genuinely just asking questions).
I see this happen a lot, where folks mismodel someone’s epistemic state or tacking when, really, the person is just confused and trying to explicate the conditions of their confusion. In the broader world, claiming to be confused about something is a common tactic for trying to covertly convince someone of your position.
Duncan Sabien once ran the inverse experiment. He made a separate account to see how his posts would do without his reputation. The account only has one post still up, but iirc there used to be many more (tens). They performed similarly well to posts under his own name. Cool idea!
[plausibly I’m getting parts of the story wrong and someone who was around then will correct me]
I think that I’d do this math by net QALYs and not net deaths. My guess is doing it that way may actually change your result.
I’m not trying to avoid dying; I’m trying to steer toward living.
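As a toy illustration of why switching the metric can flip the comparison (a minimal sketch with entirely made-up numbers, not figures from the post):

```python
# Toy comparison (all numbers invented for illustration): ranking two
# hypothetical interventions by deaths averted vs. by QALYs saved.

interventions = {
    "A": {"deaths_averted": 10, "qalys_per_person": 2},   # e.g. deaths averted near end of life
    "B": {"deaths_averted": 3,  "qalys_per_person": 40},  # e.g. deaths averted early in life
}

for name, x in interventions.items():
    net_qalys = x["deaths_averted"] * x["qalys_per_person"]
    print(f"{name}: {x['deaths_averted']} deaths averted, {net_qalys} QALYs saved")

# By net deaths, A beats B (10 vs. 3); by net QALYs, B beats A (120 vs. 20).
```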
Yup! I just think there’s an unbounded way that a reader could view his comment: “oh! There are no current or future consequences at OAI for those who sign this statement!”
…and I wanted to make the bound explicit: real protections extending into the future can’t plausibly be offered by anyone. Surely most OAI researchers are thinking ahead enough to feel the pressure of this bound (whether or not it keeps them from signing).
I’m still glad he made this comment, but the Strong Version is obviously beyond his reach to assure.
This is good!
My guess is that their hesitance is also linked to potential future climates, though, and not just the current climate, so I don’t expect additional signees to come forward in response to your assurances.
I think my crux is ‘how much does David’s plan resemble the plans the labs actually intend to pursue?’
I read Nate and Eliezer as baking in ‘if the labs do what they say they plan to do, and update as they will predictably update based on their past behavior and declared beliefs’ to all their language about ‘the current trajectory’ etc etc.
I don’t think this resolves ‘is the title literally true’ in a different direction if it’s the only crux, and I agree that this should have been spelled out more explicitly in the book (e.g. ‘in detail, why are the authors pessimistic about current safety plans’) from a pure epistemic standpoint (although I think it was reasonable to omit from a rhetorical standpoint, given the target audience), and in various Headline Sentences throughout the book, and in The Problem.
One generous way to read Nate and Eliezer here is to say that ‘current techniques’ is itself intended to bake in ‘plans the labs currently plan to pursue’. I was definitely reading it this way, but I think it’s reasonable for others not to. If we read it that way, and take David’s plan above to be sufficiently dissimilar from real lab plans, then I think the title’s literal interpretation goes through.
[your post has updated me from ‘the title is literally true’ to ‘the title is basically reasonable but may not be literally true depending on how broadly we construe various things’, which is a significantly less comfortable position!]
I want to vouch for Eli as a great person to talk with about this. He has been around a long time, has done great work on a few different sides of the space, and is a terrific communicator with a deep understanding of the issues.
He’s run dozens of focus-group style talks with people outside the space, and is perhaps the most practiced interlocutor for those with relatively low context.
[in case OP might think of him as some low-authority rando or something and not accept the offer on that basis]
You’re disagreeing with a claim I didn’t intend to make.
I was unclear in my language and shouldn’t have used ‘contains’. Sorry! Maybe ‘relaying’ would have avoided this confusion.
I don’t think you’re objecting to the broader point except by saying ‘neuralese requires very high bandwidth’, but LLMs can make a lot of potential associations in processing a single token (which is, potentially, an absolute ton of bandwidth).
@StanislavKrym can you explain your disagree vote?
Strings of numbers are shown to transmit a fondness for owls. Numbers have no semantic content related to owls. This seems to point to ‘tokens containing much more information than their semantic content’, doesn’t it?
Doesn’t this have implications for the feasibility of neuralese? I’ve heard some claims that tokens are too low-bandwidth for neuralese to work for now, but this seems to point at tokens containing (edit: I should have said something like ‘relaying’ or ‘invoking’ rather than ‘containing’) much more information than their semantic content.
I’m not sure how useful I find hypotheticals of the form ‘if Claude had its current values [to the extent we can think of Claude as a coherent enough agent to have consistent values, etc etc], but were much more powerful, what would happen?’ A more powerful model would be likely to have/evince different values from a less powerful model, even if they were similar architectures subjected to similar training schemes. Less powerful models also don’t need to be as well-aligned in practice, if we’re thinking of each deployment as a separate decision-point, since they’re of less consequence.
I understand that you’re in-part responding to the hypothetical seeded by Nina’s rhetorical line, but I’m not sure how useful it is when she does it, either.
I don’t think the quote from Ryan constitutes a statement on his part that current LLMs are basically aligned. He’s quoting a hypothetical speaker to illustrate a different point. It’s plausible to me that you can find a quote from him that is more directly in the reference class of Nina’s quote, but as-is the inclusion of Ryan feels a little unfair.
Do you think rationalists use ‘insane’ and ‘crazy’ more than the general population, and/or in a different way than the general population? (e.g. definition 3 when you google ‘insane definition’)