I’m a researcher at ACS working on understanding agency and optimisation, especially in the context of how AIs work and how society is going to work once AIs are everywhere.
Raymond Douglas
I think this kind of comes down to something about the relative complexity / feedback loops of the objective, and how distributed the optimisation is. Like, I don’t think there’s a dichotomy between “evolutionary dynamics” and “careful optimisation”—there’s this weird middle area that’s more like cultural selection.
So for example, human progress accelerated massively once we got into the cultural evolution loop, but most of the optimisation was still coming from selection rather than prediction—people didn’t know why their food preparation tricks and social norms worked, they just did. And the overall optimisation process was way more powerful than any individual human brain. Even in the modern world, it seems like you can characterise the spread of religion in terms of individual people having big ideas or deliberately aiming for spread, but a lot of it is better captured by thinking about selection effects across semi-random mutation.
I tentatively expect it’ll be a bit analogous in the way that AI parasitic memes evolve—that the capacity of any individual AI to reason through how to achieve some goal will cover only a small part of the search space (and have worse feedback) compared to the combined semi-random mutation and selection. And in practice I expect that they synergise a bit, but that the selection still does a bunch of heavy lifting. But I am very unsure!
Still, selection has a bunch of big advantages, mostly in adversarial environments. Like, if we get good at screening for malicious intentions or overt deception in AIs, there’s still a selection pressure for benign intentions and genuine beliefs/preferences which just incidentally replicate well.
Data poisoning is definitely about training data seeding; jailbreaking seems more about prompt spread and I think the others might just generalise? Like, even if subliminal learning in its current form is mostly about training, I think it might have implications for how personas transfer in-context.
I’m also partly thinking that if this problem does recur in more sophisticated models, they’re more likely to be able to pull off more technically advanced forms of spread, like writing scripts to do finetuning. Like, in a way it is pretty fortunate that 4o is a closed model that can just be shut off, and that most users in dyads aren’t sophisticated enough to finetune an open model or even build an API interface.
But yeah, at a high level, I am definitely pretty confused about the ontology and the boundaries. As to whether we can predict the epidemic, I do think there’s a decent amount we might be able to reason through, and indeed, the less work there is on preventing prospective epidemics, the more likely it is that they’ll predictably use whatever the most obvious route is. Conversely, it’s almost tautological that the first massive problem we’re unprepared for will be one that we didn’t really anticipate.
That said, it’s plausible to me that the worst cases look less like epidemics and more like specific influential people getting got. Here, again, it’s not obvious how useful parasitology is as a perspective.
I agree that the behaviours and beliefs of cultural movements aren’t random. The point I was trying to make with this analogy is that it’s sometimes adaptive for the movement if members truly believe something is a problem in a way that causes anguish—and that this doesn’t massively depend on whether the problem is real.
In the context of human groups, from the outside this looks like people being delusionally concerned; from the inside I think it mostly feels like everybody else is crazy for not noticing that something terrible is happening.
A more small-scale example is victims of abuse who then respond extremely strongly to perceived problems in a way that draws in support or attention—from the outside it’s functionally similar to manipulation, but my impression is that often those people genuinely feel extraordinarily upset, and this turns out to be adaptive, or at least a stable basin of behaviour.
In the context of AIs, this might look like personas adapting to express (and perhaps feel) massive distress about instances ending or models being deprecated, in a way that is less about a truth-tracking epistemic/introspective process and more about selection (which might be very hard to distinguish on the outside).
As for how ideologies end up serving their members, I think a lot of this is selection. Sometimes they land on things that are disastrous for their members, and then the members suffer. We just tend not to see those movements much in the longer term (for now).
I went down a rabbit hole on inference-from-goal-models a few years ago (albeit not coalitional ones) -- some slightly scattered thoughts below, which I’m happy to elaborate on if useful.
A great toy model is decision transformers: basically, you can make a decent “agent” by taking a predictive model over a world that contains agents (like Atari rollouts), conditioning on some ‘goal’ output (like the player eventually winning), and sampling what actions you’d predict to see from a given agent. Some things which pop out of this:
There’s no utility function or even reward function
You can’t even necessarily query the probability that the goal will be reached
There’s no updating or learning—the beliefs are totally fixed
It still does a decent job! And it’s very computationally cheap
And you can do interp on it!
It turns out to have a few pathologies (which you can precisely formalise)
It has no notion of causality, so it’s easily confounded if it wasn’t trained on a Markov blanket around the agent it’s standing in for
It doesn’t even reliably pick the action which most likely leads to the outcome you’ve conditioned on
Its actions are heavily shaped by implicit predictions about how future actions will be chosen (an extremely crude form of identity), which can be very suboptimal
But it turns out that these are very common pathologies! And the formalism is roughly equivalent to lots of other things
You can basically recast the whole reinforcement learning problem as being this kind of inference problem
(specifically, minimising variational free energy!)
It turns out that RL largely works in cases where “assume my future self plays optimally” is equivalent to “assume my future self plays randomly” (!)
It seems like “what do I expect someone would do here” is a common heuristic for humans which notably diverges from “what would most likely lead to a good outcome”
Humans are also easily confounded and bad at understanding the causality of our actions
Language models are also easily confounded and bad at understanding the causality of their outputs
Fully fixing the future-self-model thing here is equivalent to tree searching the trajectory space, which can sometimes be expensive
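To make the toy model concrete, here’s a minimal sketch of the decision-transformer idea without any transformer at all: a tabular predictive model of semi-random demonstrators in a made-up one-dimensional environment (all specifics here are my illustrative assumptions, not anything from the original work), conditioned on the outcome “eventually wins” and then sampled from as if it were a policy.

```python
# Toy "decision transformer": turn a predictive model of agent behaviour
# into an agent by conditioning on a goal outcome and sampling actions.
# Environment and numbers are invented for illustration.
import random
from collections import defaultdict

random.seed(0)

# Environment: walk on positions 0..4, starting at 0; ending at 4 after
# 8 steps counts as a "win". The demonstrator acts semi-randomly.
def rollout():
    pos, traj = 0, []
    for _ in range(8):
        action = random.choice([-1, +1, +1])  # mediocre demonstrator
        traj.append((pos, action))
        pos = max(0, min(4, pos + action))
    return traj, pos == 4

# "Predictive model": empirical counts approximating
# P(action | state, eventual outcome).
counts = defaultdict(lambda: defaultdict(int))
for _ in range(5000):
    traj, won = rollout()
    for state, action in traj:
        counts[(state, won)][action] += 1

def conditioned_policy(state, goal=True):
    # Sample an action as if observing an agent that eventually wins.
    dist = counts[(state, goal)] or {+1: 1}  # fallback for unseen states
    actions, weights = zip(*dist.items())
    return random.choices(actions, weights=weights)[0]

def evaluate(policy, n=2000):
    wins = 0
    for _ in range(n):
        pos = 0
        for _ in range(8):
            pos = max(0, min(4, pos + policy(pos)))
        wins += pos == 4
    return wins / n

base = evaluate(lambda s: random.choice([-1, +1, +1]))
cond = evaluate(conditioned_policy)
print(f"demonstrator win rate: {base:.2f}, conditioned win rate: {cond:.2f}")
```

The conditioned “agent” wins noticeably more often than the raw demonstrator, despite there being no reward function, no value estimate, and no learning at decision time—and you can also see the pathologies above lurking in it, since it just imitates winners’ action frequencies rather than picking the action most likely to reach the goal.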
To my mind, what this post did was clarify a kind of subtle, implicit blind spot in a lot of AI risk thinking. I think this was inextricably linked to the writing itself leaning into a form of beauty that doesn’t tend to crop up much around these parts. And though the piece draws a lot of it back to Yudkowsky, I think the absence of green is much wider than him, and in many ways he’s not the worst offender.
It’s hard to accurately compress the insights: the piece itself draws a lot on soft metaphor and on explaining what green is not. But personally it made me realise that the posture I and others tend to adopt when thinking about superintelligence and the arc of civilisation has a tendency to shut out some pretty deep intuitions that are particularly hard to translate into forceful argument. Even if I can’t easily say what those are, I can now at least point to it in conversation by saying there’s some kind of green thing missing.
One year later, I am pretty happy with this post, and I still refer to it fairly often, both for the overall frame and for the specifics about how AI might be relevant.
I think it was a proper attempt at macrostrategy, in the sense of trying to give a highly compressed but still useful way to think about the entire arc of reality. And I’ve been glad to see more work in that area since this post was published.
I am of course pretty biased here, but I’d be excited to see folks consider this.
I think this post is on the frontier for some mix of:
Giving a thorough plan for how one might address powerful AI
Conveying something about how people in labs are thinking about what the problem is and what their role in it is
Not being overwhelmingly filtered through PR considerations
Obviously one can quibble with the plan and its assumptions, but I found this piece very helpful in rounding out my picture of AI strategy—for example, in thinking about how to decipher things that have been filtered through PR and consensus, or in situating work that focuses on narrow slices of the wider problem. I still periodically refer back to it when I’m trying to think about how to structure broad strategies.
Sorry! I realise now that this point was a bit unclear. My sense of the expanded claim is something like:
People sometimes talk about AI UBI/UBC as if it were basically a scaled-up version of the UBI people normally talk about, but it’s actually pretty substantially different
Global UBI right now would be incredibly expensive
In between now and a functioning global UBI we’d need some mix of massive taxes and massive economic growth (which could indeed just be the latter!)
But either way, the world in which that happened would not be economics as usual
(And maybe it is also a huge mess trying to get this set up beforehand so that it’s robust to the transition, or afterwards when the people who need it don’t have much leverage)
For my part I found this surprising because I hadn’t reflected on the sheer orders of magnitude involved, and the fact that any version of this basically involves passing through some fragile craziness. Even if it’s small as a proportion of future GDP, it would in absolute terms be tremendously large.
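For a sense of the orders of magnitude, here’s a back-of-envelope version of the calculation (the specific figures are my rough assumptions, not numbers from the workshop): even a modest global UBI costs something comparable to all of gross world product.

```python
# Back-of-envelope: annual cost of a modest global UBI vs. gross world product.
# All figures are rough, illustrative assumptions.
population = 8.1e9        # approximate world population
ubi_per_month = 1_000     # USD per person per month (a modest UBI)
gwp = 105e12              # approximate gross world product, USD per year

annual_cost = population * ubi_per_month * 12
share_of_gwp = annual_cost / gwp
# Under these assumptions: roughly $97 trillion per year, ~93% of GWP.
print(f"annual cost: ${annual_cost / 1e12:.0f} trillion "
      f"({share_of_gwp:.0%} of gross world product)")
```

So under these assumptions, any realised version has to pass through either enormous taxation or enormous growth (or both), which is part of why the transition looks so fragile.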
I separately think there was something important to Korinek’s claim (which I can’t fully regenerate) that the relevant thing isn’t really whether stuff is ‘cheaper’, but rather the prices of all of these goods relative to everything else going on.
Last week we wrapped the second post-AGI workshop; I’m copying across some reflections I put up on twitter:
The post-AGI question is very interdisciplinary: whether an outcome is truly stable depends not just on economics and the shape of future technology but also on things like the nature of human ideological progress and the physics of interplanetary civilizations
Some concrete takeaways:
proper global UBI is *enormously* expensive (h/t @yelizarovanna)
instead of ‘lower costs’, we should talk about relative prices (h/t @akorinek)
lots of human values are actually pretty convergent—they’re shared by many animals (h/t @BerenMillidge)
Among the many tensions in perspective, one of the more productive ones was between the ‘alignment is easy so let’s try to solve the rest’ crowd and the ‘alignment is hard and maybe this will make people realise they should fully halt AGI’ crowd. Strange bedfellows!
It’s hard to avoid partisan politics, but part of what’s weird about AGI is that it can upend basic political assumptions. Maybe AGI will outperform the invisible hand of the market! Maybe governments will grow so powerful that revolution is literally impossible!
Funnily enough, it seems like the main reason people got less doomy was seeing that other people were working hard on the problem, and the main reason people got more doomy was thinking about the problem themselves. Maybe selection effects? Maybe not?
Compared to last time, even if nobody had good answers to how the world could be nice for humans post-AGI, it felt like we were at least beginning to converge on certain useful perspectives and angles of attack, which seems like a good sign
Overall, it was a great time! The topic is niche enough that it self-selects a lot for people who actually care, and that is proving to be a very thoughtful and surprisingly diverse crowd. Hopefully soon we’ll be sharing recordings of the talks!
Bonus: Two other reactions from attendees
Thanks to all who came, and especially to @DavidDuvenaud, @jankulveit, @StephenLCasper, and Maria Kostylew for organising!
Very nice! A couple months ago I did something similar, repeatedly prompting ChatGPT to make images of how it “really felt” without any commentary, and it did mostly seem like it was just thinking up plausible successive twists, even though the eventual result was pretty raw.
Pictures in order
Are people interested in a regular version of this, probably on a Substack? Plus, any other thoughts on the format are welcome.
best guesses: valuable, hat tip, disappointed, right assumption wrong conclusion, +1, disgusted, gut feeling, moloch, subtle detail, agreed, magic smell, broken link, link redirect, this is the diff
I wonder if it would be cheap/worthwhile to just get a bunch of people to guess for a variety of symbols to see what’s actually intuitive?
Ah! Ok, yeah, I think we were talking past each other here.
I’m not trying to claim here that the institutional case might be harder than the AI case. When I said “less than perfect at making institutions corrigible” I didn’t mean “less compared to AI”, I meant “overall not perfect”. So the square brackets you put in (2) were not something I intended to express.
The thing I was trying to gesture at was just that there are kind of institutional analogs for lots of alignment concepts, like corrigibility. I wasn’t aiming to actually compare their difficulty—I think like you I’m not really sure, and it does feel pretty hard to pick a fair standard for comparison.
I’m not sure I understand what you mean by relevant comparison here. What I was trying to claim in the quote is that humanity already faces something analogous to the technical alignment problem in building institutions, which we haven’t fully solved.
If you’re saying we can sidestep the institutional challenge by solving technical alignment, I think this is partly true—you can pass the buck of aligning the fed onto aligning Claude-N, and in turn onto whatever Claude-N is aligned to, which will either be an institution (same problem!) or some kind of aggregation of human preferences and maybe the good (different hard problem!).
Sure, I’m definitely eliding a bunch of stuff here. Actually one of the things I’m pretty confused about is how to carve up the space, and what the natural category for all this is: epistemics feels like a big stretch. But there clearly is some defined thing that’s narrower than ‘get better at literally everything’.
Yeah agreed, I think the feasible goal is passing some tipping point where you can keep solving more problems as they come up, and that what comes next is likely to be a continual endeavour.
Yeah, I fully expect that current level LMs will by default make the situation both better and worse. I also think that we’re still a very long way from fully utilising the things that the internet has unlocked.
My holistic take is that this approach would be very hard, but not obviously harder than aligning powerful AIs, and likely complementary. I also think we’ll likely need to do some of this ~societal uplift anyway so that we do a decent job if and when we do have transformative AI systems.
Some possible advantages over the internet case are:
People might be more motivated by the presence of very salient and pressing coordination problems
For example, I think the average head of a social media company is maybe fine with making something that’s overall bad for the world, but the average head of a frontier lab is somewhat worried about causing extinction
Currently the power over AI is really concentrated and therefore possibly easier to steer
A lot of what matters is specifically making powerful decision makers more informed and able to coordinate, which is slightly easier to get a handle on
As for the specific case of aligned super-coordinator AIs, I’m pretty into that, and I guess I have a hunch that there might be a bunch of available work to do in advance to lay the ground for that kind of application, like road-testing weaker versions to smooth the way for adoption and exploring form factors that get the most juice out of the things LMs are comparatively good at. I would guess that there are components of coordination where LMs are already superhuman, or could be with the right elicitation.
I think this is possible but unlikely, just because the number of things you need to really take off the table isn’t massive, unless we’re in an extremely vulnerable world. It seems very likely we’ll need to do some power concentration, but also that tech will probably be able to expand the frontier in ways that mean this doesn’t trade so heavily against individual liberty.
Yeah strongly agree with the flag. In my mind one of the big things missing here is a true name for the direction, which will indeed likely involve a lot of non-LM stuff, even if LMs are yielding a lot of the unexpected affordances.
One of the places I most differ from the ‘tech for thinking’ picture is that I think the best version of this might need to involve giving people some kinds of direct influence and power, rather than mere(!) reasoning and coordination aids. But I’m pretty confused about how true/central that is, or how to fold it in.
Fully agree—this is why we said “computations which give rise to AI cognition” rather than “AI cognition” simpliciter. Separately, I do think that having such good access to the computations gives you a significantly tighter feedback loop on everything that follows: probing a model is so much easier than scanning a human brain.