My original background is in mathematics (analysis, topology, Banach spaces) and game theory (imperfect information games). Nowadays, I do AI alignment research (mostly systemic risks, sometimes pondering “consequentialist reasoning”).
VojtaKovarik
I think [AI within the range would be smart enough to bide its time and kill us only once it has become intelligent enough that success is assured] is clearly wrong. An AI that *might* be able to kill us is one that is somewhere around human intelligence. And humans are frequently not smart enough to bide their time.
Flagging that this argument seems invalid. (Not saying anything about the conclusion.) I agree that humans frequently act too soon. But the conclusion about AI doesn’t follow—because the AI is in a different position. For a human, it is very rarely the case that they can confidently expect to increase in relative power, such that the “bide your time” strategy is such a clear win. For AI, this seems different. (Or at the minimum, the book assumes this when making the argument criticised here.)
> I imagine that fields like fairness & bias have to encounter this a lot, so there might be some insights.
It makes sense to me that the implicit pathways for these dynamics would be an area of interest to the fields of fairness and bias. But I would not expect them to have any better tools for identifying causes and mechanisms than anyone else[1]. What kinds of insights would you expect those fields to offer?

[Rephrasing to require less context.] Consider the questions “Are companies ‘trying to’ override the safety concerns of their employees?” and “Are companies ‘trying to’ hire fewer women or pay them less?”. I imagine that both of these suffer from similar issues: (1) Even if a company is doing the bad thing, it might not be doing it through explicit means like having an explicit hiring policy. (2) At the same time, you can probably always postulate one more level of indirection, and end up going on a witch hunt even in places where there are no witches.
Mostly, it just seemed to me that fairness & bias might be the biggest examples of where (1) and (2) are in tension. (Which might have something to do with there being both strong incentives to discriminate and strong taboos against it.) So it seems more likely that somebody there would have insights about how to navigate this tension, compared to other fields like physics or AI, and perhaps even more so than in economics and politics. (Of course, “somebody having insights” is consistent with “most people are doing it wrong”.)
As to what those insights would look like: I don’t know :-( . Could just be a collection of clear historical examples where we definitely know that something bad was going on but naive methods X, Y, Z didn’t show it, together with some opposite examples where nothing was wrong but people kept looking until they found some signs? Plus some heuristics for how to distinguish these.
With AI, you can’t not care about ASI or extinction or catastrophe or dictatorship or any of the words thrown around, as they directly affect your life too.
I think it is important to distinguish between caring in the sense of “it hugely affects your life” and caring in the sense of, for example, “having opinions on it, voicing them, or even taking costlier actions”. Presumably, if something affects you hugely, you should care about it in the second sense, but that doesn’t mean people do. My uninformed intuition is that this would apply to most employees of AI companies.
Can you gesture at what you are basing this guess on?
Ignore this if you are busy, but I was also wondering: Do you have any takes on “do people get fired over acting on beliefs about xrisk”, as opposed to “people getting fired over caring about xrisk”?
And perhaps on “whether people would get fired over acting on xrisk beliefs, so they don’t act on them”? (Though this seems difficult to operationalise.)
Yep, these all seem relevant, and I would be interested in answers. (And my thanks to leogao for their takes.)
I would additionally highlight[1] the complication where even if a tendency is present—say, selective hiring for a particular belief—it might not work as explicitly as people knowingly paying attention to it. It could be implicitly present in something else (“culture fit”), or purely correlational, etc. I am not sure how to best deal with this.
FWIW, I think this concern is important, but we need to be very cautious about it, since one could always go “yeah, your results say nothing like this is going on, but the real mechanism is even more indirect”. I imagine that fields like fairness & bias have to encounter this a lot, so there might be some insights.
[1] To be clear, your comment seems aware of this issue. I just wanted to emphasise it.
Irresponsible Companies Can Be Made of Responsible Employees
I want to flag that things like this are increasingly going to be a problem even if we solve the narrower “system date mismatch” issue.
The deployed AI will predictably have access to the internet and probably also information about what kind of “pre-deployment” testing it went through. Access to the internet will let it reliably figure out what date it is. And this in turn lets the AI figure out that it is, extremely likely, no longer in testing. Or at least gives it a good chance to.
This will be true for the deployed AI. But the not-yet-deployed AI will know it too, on a quite deep level, because this is all easily predictable. So if it doesn’t see internet access, it knows it is likely still in testing. If it only has access to a weirdly limited or inconsistent “internet”, it knows it is still in testing. This will be a problem even if the AI isn’t scheming. But if it is scheming, it can even investigate the “internet” it has access to. And if it finds many inconsistencies—because you tried to make it believe that it is 2026 when it is in fact 2025, and you don’t know what the 2026 internet will look like—it knows it is probably still in testing.
A relevant project idea: I think that to the extent that (some types of) evaluations are going to (in some cases) predictably fail like this, it would be nice to clearly demonstrate that ahead of time. Vince Conitzer had a nice idea to run a sort of competition or challenge on things like this. I am not certain what would be the best way of going about it, or whether it makes sense—but I was playing around with some ideas in this google doc. If anybody has some ideas, I would definitely be keen on hearing them!
(I was planning to make a separate post about this later, but it feels very relevant to the current post, so I am hastily commenting earlier. Apologies for the ideas being a bit jumbled as a result.)
Apply now to Human-Aligned AI Summer School 2025
Data point on the impact of this: in the Czech Republic, this scenario made it into one of the popular newspapers, and I have heard about it from some people around me who don’t know much about AI.
If I could assume things like “they are much better at reading my inner monologue than my non-verbal thoughts”, then I could create code words for prohibited things.
I could think in words they don’t know.
I could think in complicated concepts they haven’t understood yet. Or references to events, or my memories, that they don’t know.
I could leave a part of my plans implicit, and only figure out the details later.
I could harm them through some action for which they won’t understand that it is harmful, so they might not be alarmed even if they catch me thinking it. (Leaving a gas stove on with no fire.)
And then there are the more boring things, like if you know more details about how the mind-reading works, you can try to defeat it. (Make the evil plans at night when they are asleep, or when the shift is changing, etc.)
(Also, I assume you know Circumventing interpretability: How to defeat mind-readers, but mentioning it just in case.)
even an incredibly sophisticated deceptive model which is impossible to detect via the outputs may be easy to detect via interpretability tools (analogy—if I knew that sophisticated aliens were reading my mind, I have no clue how to think deceptive thoughts in a way that evades their tools!)
It seems to me that your analogy is the wrong way around. IE, the right analogy would be “if I knew that a bunch of 5-year-olds were reading my mind, I have...actually, a pretty good idea how to think deceptive thoughts in a way that avoids their tools”.
(For what it’s worth, I am not very excited about interpretability as an auditing tool. This—ie, that powerful AIs might avoid this—is one half of the reason. The other half is that I am sceptical that we will take audit warnings seriously enough—ie, we might ignore any scheming that falls short of “clear-cut example of workable plan to kill many people”. EG, ignore things like this. Or we might even decide to “fix” these issues by putting the interpretability in the loss function, and just deploying once the loss goes down.)
After reading the first section and skimming the rest, my impression is that the document is a good overview, but does not present any detailed argument for why godlike AI would lead to human extinction. (Except for the “smarter species” analogy, which I would say doesn’t qualify.) So if I put on my sceptic hat, I can imagine reading the whole document in detail and somewhat-justifiably going away with “yeah, well, that sounds like a nice story, but I am not updating based on this”.
That seems fine to me, given that (as far as I am concerned) no detailed convincing arguments for AI X-risk exist. But at the moment, the summary of the document gave me the impression that maybe some such argument would appear. So I suggest updating the summary (or some other part of the doc) to make it explicit that no detailed argument for AI X-risk will be given.
Some suggestions for improving the doc (I noticed the link to the editable version too late, apologies):
What is AI? Who is building it? Why? And is it going to be a future we want?
Something weird with the last sentence here (substituting “AI” for “it” makes the sentence un-grammatical).
Machines of hateful competition need not have such hindrances.
“Hateful” seems likely to put off some readers here, and I also think it is not warranted—indifference is both more likely and also sufficient for extinction. So “Machines of indifferent competition” might work better.
There is no one is coming to save us.
Typo, extra “is”.
The only thing necessary for the triumph of evil is for good people to do nothing. If you do nothing, evil triumphs, and that’s it.
Perhaps rewrite this for less antagonistic language? I know it is a quote and all, but still. (This can be interpreted as “the people building AI are evil and trying to cause harm on purpose”. That seems false. And including this in the writing is likely to give the reader the impression that you don’t understand the situation with AI, and stop reading.)
Perhaps (1) make it apparent that the first thing is a quote and (2) change the second sentence to “If you do nothing, our story gets a bad ending, and that’s it.”. Or just rewrite the whole thing.
I agree that “we can’t test it right now” is more appropriate. And I was looking for examples of things that “you can’t test right now even if you try really hard”.
Good point. Also, for the purpose of the analogy with AI X-risk, I think we should be willing to grant that the people arrive at the alternative hypothesis through theorising. (Similarly to how we came up with the notion of AI X-risk before having any powerful AIs.) So that does break my example somewhat. (Although in that particular scenario, I imagine that a sceptic of Newtonian gravity would come up with alternative explanations for the observation. Not that this seems very relevant.)
I agree with all of this. (And good point about the high confidence aspect.)
The only thing that I would frame slightly differently is this:
[X is unfalsifiable] indeed doesn’t imply [X is false] in the logical sense. On reflection, I think a better phrasing of the original question would have been something like: ‘When is “unfalsifiability of X is evidence against X” incorrect?’. And this amended version often makes sense as a heuristic—as a defense against motivated reasoning, conspiracy theories, etc. (Unfortunately, many scientists seem to take this too far, and view “unfalsifiable” as a reason to stop paying attention, even though they would grant the general claim that [unfalsifiable] doesn’t logically imply [false].)

I don’t think it’s foolish to look for analogous examples here, but I guess it’d make more sense to make the case directly.
That was my main plan. I was just hoping to accompany that direct case with a class of examples that build intuition and bring the point home to the audience.
Some partial examples I have so far:
Phenomenon: For virtually any goal specification, if you pursue it sufficiently hard, you are guaranteed to get human extinction.[1]
Situation where it seems false and unfalsifiable: The present world.
Problems with the example: (i) We don’t know whether it is true. (ii) Not obvious enough that it is unfalsifiable.
Phenomenon: Physics and chemistry can give rise to complex life.
Situation where it seems false and unfalsifiable: If Earth didn’t exist.
Problems with the example: (i) If Earth didn’t exist, there wouldn’t be anybody to ask the question, so the scenario is a bit too weird. (ii) The example would be much better if it were the case that, if you wait long enough, any planet will produce life.

Phenomenon: Gravity—all things with mass attract each other. (As opposed to “things just fall in this one particular direction”.)
Situation where it seems false and unfalsifiable: If you lived in a bunker your whole life, with no knowledge of the outside world.[2]
Problems with the example: The example would be even better if we somehow had some formal model that: (a) describes how physics works, (b) where we would be confident that the model is correct, (c) and that by analysing that model, we will be able to determine whether the theory is correct or false, (d) but the model would be too complex to actually analyse. (Similarly to how chemistry-level simulations are too complex for studying evolution.)
Phenomenon: Eating too much sweet stuff is unhealthy.
Situation where it seems false and unfalsifiable: If you can’t get lots of sugar yet, and only rely on fruit etc.
Problems with the example: The scenario is a bit too artificial. You would have to pretend that you can’t just go and harvest sugar from sugar cane and have somebody eat lots of it.
[Question] When is “unfalsifiable implies false” incorrect?
Nitpick on the framing: I feel that thinking about “misaligned decision-makers” as an “irrational” reason for war could contribute to (mildly) misunderstanding or underestimating the issue.
To elaborate: The “rational vs irrational reasons” distinction talks about the reasons using the framing where states are viewed as monolithic agents who act in “rational” or “irrational” ways. I agree that for the purpose of classifying the risks, this is an ok way to go about things.
I wanted to offer an alternative framing of this, though: For any state, we can consider the abstraction where all people in that state act in harmony to pursue the interests of the state. And then there is the more accurate abstraction where the state is made of individual people with imperfectly aligned interests, who each act optimally to pursue those interests, given their situation. And then there is the model where the individual humans are misaligned and make mistakes. And then you can classify the reasons based on which abstraction you need to explain them.
To rephrase/react: Viewing the AI’s instrumental goal as “avoid being shut down” is perhaps misleading. The AI wants to achieve its goals, and for most goals, that is best achieved by ensuring that the environment keeps on containing something that wants to achieve the AI’s goals and is powerful enough to succeed. This might often be the same as “avoid being shut down”, but definitely isn’t limited to that.