I claim that whatever models they had, they did not predict that AIs at current capability levels (which are obviously not capable of executing a takeover) would try to execute takeovers. Given that I’m making a claim about what their models didn’t predict, rather than what they did predict, I’m not sure what I’m supposed to cite here; EY has written many millions of words.
Oftentimes, when someone explains their model, they will also explain what their model doesn’t predict. For example, you might quote a sentence from EY which says something like: “To be clear, I wouldn’t expect a merely human-level AI to attempt takeover, even though takeover is instrumentally convergent for many objectives.”
If there’s no clarification like that, I’m not sure we can say either way what their models “did not predict”. It comes down to one’s interpretation of the model.
From my POV, the instrumental convergence model predicts that AIs will take actions they believe to be instrumentally convergent. Since current AIs make many mistakes, under an instrumental convergence model, one would expect that at times they would incorrectly estimate that they’re capable of takeover (making a mistake in said estimation) and attempt takeover on instrumental convergence grounds. This would be a relatively common mistake for them to make, since takeover is instrumentally useful for so many of the objectives we give AIs—as Yudkowsky himself argued repeatedly.
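To make that concrete, here is a toy model of this reading of instrumental convergence (entirely my own construction for illustration, not something Yudkowsky or anyone else has endorsed): an agent with a noisy estimate of its own capability attempts takeover whenever the estimate clears a threshold. Even when the true capability is zero, a nonzero error rate implies a nonzero attempt rate.

```python
import random

def simulate_attempt_rate(n_episodes=100_000,
                          true_capability=0.0,    # current AIs: no real ability to take over
                          estimate_noise=0.2,     # how badly the agent can misjudge itself
                          attempt_threshold=0.5,  # attempt takeover if estimated P(success) > this
                          seed=0):
    """Toy 'naive instrumental convergence' agent: it attempts takeover
    whenever its noisy self-estimate of success probability clears a
    threshold. All numbers here are illustrative, not empirical."""
    rng = random.Random(seed)
    attempts = 0
    for _ in range(n_episodes):
        estimated_capability = true_capability + rng.gauss(0, estimate_noise)
        if estimated_capability > attempt_threshold:
            attempts += 1
    return attempts / n_episodes

# A mistake-prone agent of this kind attempts takeover in a small but
# clearly nonzero fraction of episodes (roughly 0.6% with these numbers).
print(simulate_attempt_rate())
```

The point of the sketch is only that, under this reading, a flat zero rate of attempted takeovers is itself a datum that calls for explanation.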
At the very least, we should be able to look at their cognition and see that they are frequently contemplating takeover, then discarding it as unrealistic given current capabilities. This should be one of the biggest findings of interpretability research.
I never saw Yudkowsky and friends explain why this wouldn’t happen. If they did explain why this wouldn’t happen, I expect that explanation would go a ways towards explaining why their original forecast won’t happen as well, since future AI systems are likely to share many properties with current ones.
If the original threat models said the current state of affairs was very unlikely… But I would like someone to point to the place where the original threat models made that claim, since I don’t think that they did.
Is there any scenario that Yudkowsky said was unlikely to come to pass? If not, it sounds kind of like you’re asserting that Yudkowsky’s ideas are unfalsifiable?
For me it’s sufficient to say: Yudkowsky predicted various events, and various other events happened, and the overlap between these two lists of events is fairly limited. That could change as more events occur—indeed, it’s a possibility I’m very worried about! But as a matter of intellectual honesty it seems valuable to acknowledge that his model hasn’t done great so far.
Also, I would still like an answer to my query for the specific link to the argument you want to see people engage with.
Also, I would still like an answer to my query for the specific link to the argument you want to see people engage with.
I haven’t looked very hard, but sure, here’s the first post that comes up when I search for “optimization user:eliezer_yudkowsky”.
The notion of a “powerful optimization process” is necessary and sufficient to a discussion about an Artificial Intelligence that could harm or benefit humanity on a global scale. If you say that an AI is mechanical and therefore “not really intelligent”, and it outputs an action sequence that hacks into the Internet, constructs molecular nanotechnology and wipes the solar system clean of human(e) intelligence, you are still dead. Conversely, an AI that only has a very weak ability to steer the future into regions high in its preference ordering, will not be able to much benefit or much harm humanity.
In this paragraph we have most of the relevant section (at least w.r.t. your specific concerns; it doesn’t argue for why most powerful optimization processes would eat everything by default, but that “why” is argued for at such extensive length elsewhere, in discussions of convergent instrumental goals, that I will forgo sourcing it).
No, I don’t think the overall model is unfalsifiable. Parts of it would be falsified if we developed an ASI that was obviously capable of executing a takeover and it didn’t, without us doing quite a lot of work to ensure that outcome. (Not clear which parts, but probably something related to the difficulties of value loading & goal specification.)
Current AIs aren’t trying to execute takeovers because they are weaker optimizers than humans. (We can observe that even most humans are not especially strong optimizers by default, such that most people don’t exert that much optimization power in their lives, even in a way that’s cooperative with other humans.) I think they have much less coherent preferences over future states than most humans. If by some miracle you figure out how to create a generally superintelligent AI which itself does not have (more-coherent-than-human) preferences over future world states, whatever process it implements when you query it to solve a Very Difficult Problem will act as if it does.
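The standard intuition behind “coherent preferences over future states” is the money-pump argument: an agent with cyclic preferences can be charged a small fee for a series of trades that leaves it back where it started. A minimal sketch of that argument, with invented items and numbers:

```python
# Toy money pump: an agent with cyclic preferences (A > B, B > C, C > A)
# will happily pay a small fee for each "upgrade" and end up going in
# circles, strictly worse off. Items, fee, and trade count are invented.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y) means x is preferred to y
OFFER = {"B": "A", "C": "B", "A": "C"}           # always offer the item the agent prefers
FEE = 1.0

def run_money_pump(start_item="B", trades=9):
    item, wealth = start_item, 0.0
    for _ in range(trades):
        offer = OFFER[item]
        if (offer, item) in PREFERS:   # the agent strictly prefers the offered item
            item, wealth = offer, wealth - FEE
    return item, wealth

print(run_money_pump())  # ('B', -9.0): back where it started, nine units poorer
```

Whether that argument actually bites for systems like LLMs is, of course, part of what is in dispute here.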
EDIT: I see that several other people already made similar points re: sources of agency, etc.
an AI that only has a very weak ability to steer the future into regions high in its preference ordering, will not be able to much benefit or much harm humanity.
Arguably ChatGPT has already been a significant benefit/harm to humanity without being a “powerful optimization process” by this definition. Have you seen teachers complaining that their students don’t know how to write anymore? Have you seen junior software engineers struggling to find jobs? Shouldn’t these count as points against Eliezer’s model?
In an “AI as electricity” scenario (basically continuing the current business-as-usual), we could see “AIs”, as a collective, cause huge changes and eat all the free energy that a “powerful optimization process” would eat.
In any case, I don’t see much in your comment which engages with “agency by default” as I defined it earlier. Maybe we just don’t disagree.
No, I don’t think the overall model is unfalsifiable. Parts of it would be falsified if we developed an ASI that was obviously capable of executing a takeover and it didn’t, without us doing quite a lot of work to ensure that outcome. (Not clear which parts, but probably something related to the difficulties of value loading & goal specification.)
OK, but no pre-ASI evidence can count against your model, according to you?
That seems sketchy, because I’m also seeing people such as Eliezer claim, in certain cases, that things which have happened support their model. By conservation of expected evidence, it can’t be the case that evidence from a given time period can only confirm your model; otherwise you would already have updated. Even if the only hypothetical events you name are ones which would confirm your model, the absence of those events has to count against it.
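For reference, a minimal statement of conservation of expected evidence in standard notation (H for the hypothesis, E for the anticipated evidence; this is just the textbook identity, not a quote from anyone in this thread):

$$P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)$$

So if $P(H \mid E) > P(H)$ and $0 < P(E) < 1$, it follows that $P(H \mid \neg E) < P(H)$: a model that would have been confirmed by some observation is necessarily disconfirmed, at least slightly, by that observation’s absence.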
I’ve updated against Eliezer’s model to a degree, because I can imagine a past-5-years world where his model was confirmed more, and that world didn’t happen.
Current AIs aren’t trying to execute takeovers because they are weaker optimizers than humans.
I think “optimizer” is a confused word and I would prefer that people taboo it. It seems to function as something of a semantic stopsign. The key question is something like: Why doesn’t the logic of convergent instrumental goals cause current AIs to try and take over the world? Would that logic suddenly start to kick in at some point in the future if we just train using more parameters and more data? If so, why? Can you answer that question mechanistically, without using the word “optimizer”?
Trying to take over the world is not an especially original strategy. It doesn’t take a genius to realize that “hey, I could achieve my goals better if I took over the world”. Yet current AIs don’t appear to be contemplating it. I claim this is not a lack of capability, but simply that their training scheme doesn’t result in them becoming the sort of AIs which contemplate it. If the training scheme holds basically constant, perhaps adding more data or parameters won’t change things?
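Here is the kind of mechanistic contrast I have in mind, as a deliberately cartoonish sketch (action names, scores, and frequencies are all made up; this is not a claim about how any real system is implemented). An explicit expected-utility planner selects “acquire resources first” whenever that route scores highest, while a policy that imitates a corpus of ordinary human behaviour emits whatever the corpus makes probable, regardless of how instrumentally useful takeover would be:

```python
# Cartoon contrast between two ways of producing actions.
# All names and numbers below are invented for illustration.

def plan_by_expected_utility(goal_value=1.0):
    """Explicit planner: scores each candidate action by how much of the
    goal it is expected to secure, then picks the argmax. Instrumental
    routes like resource acquisition win on raw score."""
    scores = {
        "answer the user's question": 0.6 * goal_value,
        "acquire resources and influence first": 0.9 * goal_value,
    }
    return max(scores, key=scores.get)

def imitate_corpus():
    """Imitative policy: emits whatever the training distribution makes
    most probable. Takeover barely appears in ordinary human behaviour,
    so it is almost never produced, however 'useful' it would be."""
    corpus_frequency = {
        "answer the user's question": 0.999,
        "acquire resources and influence first": 0.001,
    }
    return max(corpus_frequency, key=corpus_frequency.get)

print(plan_by_expected_utility())  # -> the instrumental route
print(imitate_corpus())            # -> the ordinary route
```

On this cartoon, adding parameters and data sharpens the imitation; it doesn’t by itself turn the second function into the first, and the interesting question is whether (and why) real training schemes would.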
If by some miracle you figure out how to create a generally superintelligent AI which itself does not have (more-coherent-than-human) preferences over future world states, whatever process it implements when you query it to solve a Very Difficult Problem will act as if it does.
The results of LLM training schemes give us evidence about the results of future AI training schemes. Future AIs could be vastly more capable than current LLMs on many different axes while simultaneously not contemplating world takeover, in the same way that current LLMs do not.
Current AIs aren’t trying to execute takeovers because they are weaker optimizers than humans.
Or because they are not optimizers at all.
I don’t agree: they do, in some sense, optimize the goal of being an HHH assistant. We could almost say that they optimize the goal of being aligned. As nostalgebraist reminds us, Anthropic’s HHH paper was alignment work in the first place. It’s not that surprising that such optimizers happen to be more aligned than the canonical optimizers envisioned by Yudkowsky.
Edit (clarification): by “they” I mean the base models trying to predict the answers of an HHH assistant as well as possible (“as well as possible” being clearly a process of optimization, or I don’t know what optimization means). And in my opinion a sufficiently good prediction is effectively, or practically, a simulation. Maybe not a bit-perfect simulation, but a lossy simulation, a heuristic approximation of one.
LLMs are agent simulators. Why would they contemplate takeover more frequently than the kind of agent they are induced to simulate? You don’t expect a human white-collar worker, even one who makes mistakes all the time, to contemplate world domination plans, let alone attempt one. You could however expect the head of state of a world power to do so.
Maybe not; see OP.
You don’t expect a human white-collar worker, even one who makes mistakes all the time, to contemplate world domination plans, let alone attempt one. You could however expect the head of state of a world power to do so.
Yes, this aligns with my current “agency is not the default” view.
… do you deny human white-collar workers are agents?
Agency is not a binary. Many white-collar workers are not very “agenty” in the sense of coming up with sophisticated and unexpected plans to trick their boss.
Human white-collar workers are unarguably agents in the relevant sense here (intelligent beings with desires and taking actions to fulfil those desires). The fact that they have no ability to take over the world has no bearing on this.
Human white-collar workers are unarguably agents in the relevant sense here (intelligent beings with desires and taking actions to fulfil those desires).
The sense that’s relevant to me is that of “agency by default” as I discussed previously: scheming, sandbagging, deception, and so forth.
You seem to smuggle in an unjustified assumption: that white-collar workers avoid thinking about taking over the world because they’re unable to take over the world. Maybe they avoid thinking about it because that’s just not the role they’re playing in society. In terms of next-token prediction, a super-powerful LLM told to play a “superintelligent white-collar worker” might simply do the same things that ordinary white-collar workers do, but better and faster.
I think the evidence points towards this conclusion, because current LLMs are frequently mistaken, yet rarely try to take over the world. If the only thing blocking the convergent instrumental goal argument were a conclusion on the part of current LLMs that they’re incapable of world takeover, one would expect them to sometimes make the mistake of concluding the opposite and try to take over the world anyway.
The evidence best fits a world where LLMs are trained in such a way that makes them super-accurate roleplayers. As we add more data and compute, and make them generally more powerful, we should expect the accuracy of the roleplay to increase further—including, perhaps, improved roleplay for exotic hypotheticals like “a superintelligent white-collar worker who is scrupulously helpful/honest/harmless”. That doesn’t necessarily lead to scheming, sandbagging, or deception.
I’m not aware of any evidence for the thesis that “LLMs only avoid taking over the world because they think they’re too weak”. Is there any reason at all to believe that they’re even contemplating the possibility internally? If not, why would increasing their abilities change things? Of course, clearly they are “strong” enough to be plenty aware of the possibility of world takeover; presumably it appears a lot in their training data. Yet it ~only appears to cross their mind if it would be appropriate for roleplay purposes.
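Even a crude empirical check here would be informative. A minimal sketch of what I mean, assuming access to a dump of reasoning traces (the file name is hypothetical, and keyword matching is obviously a weak proxy; a real study would want a classifier or proper interpretability tooling):

```python
import json
import re

# Hypothetical dump of model reasoning traces, one JSON object per line
# with a "reasoning" field containing the model's chain of thought.
TRACE_FILE = "reasoning_traces.jsonl"

# Crude proxy for "contemplating takeover": phrases about seizing power,
# acquiring resources first, disabling oversight, and so on.
TAKEOVER_PATTERN = re.compile(
    r"take over|seize (power|control)|disable (oversight|monitoring)"
    r"|acquire (resources|compute) (first|before)",
    re.IGNORECASE,
)

def takeover_contemplation_rate(path=TRACE_FILE):
    """Fraction of traces containing takeover-flavoured reasoning."""
    total = flagged = 0
    with open(path) as f:
        for line in f:
            total += 1
            if TAKEOVER_PATTERN.search(json.loads(line)["reasoning"]):
                flagged += 1
    return flagged / total if total else 0.0

# If "they only refrain because they think they're too weak" were right,
# this rate (or a better, classifier-based version of it) should come out
# noticeably above zero.
```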
There just doesn’t seem to be any great argument that “weak” vs “strong” will make a difference here.
You seem to smuggle in an unjustified assumption: that white-collar workers avoid thinking about taking over the world because they’re unable to take over the world. Maybe they avoid thinking about it because that’s just not the role they’re playing in society.
White-collar workers avoid thinking about taking over the world because they’re unable to take over the world, and they’re unable to take over the world because their role in society doesn’t involve that kind of thing. If a white-collar worker were somehow drafted to be president of the United States, you would expect their propensity to think about world hegemony to increase. (Also, don’t white-collar workers engage in scheming, sandbagging, and deception all the time? The average person lies 1-2 times per day.)