Slowing progress down is a smaller, second order effect. But many people seem to take it for granted that completely ceding frontier AI work to people who don’t care about AI risk would be preferable because it would slow down timelines!
It would be good to discuss specifics. When it comes to Dario & co’s scaling of GPT, it is plausible that a ChatGPT-like product would not have been developed without that work (see this section).
They made a point at the time of expressing concern about AI risk. But what was the difference they made here?
caring significantly about accelerating timelines seems to hinge on a very particular view of alignment where pragmatic approaches by frontier labs are very unlikely to succeed, whereas some alternative theoretical work that is unrelated to modern AI has a high chance of success.
It does not hinge on just that view, though. There are people with very different worldviews (e.g. Yudkowsky, me, Gebru) who strongly disagree on fundamental points – yet who still concluded that trying to catch up on ‘safety’ while current AI companies compete to release increasingly unscoped and complex models used to increasingly automate tasks is not tractable in practice.
I’m noticing that you are starting from the assumption that it is a tractably solvable problem – particularly by “people who work closely with cutting edge AI and who are using the modern deep learning paradigm”.
A question worth looking into: how can we know whether the long-term problem is actually solvable? Is there a sound basis for believing that there is any algorithm we could build in that would actually keep controlling a continuously learning and self-manufacturing ‘AGI’ so that it does not cause the extinction of humans (over at least hundreds of years, above some soundly guaranteeable and acceptably high probability floor)?
They made a point at the time of expressing concern about AI risk. But what was the difference they made here?
I think you’re right that releasing GPT-3 clearly accelerated timelines with no direct safety benefit, although I think there are indirect safety benefits of AI-risk-aware companies leading the frontier.
You could credibly accuse me of shifting the goalposts here, but in GPT-3 and GPT-4’s case I think the sooner they came out the better. Part of the reason the counterfactual world where OpenAI/Anthropic/DeepMind had never been founded and LLMs had never been scaled up seems so bad to me is that not only do none of the leading AI companies care about AI risk, but also once LLMs do get scaled up, everything will happen much faster because Moore’s law will be further along.
It does not hinge on just that view, though. There are people with very different worldviews (e.g. Yudkowsky, me, Gebru) who strongly disagree on fundamental points – yet who still concluded that trying to catch up on ‘safety’ while current AI companies compete to release increasingly unscoped and complex models used to increasingly automate tasks is not tractable in practice.
Gebru thinks there is no existential risk from AI so I don’t really think she counts here. I think your response somewhat confirms my point—maybe people vary on how optimistic they are about alternative theoretical approaches, but the common thread is strong pessimism about the pragmatic alignment work frontier labs are best positioned to do.
I’m noticing that you are starting from the assumption that it is a tractably solvable problem – particularly by “people who work closely with cutting edge AI and who are using the modern deep learning paradigm”.
A question worth looking into: how can we know whether the long-term problem is actually solvable? Is there a sound basis for believing that there is any algorithm we could build in that would actually keep controlling a continuously learning and self-manufacturing ‘AGI’ so that it does not cause the extinction of humans (over at least hundreds of years, above some soundly guaranteeable and acceptably high probability floor)?
I agree you won’t get such a guarantee, just like we don’t have a guarantee that an LLM will learn grammar or syntax. What we can get is something that in practice works reliably. The reason I think it’s possible is that a corrigible and non-murderous AGI is a coherent target that we can aim at and that AIs already understand. That doesn’t mean we’re guaranteed success mind you but it seems pretty clearly possible to me.
Just a note here that I’m appreciating our conversation :) We clearly have very different views right now on what is strategically needed but digging your considered and considerate responses.
but also once LLMs do get scaled up, everything will happen much faster because Moore’s law will be further along.
How do you account for the problem here that Nvidia’s and downstream suppliers’ investment in GPU hardware innovation and production capacity also went up as a result of the post-ChatGPT race (to the bottom) between tech companies on developing and releasing their LLM versions?
I frankly don’t know how to model this somewhat soundly. It’s damn complex.
Gebru thinks there is no existential risk from AI so I don’t really think she counts here.
I was imagining something like this response yesterday (‘Gebru does not care about extinction risks’).
My sense is that the reckless abandonment of established safe engineering practices is part of what got us into this problem in the first place. I.e. if the safety community had insisted that models should be scoped and tested like other commercial software with critical systemic risks, we would be in a better place now.
It’s a more robust place to come from than the stance that developments will happen anyway – but that we somehow have to catch up by inventing safety solutions generally applicable to models auto-encoded on our general online data to have general (unknown) functionality, and which are used by people generally to automate work in society.
If we managed to actually coordinate around not engineering stuff that Timnit Gebru and colleagues would count as ‘unsafe to society’ according to, say, the risks laid out in the Stochastic Parrots paper, we would also robustly reduce the risk of going all the way to a mass extinction. I’m not saying that is easy at all, just that it is possible for people to coordinate on not continuing to develop risky, resource-intensive tech.
but the common thread is strong pessimism about the pragmatic alignment work frontier labs are best positioned to do.
This I agree with. So that’s our crux.
This is not a very particular view – in terms of the possible lines of reasoning and/or people with epistemically diverse worldviews that end up arriving at this conclusion. I’d be happy to discuss the reasoning I’m working from, in the time that you have.
I agree you won’t get such a guarantee
Good to know.
I was not clear enough with my one-sentence description. I actually mean two things:
1. No sound guarantee of preventing ‘AGI’ from causing extinction (over the long term, above some acceptably high probability floor), due to fundamental control bottlenecks in tracking and correcting out the accumulation of harmful effects as the system modifies in feedback with the environment over time.
2. The long-term convergence of this necessarily self-modifying ‘AGI’ on causing changes to the planetary environment that humans cannot survive.
The reason I think it’s possible is that a corrigible and non-murderous AGI is a coherent target that we can aim at and that AIs already understand. That doesn’t mean we’re guaranteed success mind you but it seems pretty clearly possible to me.
I agree that this is a specific target to aim at.
I also agree that you could program an LLM system to be corrigible (for it to correct output patterns in response to human instruction). The main issue is that we cannot build an algorithm into fully autonomous AI that can maintain coherent operation towards that target.
Just a note here that I’m appreciating our conversation :) We clearly have very different views right now on what is strategically needed but digging your considered and considerate responses.
Thank you! Same here :)
How do you account for the problem here that Nvidia’s and downstream suppliers’ investment in GPU hardware innovation and production capacity also went up as a result of the post-ChatGPT race (to the bottom) between tech companies on developing and releasing their LLM versions?
I frankly don’t know how to model this somewhat soundly. It’s damn complex.
I think it’s definitely true that AI-specific compute is further along than it would be if the LLM boom hadn’t happened. I think the relationship is unaffected though—earlier LLM development means faster timelines but slower takeoff.
Personally I think slower takeoff is more important than slower timelines, because that means we get more time to work with and understand these proto-AGI systems. On the other hand, to people who see alignment as more of a theoretical problem that is unrelated to any specific AI system, slower timelines are good because they give theory people more time to work and takeoff speeds are relatively unimportant.
But I do think the latter view is very misguided. I can imagine a setup for training an LLM in a way that makes it both generally intelligent and aligned; I can’t imagine a recipe for alignment that works outside of any particular AI paradigm, or that invents its own paradigm while simultaneously aligning it. I think the reason a lot of theory-pilled people, such as people at MIRI, become doomers is that they try to make that general recipe and predictably fail.
This is not a very particular view – in terms of the possible lines of reasoning and/or people with epistemically diverse worldviews that end up arriving at this conclusion. I’d be happy to discuss the reasoning I’m working from, in the time that you have.
I think I’d like to have a discussion about whether practical alignment can work at some point, but I think it’s a bit outside the scope of the current convo. (I’m referring to the two groups here as ‘practical’ and ‘theoretical’ as a rough way to divide things up).
Above and beyond the argument over whether practical or theoretical alignment can work I think there should be some norm where both sides give the other some credit. Because in practice I doubt we’ll convince each other, but we should still be able to co-operate to some degree.
E.g. for myself I think theoretical approaches that are unrelated to the current AI paradigm are totally doomed, but I support theoretical approaches getting funding because who knows, maybe they’re right and I’m wrong.
And on the other side, given that having people at frontier AI labs who care about AI risk is absolutely vital for practical alignment, I take anti-frontier lab rhetoric as breaking a truce between the two groups in a way that makes AI risk worse. Even if this approach seems doomed to you, I think if you put some probability on you being wrong about it being doomed then the cost-benefit analysis should still come up robustly positive for AI-risk-aware people working at frontier labs (including on capabilities).
This is a bit outside the scope of your essay since you focused on leaders at Anthropic who it’s definitely fair to say have advanced timelines by some significant amount. But for the marginal worker at a frontier lab who might be discouraged from joining due to X-risk concerns, I think the impact on timelines is very small and the possible impact on AI risk is relatively much larger.
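A minimal way to make that cost-benefit point concrete is a toy expected-value sketch. This is only an illustration – the variable names and numbers below are placeholder assumptions I’m making up, not estimates either of us has given:

```python
# Toy expected-value sketch of the cost-benefit claim above.
# All numbers are illustrative placeholders, not real estimates.

p_not_doomed = 0.3              # assumed probability that practical alignment at frontier labs can work
risk_cut_if_it_works = 0.10     # assumed reduction in extinction risk from risk-aware people at labs, if it works
marginal_timeline_cost = 0.01   # assumed extra extinction risk from the marginal speed-up of timelines

net_risk_reduction = p_not_doomed * risk_cut_if_it_works - marginal_timeline_cost
print(f"net change in extinction risk: {net_risk_reduction:+.3f}")  # positive means net benefit
```

Plug in your own numbers; the point is just that even a modest probability of the practical approach working can outweigh a small marginal timeline cost.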
Above and beyond the argument over whether practical or theoretical alignment can work I think there should be some norm where both sides give the other some credit …
E.g. for myself I think theoretical approaches that are unrelated to the current AI paradigm are totally doomed, but I support theoretical approaches getting funding because who knows, maybe they’re right and I’m wrong.
I understand this is a common area of debate.
Neither approach works, based on the reasoning I’ve gone through.
If we can get a guarantee, it’ll also include guarantees about grammar and syntax. That doesn’t seem like too much to ask. It might have been too much to ask before the model worked at all, but SLT seems on track to give a foothold from which to get a guarantee. We might need to get frontier AIs to help with figuring out how to nail down the guarantee, which would mean knowing what to ask for. But we may be able to be dramatically more demanding in what we ask for out of a guarantee-based approach than previous guarantee-based approaches, precisely because we can get frontier AIs to help out, if we know what bound we want to find.
My point was that even though we already have an extremely reliable recipe for getting an LLM to understand grammar and syntax, we are not anywhere near a theoretical guarantee for that. The ask for a theoretical guarantee seems impossible to me, even on much easier things that we already know modern AI can do.
When someone asks for an alignment guarantee I’d like them to demonstrate what they mean by showing a guarantee for some simpler thing—like a syntax guarantee for LLMs. I’m not familiar with SLT but I’ll believe it when I see it.