Big changes within companies are typically bottlenecked much more by coalitional politics than knowledge of technical details.
Sure, but I bet that’s because in fact people are usually attuned to the technical details. I imagine that if you were really bad at the technical details, that would become a bigger bottleneck.
[Epistemic status: I have never really worked at a big company, whereas Richard has. I have been a PhD student at UC Berkeley, but I don’t think that counts.]
I think one effect you’re missing is that the big changes are precisely the ones that mostly rely on factors about which it’s hard to specify important technical details. E.g. “should we move our headquarters to London”, “should we replace the CEO”, or “should we change our mission statement” are mostly going to be driven by coalitional politics + high-level intuitions and arguments, whereas “should we do X training run or Y training run” is more amenable to technical discussion, but also has less lasting effects.
Do you not think it’s a problem that big-picture decisions can be blocked by a kind of overly-strong demand for rigor from people who are used to mostly thinking about technical details?
I sometimes notice something roughly like the following dynamic:
1. Person A is trying to make a big-picture claim (e.g. that ASI could lead to extinction) that cannot be argued for purely in terms of robust technical details (since we don’t have ASI yet to run experiments, and don’t have a theory yet),
2. Person B is more used to thinking about technical details that allow you to draw robust but much more limited conclusions.
3. B finds some detail in A’s argument that is unjustified or isn’t exactly right, or even just might be wrong.
4. A thinks the detail really won’t change the conclusion, and thinks this just misses the point, but doesn’t want to spend time, because getting all the details exactly right would take maybe a decade.
5. B concludes that A doesn’t know what they’re talking about, continues to ignore the big-picture question entirely, and keeps focusing on more limited questions.
6. The issue ends up ignored.
It seems to me that this dynamic is part of the coalitional politics and how the high-level arguments are received?
Yes, that can be a problem. I’m not sure why you think that’s in tension with my comment though.
I don’t think it’s *contradicting* it but I vaguely thought maybe it’s in tension with:
Big changes within ~~companies~~ Government AI x-risk policy are typically bottlenecked much more by coalitional politics than knowledge of technical details.
Because A’s lack of knowledge of technical details ends up getting B to reject and oppose A.
Mostly I wasn’t trying to push back against you, though; I was more trying to download part of your model of how important you think this is, out of curiosity, given your experience at OA.
A key crux is that I don’t generally agree with this claim in AI safety:
A thinks the detail really won’t change the conclusion, and thinks this just misses the point, but doesn’t want to spend time, because getting all the details exactly right would take maybe a decade.
In this specific instance, it could work, but in general I think ignoring details is a core failure mode of people who tend towards abstract/meta stuff, which is absolutely the case on LessWrong.
I think abstraction/meta/theoretical work is useful, but also that theory absolutely does require empirics to make sure you are focusing on the relevant parts of the problem.
This is especially the case if you are focused on working on solutions rather than trying to draw attention to a problem.
I’ll just quote Richard Ngo here, because he made the point more concisely than I can (it’s in a specific setting, but the general point holds):
I currently think of Eliezer as someone who has excellent intuitions about the broad direction of progress at a very high level of abstraction—but where the very fact that these intuitions are so abstract rules out the types of path-dependencies that I expect solutions to alignment will actually rely on. At this point, people who find Eliezer’s intuitions compelling should probably focus on fleshing them out in detail—e.g. using toy models, or trying to decompose the concept of consequentialism—rather than defending them at a high level.
But the problem is that we likely don’t have time to flesh out all the details or run all the relevant experiments before it might be too late, and governments need to come to understand the risk based on arguments that therefore cannot possibly rely on everything being fleshed out.
Of course I want people to gather as much important empirical evidence and concrete detailed theory as possible asap.
Also, the arguments made before everything is worked out in detail need to inform which experiments are done, which is why people who have actually listened to those pre-detailed arguments end up doing much more relevant empirical work on average, IMO.
This comment articulates the main thought I was having while reading this post. I wonder how Buck avoids this very trap, and whether there is any hope at all of the Moderate strategy overcoming this problem.