Thanks for writing this up. While I don’t have much context on what specifically has gone well or badly for your team, I do feel pretty skeptical about the types of arguments you give at several points: in particular focusing on theories of change, having the most impact, comparative advantage, work paying off in 10 years, etc. I expect that this kind of reasoning itself steers people away from making important scientific contributions, which are often driven by open-ended curiosity and a drive to uncover deep truths.
(A provocative version of this claim: for the most important breakthroughs, it’s nearly impossible to identify a theory of change for them in advance. Imagine Newton or Darwin trying to predict how understanding mechanics/evolution would change the world. Now imagine them trying to do that before they had even invented the theory! And finally imagine if they only considered plans that they thought would work within 10 years, and the sense of scarcity and tension that would give rise to.)
The rest of my comment isn’t directly about this post, but close enough that this seems like a reasonable place to put it. EDIT: to be more clear: the rest of this comment is not primarily about Neel or “pragmatic interpretability”, it’s about parts of the field that I consider to be significantly less relevant to “solving alignment” than that (though work that’s nominally on pragmatic interpretability could also fall into the same failure modes). I clarify my position further in this comment; thanks Rohin for the pushback.
I get the sense that there was a “generation” of AI safety researchers who have ended up with a very marginalist mindset about AI safety. Some examples:
the evals that Beth Barnes (and maybe Dan Hendrycks?) are focusing on
the scenarios that Daniel Kokotajlo is focusing on
the models of misalignment that Evan Hubinger is focusing on
the forecasting that the OpenPhil worldview investigations team focused on
scary demos
safety cases
policy approaches like SB-1047
In other words, whole swathes of the field are not even aspiring to be the type of thing that could solve misalignment. In the terminology of this excellent post, they are all trying to attack a category I problem, not a category II problem. Sometimes it feels like most of the field (EDIT: originally “almost the entire field”) is Goodharting on the subgoal of “write a really persuasive memo to send to politicians”. Pragmatic interpretability feels like another step in that direction (EDIT: but still significantly more principled than the things I listed above).
This is all related to something Buck recently wrote: “I spend most of my time thinking about relatively cheap interventions that AI companies could implement to reduce risk assuming a low budget, and about how to cause AI companies to marginally increase that budget”. I’m sure Buck has thought a lot about his strategy here, and I’m sure that you’ve thought a lot about your strategy as laid out in this post, and so on. But a part of me is sitting here thinking: man, everyone sure seems to have given up. (And yes, I know it doesn’t feel like giving up from the inside, but from my perspective that’s part of the problem.)
Now, a lot of the “old guard” seems to have given up too. But they at least know what they’ve given up on. There was an ideal of fundamental scientific progress that MIRI and Paul and a few others were striving towards; they knew at least what it would feel like (if not what it would look like) to actually make progress towards understanding intelligence. Eliezer and various others no longer think that’s plausible. I disagree. But aside from the object-level disagreement, I really want people to be aware that this is a thing that’s at least possible in principle to aim for, lest the next generation of the AI safety community end up giving up on it before they even know what they’ve given up on.
(I’ll leave for another comment/post the question of what went wrong in my generation. The “types of arguments” I objected to above all seem quite EA-flavored, and so one salient possibility is just that the increasing prominence of EA steered my generation away from the type of mentality in which it’s even possible to aim towards scientific breakthroughs. But even if that’s one part of the story, I expect it’s more complicated than that.)
I wish when you wrote these comments you acknowledged that some people just actually think that we can substantially reduce risk via what you call “marginalist” approaches. Not everyone agrees that you have to deeply understand intelligence from first principles else everyone dies. (EDIT: See Richard’s clarification downthread.) Depending on how you choose your reference class, I’d guess most people disagree with that.
Imo the vast, vast majority of progress in the world happens via “marginalist” approaches, so if you do think you can win via “marginalist” approaches you should generally bias towards them.
Yeah, that’s basically my take—I don’t expect anything to “solve” alignment, but I think we can achieve major risk reductions by marginalist approaches. Maybe we can also achieve even more major risk reductions with massive paradigm shifts, or maybe we just waste a ton of time, I don’t know.
It’s worth disambiguating two critiques in Richard’s comment:
1) the AI safety community doesn’t try to fundamentally understand intelligence
2) the AI safety community doesn’t try to solve alignment for smarter-than-human AI systems
Tbc, they are somewhat related (i.e. people trying to fundamentally understand intelligence tend to think about alignment more) but clearly distinct. The “mainstream” AI safety crowd (myself included) is much more sympathetic to 2 than 1 (indeed Neel has said as much).
There’s something to the idea that “marginal progress doesn’t feel like marginal progress from the inside”. Like, even if no one breakthrough or discovery “solves alignment”, a general frame of “let’s find principled approaches” is often more generative than “let’s find the cheapest 80/20 approach” (both can be useful, and historically the safety community has probably leaned too far towards principled, but maybe the current generation is leaning too far the other way).
2) the AI safety community doesn’t try to solve alignment for smarter-than-human AI systems
I assume you’re referring to “whole swathes of the field are not even aspiring to be the type of thing that could solve misalignment”.
Imo, chain of thought monitoring, AI control, amplified oversight, MONA, reasoning model interpretability, etc, are all things that could make the difference between “x-catastrophe via misalignment” and “no x-catastrophe via misalignment”, so I’d say that lots of our work could “solve misalignment”, though not necessarily in a way where we can know that we’ve solved misalignment in advance.
Based on Richard’s previous writing (e.g. 1, 2) I expect he sees this sort of stuff as not particularly interesting alignment research / doesn’t really help, so I jumped ahead in the conversation to that disagreement.
even if no one breakthrough or discovery “solves alignment”, a general frame of “let’s find principled approaches” is often more generative than “let’s find the cheapest 80/20 approach”
Sure, I broadly agree with this, and I think Neel would too. I don’t see Neel’s post as disagreeing with it, and I don’t think the list of examples that Richard gave is well described as “let’s find the cheapest 80⁄20 approach”.
I think my use of the word “marginalist” was probably a mistake, because it conflates two distinct things that I’m skeptical about:
People no longer trying to make models more aligned (but e.g. trying to do work that primarily cashes out in political outcomes). This is what I mean by “not even aspiring to be the type of thing that could solve alignment”.
People using engineering-type approaches (rather than science-type approaches) to try to make models more aligned.
The list I gave above was of things that fall into category 1, whereas (almost?) all of the things you named fall into category 2. What I want more of is category 3: science-type approaches. One indicator that something is a science-type approach is that it could potentially help us understand something fundamental about intelligence; another is that, if it works, we’ll know in advance (I used to not care about this, but have changed my mind).
I think there are versions of most of the things you named that could be in category 3, but people mostly seem to be doing category-2 versions of them, in significant part because of the sort of EA-style reasoning that I was criticizing from Neel’s original post.
When I wrote “pragmatic interpretability feels like another step in that direction” I meant something like: ambitious interpretability was trying to do 3, and pragmatic interpretability seems like it’s nominally trying to do 2, and may in practice end up being mostly 1. For example, “Stop models acting differently when tested” could be a part of an engineering-type pipeline for fixing misalignments in models, but could also end up drifting towards “help us get better evidence to convince politicians and lab leaders of things”. However, I’m not claiming that pragmatic interpretability is a central example of “not even aspiring to be the type of thing that could solve alignment”. Apologies for the bad phrasings.
Makes sense, I still endorse my original comment in light of this answer (as I already expected something like this was your view). Like, I would now say
Imo the vast, vast majority of progress in the world happens via “engineering-type / category 2” approaches, so if you do think you can win via “engineering-type / category 2” approaches you should generally bias towards them
while also noting that the way we are using the phrase “engineering-type” here includes a really large amount of what most people would call “science” (e.g. it includes tons of academic work), so it is important when evaluating this claim to interpret the words “engineering” and “science” in context rather than via their usual connotations.
Yepp, makes sense, and it’s a good reminder for me to be careful about how I use these terms.
One clarification I’d make to your original comment though is that I don’t endorse “you have to deeply understand intelligence from first principles else everyone dies”. My position is closer to “you have to be trying to do something principled in order for your contribution to be robustly positive”. Relatedly, agent foundations and mech-interp are approximately the only two parts of AI safety that seem robustly good to me—with a bunch of other stuff like RLHF, or evals, or (almost all) governance work, I feel pretty confused about whether they’re good or bad or basically just wash out even in expectation.
This is still consistent with risk potentially being reduced by what I call engineering-type work, it’s just that IMO that involves us “getting lucky” in an important way which I prefer we not rely on. (And trying to get lucky isn’t a neutral action—engineering-type work can also easily have harmful effects.)
Fair, I’ve edited the comment with a pointer. It still seems to me to be a pretty direct disagreement with “we can substantially reduce risk via [engineering-type / category 2] approaches”.
My claim is “while it certainly could be net negative (as is also the case for ~any action including e.g. donating to AMF), in aggregate it is substantially positive expected risk reduction”.
Your claim in opposition seems to be “who knows what the sign is, we should treat it as an expected zero risk reduction”.
Though possibly you are saying “it’s bad to take actions that have a chance of backfiring, we should focus much more on robustly positive things” (because something something virtue ethics?), in which case I think we have a disagreement on decision theory instead.
I still want to claim that in either case, my position is much more common (among the readership here), except inasmuch as they disagree because they think alignment is very hard and that’s why there’s expected zero (or negative) risk reduction. And so I wish you’d flag when your claims depend on these takes (though I realize it is often hard to notice when that is the case).
I expect it’s not worth our time to dig too deep into whose position is more common here. But I think that a lot of people on LW have high P(doom) in significant part because they share my intuition that marginalist approaches don’t reliably work. I do agree that my combination of “marginalist approaches don’t reliably improve things” and “P(doom) is <50%” is a rare one, but I was only making the former point above (and people upvoted it accordingly), so it feels a bit misleading to focus on the rareness of the overall position.
(Interestingly, while the combination I describe above is a rare one, the opposite combination is also rare—Daniel Kokotajlo is the only person who comes to mind who disagrees with me on both of these propositions simultaneously. Note that he doesn’t characterize his current work as marginalist, but even aside from that question I think this characterization of him is accurate—e.g. he has talked to me about how changing the CEO of a given AI lab could swing his P(doom) by double-digit percentage points.)
On reflection, it’s not actually about which position is more common. My real objection is that imo it was pretty obvious that something along these lines would be the crux between you and Neel (and the fact that it is a common position is part of why I think it was obvious).
Inasmuch as you are actually trying to have a conversation with Neel or address Neel’s argument on its merits, it would be good to be clear that this is the crux. I guess perhaps you might just not care about that and are instead trying to influence readers without engaging with the OP’s point of view, in which case fair enough. Personally I would find that distasteful / not in keeping with my norms around collective-epistemics but I do admit it’s within LW norms.
(Incidentally, I feel like you still aren’t quite pinning down your position—depending on what you mean by “reliably” I would probably agree with “marginalist approaches don’t reliably improve things”. I’d also agree with “X doesn’t reliably improve things” for almost any interesting value of X.)
Whoa, you think the scenarios I’m focusing on are marginalist? I didn’t expect you to say that. I generally think of what we are doing as (a) forecasting and (b) making ambitious solve-approximately-all-the-problems plans to present to the world. Forecasting isn’t marginalist; it’s a type error to think so. And as for our plans, well, they seem pretty ambitious to me.
I regret using the word “marginalist”, it’s a bit too confusing. But I do have a pretty high bar for what counts as “ambitious” in the political domain—it involves not just getting the system to do something, but rather trying to change the system itself. Cummings and Thiel are central examples (Geoff Anders maybe also was aiming in that direction at one point).
I expect that this kind of reasoning itself steers people away from making important scientific contributions, which are often driven by open-ended curiosity and a drive to uncover deep truths.
I agree with this statement denotatively, and my own interests/work have generally been “driven by open-ended curiosity and a drive to uncover deep truths”, but isn’t this kind of motivation also what got humanity into its current mess? In other words, wasn’t the main driver of AI progress this kind of curiosity (until perhaps the recent few years when it has been driven more by commercial/monetary/power incentives)?
For this reason, I would hesitate to encourage more people to follow their own curiosity, even people who are already in AI safety research: illegible safety problems can turn their efforts net-negative if they’re being insufficiently strategic (and being strategic seems hard to do while also being driven mainly by curiosity).
I think I’ve personally been lucky, or skilled in some way that I don’t understand, in that my own curiosity has perhaps been more aligned with what’s good than most people’s, but even some of my interests, e.g. in early cryptocurrency, might have been net-negative.
I guess this is related to our earlier discussion about how important being virtuous is to good strategy/prioritization, and my general sense is that consistently good strategy requires a high amount of consequentialist reasoning, because the world is too complicated and changes too much and too frequently to rely on pre-computed shortcuts. It’s hard for me to understand how largely intuitive/nonverbal virtues/curiosity could be doing enough “compute” or “reasoning” to consistently output good strategy.
I agree with this statement denotatively, and my own interests/work have generally been “driven by open-ended curiosity and a drive to uncover deep truths”, but isn’t this kind of motivation also what got humanity into its current mess? In other words, wasn’t the main driver of AI progress this kind of curiosity (until perhaps the recent few years when it has been driven more by commercial/monetary/power incentives)?
Interestingly, I was just having a conversation with Critch about this. My contention was that, in the first few decades of the field, AI researchers were actually trying to understand cognition. The rise of deep learning (and especially the kind of deep learning driven by massive scaling) can be seen as the field putting that quest on hold in order to optimize for more legible metrics.
I don’t think you should find this a fully satisfactory answer, because it’s easy to “retrodict” ways that my theory was correct. But that’s true of all explanations of what makes the world good at a very abstract level, including your own answer of metaphilosophical competence. (Also, we can perhaps cash my claim out in predictions, like: was the criticism that deep learning didn’t actually provide good explanations of or insight into cognition a significant barrier to more researchers working on it? Without having looked it up, I suspect so.)
consistently good strategy requires a high amount of consequentialist reasoning
I don’t think that’s true. However I do think it requires deep curiosity about what good strategy is and how it works. It’s not a coincidence that my own research on a theory of coalitional agency was in significant part inspired by strategic failures of EA and AI safety (with this post being one of the earliest building blocks I laid down). I also suspect that the full theory of coalitional agency will in fact explain how to do metaphilosophy correctly, because doing good metaphilosophy is ultimately a cognitive process and can therefore be characterized by a sufficiently good theory of cognition.
Again, I don’t expect you to fully believe me. But what I most want to read from you right now is an in-depth account of which things in the world have gone or are going most right, and the ways in which you think metaphilosophical competence or consequentialist reasoning contributed to them. Without that, it’s hard to trust metaphilosophy or even know what it is (though I think you’ve given a sketch of this in a previous reply to me at some point).
I should also try to write up the same thing, but about how virtues contributed to good things. And maybe also science, insofar as I’m trying to defend doing more science (of cognition and intelligence) in order to help fix risks caused by previous scientific progress.
But what I most want to read from you right now is an in-depth account of which things in the world have gone or are going most right, and the ways in which you think metaphilosophical competence or consequentialist reasoning contributed to them.
(First a terminological note: I wouldn’t use the phrase “metaphilosophical competence”, and instead tend to talk about either “metaphilosophy”, meaning the study of the nature of philosophy and philosophical reasoning, how philosophical problems should be solved, etc., or “philosophical competence”, meaning how good someone is at solving philosophical problems or doing philosophical reasoning. And sometimes I talk about them together, like in “metaphilosophy / AI philosophical competence”, because I think solving metaphilosophy is the best way to improve AI philosophical competence. Here I’ll interpret you to just mean “philosophical competence”.)
To answer your question, it’s pretty hard to think of really good examples, I think because humans are very bad at both philosophical competence and consequentialist reasoning, but here are some:
the game theory around nuclear deterrence, helping to prevent large-scale war so far
economics and its influence on government policy, e.g., providing support for property rights, markets, and regulations around things like monopolies and externalities (but it’s failing pretty badly on AGI/ASI)
analytical philosophy making philosophical progress insofar as it asks important questions and delineates various plausible answers (but doing badly insofar as individual philosophers hold inappropriate levels of confidence, and the field fails to focus on the really important problems, e.g., those related to AI safety)
certain philosophers / movements (rationalists, EA) emphasizing philosophical (especially moral) uncertainty to some extent, and realizing the importance of AI safety
MIRI updating on evidence/arguments and pivoting strategy in response (albeit too slowly)
I guess this isn’t an “in-depth account” but I’m also not sure why you’re asking for “in-depth”, i.e., why doesn’t a list like this suffice?
I should also try to write up the same thing, but about how virtues contributed to good things.
I think non-consequentialist reasoning or ethics probably worked better in the past, when the world changed more slowly and we had more chances to learn from our mistakes (and refine our virtues/deontology over time), so I wouldn’t necessarily find this kind of writing very persuasive, unless it somehow addressed my central concern that virtues do not seem to be a kind of thing that is capable of doing enough “compute/reasoning” to find consistently good strategies in a fast-changing environment on the first try.