a common discussion pattern: person 1 claims X solves/is an angle of attack on problem P. person 2 is skeptical. there is also some subproblem Q (90% of the time not mentioned explicitly). person 1 is defending a claim like “X solves P conditional on Q already being solved (but Q is easy)”, whereas person 2 thinks person 1 is defending “X solves P via solving Q”, and person 2 also believes something like “subproblem Q is hard”. the problem with this discussion pattern is it can lead to some very frustrating miscommunication:
if the discussion recurses into whether Q is hard, person 1 can get frustrated because it feels like a diversion from the part they actually care about/have tried to find a solution for, which is how to find a solution to P given a solution to Q (again, usually Q is some implicit assumption that you might not even notice you have). it can feel like person 2 is nitpicking or coming up with fully general counterarguments for why X can never be solved.
person 2 can get frustrated because it feels like the original proposed solution doesn’t engage with the hard subproblem Q. person 2 believes that assuming Q were solved, then there would be many other proposals other than X that would also suffice to solve problem P, so that the core ideas of X actually aren’t that important, and all the work is actually being done by assuming Q.
I can see how this could be a frustrating pattern for both parties, but I think it’s often an important conversation tree to explore when person 1 (or anyone) is using results about P in restricted domains to make larger claims or arguments about something that depends on solving P at the hardest difficulty setting in the least convenient possible world.
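to make the crux more precise, here's a rough formalization (the framing and symbols are mine; neither person is committed to them): let $q$ be the probability that subproblem Q gets solved, $s_X$ the probability that X solves P given a solution to Q, and $s_{X'}$ the same quantity for some simpler baseline proposal $X'$. then the marginal value of X is roughly

$$q \cdot (s_X - s_{X'})$$

person 1 is mostly arguing that $s_X$ is high; person 2 is arguing both that $q$ is low and that $s_X \approx s_{X'}$, so the product is small on either count. note that the two can agree on $s_X$ and still disagree about whether X matters.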
As an example, consider the following three posts:
Challenge: construct a Gradient Hacker
Gradient hacking is extremely difficult
My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”
I think both of the first two posts are valuable and important work on formulating and analyzing restricted subproblems. But I object to the citation of the second post (in the third post) as evidence in support of the larger claim that doom from mesa-optimizers or gradient descent is unlikely in the real world, and I object to the second post itself to the degree that it is implicitly making this claim.
There’s an asymmetry when person 1 is arguing for an optimistic view on AI x-risk and person 2 is arguing for a doomer-ish view, in the sense that person 1 has to address all counterarguments but person 2 only has to find one hole. But this asymmetry is unfortunately a fact about the problem domain and not the argument / discussion pattern between 1 and 2.
I find myself in person 2’s position fairly often, and it is INCREDIBLY frustrating when person 1 claims they’ve “solved” P while ignoring the actual hard part (or one of the hard parts). And then they get MAD when I point out why their “solution” is ineffective. Oh, wait, I’m also extremely annoyed when person 2 won’t even take steps to CONSIDER my solution: maybe subproblem Q is actually easy once the rest of the path to victory is clarified.
In neither case can any progress be made without actually addressing how Q fits into P, and what is the actual detailed claim of improvement of X in the face of both Q and non-Q elements of P.
yeah, but that’s because Q is easy if you solve P

Very nicely described, this might benefit from becoming a top level post
For example?
here’s a straw hypothetical example where I’ve exaggerated both 1 and 2; the details aren’t exactly correct but the vibe is more important:
1: “Here’s a super clever extension of debate that mitigates obfuscated arguments [etc], this should just solve alignment”
2: “Debate works if you can actually set the goals of the agents (i.e you’ve solved inner alignment), but otherwise you can get issues with the agents coordinating [etc]”
1: “Well the goals have to be inside the NN somewhere so we can probably just do something with interpretability or whatever”
2: “how are you going to do that? your scheme doesn’t tackle inner alignment, which seems to contain almost all of the difficulty of alignment to me. the claim you just made is a separate claim from your main scheme, and the cleverness in your scheme is in a direction orthogonal to this claim”
1: “idk, also that’s a fully general counterargument to any alignment scheme, you can always just say ‘but what if inner misalignment’. I feel like you’re not really engaging with the meat of my proposal, you’ve just found a thing you can say to be cynical and dismissive of any proposal”
2: “but I think most of the difficulty of alignment is in inner alignment, and schemes which kinda handwave it away are trying to solve some problem which is not the actual problem we need to solve to not die from AGI. I agree your scheme would work if inner alignment weren’t a problem.”
1: “so you agree that in a pretty nontrivial number [let’s say both 1&2 agree this is like 20% or something] of worlds my scheme does actually work? I mean, how can you be that confident that inner alignment is that hard? in the worlds where inner alignment turns out to be easy, my scheme will work.”
2: “I’m not super confident, but if we assume that inner alignment is easy then I think many other simpler schemes will also work, so the cleverness that your proposal adds doesn’t actually make a big difference.”
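(plugging this exchange into the rough decomposition above: both sides stipulate $q \approx 0.2$, and 2’s last reply is the claim that $s_X \approx s_{X'}$, so the marginal value $0.2 \cdot (s_X - s_{X'})$ is small. 1’s “my scheme works in ~20% of worlds” is the unconditional $0.2 \cdot s_X$, which can be sizable even while the scheme’s marginal contribution is negligible.)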
So Q=inner alignment? Seems like person 2 not only pointed to inner alignment explicitly (so it can no longer be “some implicit assumption that you might not even notice you have”), but also said that it “seems to contain almost all of the difficulty of alignment to me”. He’s clearly identified inner alignment as a crux, rather than as something meant “to be cynical and dismissive”. At that point, it would have been prudent of person 1 to shift his focus onto inner alignment and explain why he thinks it is not hard.
Note that your post suddenly introduces “Y” without defining it. I think you meant “X”.