We probably disagree a little. I’d bring up a few points. E.g., you and I reason about X-risk better than them in large part because we pay attention to people who are smarter than us and who are good at, or ahead of the curve on, reasoning about X-risk. E.g., more intelligence gives you more intellectual slack: if you can think faster, a given argument becomes less costly in time (though maybe not in opportunity cost) to come to understand. E.g., wisdom (in this sense: https://www.lesswrong.com/posts/fzKfzXWEBaENJXDGP/what-is-wisdom-1) is bottlenecked on considering and integrating many possibilities, which is a difficult cognitive task.
But, I agree it’s not that strong a reason.
> Realistically I do not think there is any level of genetic modification at which humans can match the pace of ASI.
Yeah, I agree, but that’s not the relevant threshold here. The question is more like: can humanity feasibly get smart enough, soon enough, to be making much faster progress in material conditions, such that justifications of the form “AGI would give us much faster progress in material conditions” lose most of their (perhaps only apparent) force? I think we probably can.
> All of them, or just these two?

There were just four reasons, right? Your three numbered items, plus “effectful wise action is more difficult than effectful unwise action, and requires more ideas / thought / reflection, relatively speaking; and because generally humans want to do good things”. I think that quotation was the strongest argument. As for numbered item #1, I don’t know why you believe it, but it doesn’t seem clearly false to me either.

So I wrote:
> I think the threshold of brainpower where you can start making meaningful progress on the technical problem of AGI alignment is significantly higher than the threshold where you can start making meaningful progress toward AGI.
Simply put, it’s a harder problem. More specifically, it has significantly worse feedback signals: it’s much easier to tell when, and on what tasks, your system’s performance is or isn’t going up than to tell whether you’ve made a thing that will continue pursuing XYZ as it gets much smarter. Another sign that it’s harder: progress in capabilities seems to accelerate given more resources, whereas that is (according to me) barely true, or not true, of alignment so far.
My own experience (which I don’t expect you to update much on, but it’s part of why I believe these things) is that I’m really smart, and as far as I can tell, I’m too dumb to even really get started (cf. https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html). I’ve worked with people who are smarter than I am, and AFAICT they are also totally failing to address the problem. (To be clear, I definitely don’t think it’s “just about being smart”; but I do think there’s some threshold effect.) It’s hard to even stay focused on the problem for the years it apparently takes to work through wrong preconceptions, bad ideas, etc., and you (or rather, I, and ~everyone I’ve directly worked with) apparently have to do that in order to understand the problem.