I think the text is mostly focussed on the problems humans have run into when building this stuff, because these are known and hence our only solid, detailed empirical basis, while the problems AI would run into when building this stuff are entirely hypothetical.
It then makes a reasonable argument that AI probably won’t be able to circumvent these problems, because higher intelligence and speed alone would not plausibly fix them, and in fact, a plausible fix might have to be slow, human-mediated, and practical.
One can disagree with that conclusion, but as for the approach, what alternative would you propose when trying to judge AI risk?
I think I implicitly answered you elsewhere, though I’ll add a more literal response to your question here.
On a personal level, none of this is relevant to AI risk. Yudkowsky’s interest in it seems like more of a byproduct of the reading he did when he was young and impressionable than anything else, reading I did not share. Neither he nor I think this is necessary for x-risk scenarios, with me probably being on the more skeptical side, putting more weight on practical impediments that strongly encourage doing the simple things that work, e.g. conventional biotech.
Since this isn’t a crux for me and I don’t feel the same personal draw towards discussing it, I basically don’t think about it when modelling AI risk scenarios; I think about it when it comes up because it’s technically interesting. If someone were reasoning about this because they do think it’s a crux for their AI risk scenarios, and they came to me for advice, I’d suggest testing that crux before suggesting they be more clever about de novo nanotech arguments.