(Noting that I don’t endorse the description of my argument as “physicalist”, though I acknowledge that the “spontaneously” thing kinda sounds like that. Allow me to amend / clarify: I’m saying that you, a mind with understanding and agency, cannot spontaneously assemble beams into a bridge—you have to have some understanding about load and steel and bridges and such. I use this to counter “no blockers” arguments, but I’m not denying that we’re in a special regime due to the existence of minds (humans); the point is that those minds still have to understand a bunch of specific stuff. As mentioned here: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce#The__no_blockers__intuition )
Yeah, I almost added a caveat about the physicalist thing probably not being your view. But it was my interpretation.
Your clarification does make more sense. I do still feel like there’s some reference class gerrymandering in the “you, a mind with understanding and agency” framing, because if you select for people who have already accumulated the steel beams, the probability seems pretty high that they’ll be able to construct the bridge. Obviously this isn’t a very crucial nit to pick: the important part of the analogy is that if you’re trying to construct a bridge before trigonometry has been invented, you’ll face some trouble.
The important question is: how adequate are existing ideas w.r.t. the problem of constructing ASI?
In some sense we both agree that current humans don’t understand what they’re doing. My ASI-soon picture is somewhat analogous to an architect simply throwing so many steel beams at the problem that the pile eventually pokes out of the water and you can, technically, drive across it (with no guarantee of safety).
However, you don’t believe we know enough to get even that far (by 2030). To you it is perhaps more closely analogous to trying to construct a bridge without having even an intuitive understanding of gravity.
Yeah, if I had to guess, I’d guess it’s more like this. (I’d certainly say so w.r.t. alignment—we have no fucking idea what mind-consequence-determiners even are.)
Though I suppose I don’t object to your analogy here, given that it wouldn’t actually work! That “bridge” would collapse the first time you drive a truck over it.