2.1: This doesn’t appear to follow from the previous two steps. EG, is a similar argument supposed to establish that, a priori, bridges are a long way off? This seems like a very loose and unreliable form of argument, generally speaking.
It seems fine to me; bridges were a long way off at most times at which bridges didn’t exist! (What wouldn’t be fine is continuing to make the a priori argument once there is evidence that we have many of the ideas.)
I guess it depends on what “a priori” is taken to mean (and also what “bridges” is taken to mean). If “a priori” includes reasoning from your own existence, then (depending on “bridge”) it seems like bridges were never “far off” while humans were around. (Simple bridges being easy to construct & commonly useful.)
I don’t think there is a single correct “a priori” (or if there is, it’s hard to know about), so I think it is easy to move work between this step and the next step in Tsvi’s argument (which is about the a posteriori view) by shifting perspectives on what is prior vs evidence. This creates a risk of shifting things around to quietly exclude the sort of reasoning I’m doing from either the prior or the evidence.
The language Tsvi is using wrt the prior suggests a very physicalist, entropy-centric prior, EG “steel beams don’t spontaneously form themselves into bridges”—the sort of prior which doesn’t expect to be on a planet with intelligent life. Fair enough, so far as it goes. It does seem like bridges are a long way off from this prior perspective.

However, Tsvi is using this as an intuition pump to suggest that the prior probability of ASI is very low, so it seems worth pointing out that the prior probability of just about everything we commonly have today is very low by this prior. Simply put, this prior needs a lot of updating on a lot of stuff before it is ready to predict the modern world. It doesn’t make sense to ONLY update this prior on evidence that pattern-matches to “evidence that ASI is coming soon” in the obvious sense. First you have to find a good way to update it on being on a world with intelligent life & being a few centuries after an industrial revolution and a few decades into a computing revolution.

This is hard to do from a purely physicalist type of perspective, because the physical probability of ASI under these circumstances is really hard to know; such a prior doesn’t account for our uncertainty about how things will unfold & how these things work in general. (We could know the configuration of every physical particle on Earth & still be only marginally less uncertain about ASI timelines, since we can’t just run the simulation forward.)
I can’t strongly defend my framing of this as a critique of step 2.1 as opposed to step 3, since there isn’t a good objective stance on what should go in the prior vs the posterior.
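To make the movable-split point concrete, here’s a minimal sketch (all numbers and likelihood ratios are hypothetical, chosen purely for illustration, and the observations are treated as independent): in odds form, the posterior doesn’t care whether a given observation is folded into the “prior” or counted as “evidence”, which is exactly why work can be shifted between step 2.1 and step 3 so freely.

```python
# Toy Bayes bookkeeping (all numbers hypothetical, for illustration only):
# in odds form, it makes no difference whether an observation is folded
# into the "prior" or counted as "evidence" -- the posterior is the same.

p_h = 1e-6         # hypothetical bare physicalist prior on the hypothesis
lr_life = 50.0     # hypothetical likelihood ratio: "intelligent life exists"
lr_compute = 20.0  # hypothetical likelihood ratio: "decades into a computing revolution"

def update(odds, lr):
    # One Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
    return odds * lr

# View A: start from the bare prior; treat both observations as evidence.
odds_a = update(update(p_h / (1 - p_h), lr_life), lr_compute)

# View B: fold "intelligent life exists" into the prior (a different "a priori"),
# then update on the remaining observation.
odds_b = update((p_h / (1 - p_h)) * lr_life, lr_compute)

assert odds_a == odds_b       # same posterior either way; only the labels moved
print(odds_a / (1 + odds_a))  # posterior probability, ~1e-3 with these numbers
```

Nothing in the formalism tells you which observations belong in the prior, so the “a priori, ASI is far off” step can quietly absorb or exclude whichever updates one likes.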
(Noting that I don’t endorse the description of my argument as “physicalist”, though I acknowledge that the “spontaneously” thing kinda sounds like that. Allow me to amend / clarify: I’m saying that you, a mind with understanding and agency, cannot spontaneously assemble beams into a bridge—you have to have some understanding about load and steel and bridges and such. I use this to counter “no blockers” arguments, but I’m not denying that we’re in a special regime due to the existence of minds (humans); the point is that those minds still have to understand a bunch of specific stuff. As mentioned here: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce#The__no_blockers__intuition )
Yeah, I almost added a caveat about the physicalist thing probably not being your view. But it was my interpretation.
Your clarification does make more sense. I do still feel like there’s some reference class gerrymandering in the “you, a mind with understanding and agency” framing, because if you select for people who have already accumulated the steel beams, the probability does seem pretty high that they will be able to construct the bridge. Obviously this isn’t a very crucial nit to pick: the important part of the analogy is the part where if you’re trying to construct a bridge when trigonometry hasn’t been invented, you’ll face some trouble.
The important question is: how adequate are existing ideas wrt the problem of constructing ASI?
In some sense we both agree that current humans don’t understand what they’re doing. My ASI-soon picture is somewhat analogous to an architect simply throwing so many steel beams at the problem that they create a pile tall enough to poke out of the water so that you can, technically, drive across it (with no guarantee of safety).
However, you don’t believe we know enough to get even that far (by 2030). To you it is perhaps more closely analogous to trying to construct a bridge without having even an intuitive understanding of gravity.
Yeah, if I had to guess, I’d guess it’s more like this. (I’d certainly say so w.r.t. alignment—we have no fucking idea what mind-consequence-determiners even are.)
Though I suppose I don’t object to your analogy here, given that it wouldn’t actually work! That “bridge” would collapse the first time you drive a truck over it.