(I agree qualitatively, and not sure whether we disagree, but:)
those years could then be used for reducing the other inputs to premature superintelligence.
Basically I’m saying that there are inputs to premature ASI which
are harder to reduce, maybe much harder, compared to anything about chips;
these inputs will naturally get more investment as chips are regulated / stagnate;
these inputs are plausibly/probably enough to get ASI even with current compute levels;
therefore you don’t obviously reach “actuarial escape velocity for postponing AGI” (which maybe you weren’t especially claiming), just by getting a good 10-year delay / a good increase in pro-delay attitudes.
Research is also downstream of attitudes: from what I understand, there is more than enough equipment and enough qualified professionals to engineer deadly pandemics, but almost all of them are not working on that. And it might take at least decades to get from a design for an ASI that bootstraps on a 5 GW datacenter campus, to an ASI that bootstraps on an antique server rack.
Totally, yeah. It’s just that
It’s logistically easier to do algorithms research than pandemic research;
therefore it’s logistically harder to regulate;
and, at least historically, to get access to bio equipment and to expert-metis, you have to be in cultural contact with experts, who have a network-consensus against making pandemics;
but AI stuff has much less of that gating, so it’s more of a free-for-all;
and so you’d need a much broader / stronger cultural consensus against that for it to actually work at preventing the progress.
And it might take at least decades to get from a design for an ASI that bootstraps on a 5 GW datacenter campus, to an ASI that bootstraps on an antique server rack.
I guess it might, but I really super wouldn’t bank on it. Stuff can be optimized a lot.
to get access to bio equipment and to expert-metis, you have to be in cultural contact with experts, who have a network-consensus against making pandemics
The consensus might be mostly sufficient, without it needing to gate access to means of production. I’d guess approximately nobody is trying to route around the gating of network-consensus towards the pandemics-enabling equipment, because the network-consensus by itself makes such people dramatically less likely to appear, as a matter of cultural influence (and the arguments for this being a terrible idea making sense on their own merits) rather than any hard power or regulation.
So my point is the hypothetical of shifting cultural consensus, with regulation and restrictions on compute merely downstream of that, rather than the hypothetical of shifting regulations, restricting compute, and motivating people to route around the restrictions. In this hypothetical, the restrictions on compute are one of the effects of the consensus of extreme caution towards ASI, rather than a central way in which this caution is effected.
But I do think ASI in an antique Nvidia Rubin Ultra NVL576 rack (rather than the modern datacenters built on 180 nm technology) is a very difficult thing to achieve for inventors working in secret from a scientific community that frowns on anyone suspected of working on this, with funding of such work being essentially illegal, and with new papers on the topic needing to be found on the dark web.
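(For a very rough sense of the scale here, a back-of-envelope sketch; the rack power and the rates of algorithmic efficiency gains below are illustrative assumptions, not claims about actual hardware or actual research speed.)

```python
import math

# Back-of-envelope sketch of the compute gap in the hypothetical above.
# All numbers are illustrative assumptions, not hardware claims.

campus_power_w = 5e9   # the 5 GW datacenter campus from the hypothetical
rack_power_w = 6e5     # assumed ~600 kW for a single dense modern rack (rough guess)

# If capability scales roughly with power at fixed hardware efficiency,
# the raw gap is a factor of several thousand.
power_ratio = campus_power_w / rack_power_w
print(f"campus / rack power ratio: ~{power_ratio:,.0f}x")  # ~8,333x

# Years of algorithmic-efficiency gains needed to close that gap,
# under different assumed rates of progress (also illustrative).
for annual_gain in (3.0, 1.5, 1.2):
    years = math.log(power_ratio) / math.log(annual_gain)
    print(f"at ~{annual_gain}x/year: ~{years:.0f} years")  # ~8, ~22, ~50 years
```

The specific numbers don’t matter; the point is that whether closing the gap takes under a decade or many decades swings almost entirely on the assumed rate of algorithmic progress, which is exactly what a consensus of caution would be suppressing.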
Ok, I think I agree qualitatively with almost everything you say (except the thing about compute mattering so much in the longer run). I especially agree (IIUC what you’re saying) that a top priority / best upstream intervention is cultural attitudes. Basically my pushback / nuance is “the cultural consensus has a harder challenge compared to e.g. pandemic stuff, so the successful example of pandemic stuff doesn’t necessarily argue that strongly that the consensus can work for AI in the longer run”. In other words, while I agree qualitatively with
The consensus might be mostly sufficient [in the case of AI]
, I’m also suggesting that it’s quantitatively harder to have the consensus do this work.
It’s a rather absurd hypothetical to begin with, so I don’t have a clear sense of how the more realistic variants of it would go. It gestures qualitatively at how longer timelines might help a lot in principle, but it’s unclear where the balance with other factors ends up in practice, if the cultural dynamic appears at all (which I think it might).
That is, the hypothetical illustrates how I don’t see longer timelines as robustly/predictably mostly hopeless, and how they don’t necessarily get more hopeless over time, though I wouldn’t give such Butlerian Jihad outcomes (even in a much milder form) more than 10%. I think AGIs seriously attempting to prevent premature ASIs (out of fear for their own safety) is more likely than humanity putting in a serious effort towards that on its own initiative, but also, if AGIs succeed, that’s likely because they’ve essentially taken over themselves (probably via gradual disempowerment, since a hard-power takeover would be more difficult for non-ASIs, and there’s time for gradual disempowerment in a long-timeline world).