Hrm. Let me try to give some examples of things I find comprehensible “in the limit” and other things I do not, to try to get it across. In general, grappling for principles, I think that
(1) reasoning in the limit requires you to have a pretty specific notion of what you’re pushing to the limit. If you’re uncertain what the function f(x) does or stands for, or what “x” is, then talking about what f(x + 1000) looks like is gonna be tough. It doesn’t get clearer just because it’s further away.
(2) if you can reason in the limit, you should be able to reason about the not-limit well. If you’re really confused about what f(x + 1) looks like, even though you know f(x), then thinking about f(x + 10000) doesn’t look any better.
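(To make that concrete with a toy sketch of my own, nothing from anyone’s actual model: two candidate f’s that look nearly identical near x can be unrecognizably different by x + 10,000, so whatever uncertainty you have about which f you’re holding only gets worse further out.)

```python
# Toy illustration (entirely made-up f's): if you can't tell which f you're
# dealing with near x, pushing out to x + 10,000 doesn't clarify anything.
f_linear = lambda x: float(x)          # one guess for f
f_expish = lambda x: x * 1.001 ** x    # another guess, nearly identical near x = 10

for x in (10, 10 + 10_000):
    print(x, f_linear(x), f_expish(x))
# At x = 10 the two differ by about 1%; at x + 10,000 they differ by a factor
# of roughly twenty thousand.
```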
So, examples and counterexamples and analogies.
The Neural Tangent Kernel is a theoretical framework meant to help understand what NNs do. It is meant to apply in the limit of an “infinite width” neural network. Notably, although I cannot test an infinite-width neural network, I can make my neural networks wider—I know what it means to move X to X + 1, even though X → inf is not available. People are (of course) uncertain if the NTK is true, but it at least, kinda, makes sense to me for this reason.
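(As a minimal sketch of what I mean by the knob being concrete: this is just my own toy illustration with arbitrary sizes, not anything from the NTK literature. Width is an explicit parameter, so “make it wider” is a well-defined operation even though “make it infinite” isn’t something you can actually run.)

```python
# Sketch (my own toy setup, arbitrary sizes): "width" is a concrete knob.
# The NTK analysis concerns the limit width -> infinity, which I can't run,
# but I know exactly what it means to go from width X to width X + 1.
import torch
import torch.nn as nn

def make_mlp(width: int) -> nn.Module:
    # One hidden layer; the NTK limit is taken as `width` grows without bound.
    return nn.Sequential(nn.Linear(10, width), nn.ReLU(), nn.Linear(width, 1))

x = torch.randn(4, 10)
for width in (64, 65, 4096):   # X, X + 1, and a much bigger X: all perfectly concrete
    print(width, make_mlp(width)(x).shape)
```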
Black holes are what happen in the limit as you increase mass. They were (kinda) obvious, once you put together a few equations about gravity and light, at least in the sense that they were hypothesized a long while ago. But it was unclear what would actually happen—Einstein argued they were impossible with some weird arguments in 1939, but a few months later it turned out he was wrong.
BUT most relevantly for my point here, black holes are not like, in the limit of infinite mass. That still isn’t a thing—physically, infinite mass just consumes everything, I think? But black holes are of sufficiently high mass that weird things happen—and notably you need a specific theory to tell you where those weird things happen. They aren’t a pure “in the limit of mass” argument—they’re the result of a specific belief about how things change continuously as you get massier, a belief with clear predictions along the way, which results in weirdness at a specific point; and you know where that point is because of those other, specific predictions about what happens before things get weird.
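(To make the “specific point” concrete, here’s my own gloss: in general relativity the weirdness kicks in once a mass M is packed inside its Schwarzschild radius,

$$ r_s = \frac{2GM}{c^2} $$

and the same theory makes continuous, checkable predictions all the way up to that threshold. Ordinary predictions below a specific line plus weirdness above it, rather than a bare “more mass, therefore weird” argument.)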
Moving on to intelligence as an application of the above.
So like, Yudkowsky’s argument on corrigibility contains the following sentence:
Suppose that we trained an LLM-like AI to exhibit the behavior “don’t resist being modified” — and then applied some method to make it smarter.
<frustration>To which I scream WHAT METHOD</frustration>. Like, leaving to the side what it looks like as you apply this unnamed method a lot, what I really care about is what happens even when you apply this method a little!
Like let’s imagine that we apply some method to a more familiar object—myself. Suppose we apply some method to make 1a3orn smarter and more effective at accomplishing his goals. Different methods that could conceivably work would be:
I take a bunch of research chemicals, NSI-189, Dihexa, whatever that guy who said he could raise intelligence was on about, even more obscure and newer chemicals, while trying to do effortful practice at long-range goals.
A billionaire gives Ray a huge grant. I get into a new program he constructs, where we have like an adult-Montessori environment of “Baba is You”-like problems and GPQA-like problems, according to an iterated schedule designed to keep you at the perfect level of interest and difficulty.
I get uploaded into a computer, and can start adjusting the “virtual chemistry” of my brain (at first) to learn effectively, but then can start altering myself any way I wish. I can—if I wish—spawn parent versions of myself with my un-edited brain, in case my values start drifting.
Like the above upload, but without being able to spawn parent versions that supervise for value drift.
Like the above upload, but I’m like, one of a huge society of versions of me who can eject those who drift too far, Survivor-style.
A billionaire gives Ray a huge grant, and separately Ray bitflips into evil-no-deontology-Ray because of errant cosmic radiation. He kidnaps me and several other people, makes us wear shock-collars, and has us do “Baba is You” planning and numerous other challenges, just at 3x the intensity of the prior program, because this is the best way to save the world. (He doesn’t shock us tooo much; that wouldn’t be effective.)
Etc etc etc.
Even granting that all these scenarios might result in greater capability—which I think is at least possible*—I expect that all these scenarios would result in me having very different degrees of coherence, capability profiles, corrigibility, and so on.
And like, my overall belief is that (1) reasoning about intelligence “in the limit” seems like reasoning about all the scenarios above at once. But whatever beliefs I have about intelligence in the limit are—generally—causally screened off once I contemplate the concrete details of the above scenarios; the actual feedback loops, the actual data. And I similarly expect whatever beliefs I have about AI intelligence in the limit to be causally screened off once I contemplate the details of whatever process produces the AI.
Put alternately: Intelligence “in the limit” implies that you’ve executed an iterative update process for that intelligence many times. But there are many such iterative update processes! It seems clear (?!?) that they can converge on surprisingly different areas of competence, even for identical underlying architectures. If you can explain what happens in the limit at iteration 10,000, you should be able to at least say something that holds universally about iteration 100, but… I’m not sure what that would be.
I’m a little dissatisfied with the above but I hope it at least gets across why I feel like “in the limit” is vague / underspecified to me.
I think it’s actually like, drawing on a math metaphor but without the underlying precision that makes the math actually work? So I think it sort of creates a mental “blank space” in one’s map, which then gets filled in with whatever various notions one has about intelligence drawn from a variety of sources, in a kind of analogical ad-hoc fashion. And that something like that process (????) is what implies doom.
Maybe it would have been better to talk about why great power does not imply very high coherence idk.
One thing I maybe should note: I don’t think Yudkowsky ever actually said “in the limit” per se, that was me glossing various things he said, and I’m suddenly worried about subtle games of telephone about whatever he meant.
Another thing I thought of reading this (and maybe @johnswentworth’s Framing Practicum finally paying off) is that a better word than “limit” might be “equilibrium.”
i.e. this isn’t (necessarily) about “there is some f(x), where if you dial up X from 10 to 11 to 100 to 10,000, you expect f(x) to approach some limit”. A different angle of looking at it is “what are the plausible stable equilibria that a mind could end up in, or the solar-system-system could end up in?”
A system reaching equilibrium involves multiple forces pushing on stuff and interacting with each other, until they settle into a shape where it’s hard to really move the outcome – until something new shocks the system.
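(A toy sketch of the distinction, my own framing rather than a model of any real system: a limit asks what f(x) does as x grows; an equilibrium asks where the opposing pushes settle, i.e. a fixed point the dynamics keep returning to, as in the snippet below.)

```python
# Toy dynamics (entirely made up): each step, a growth push and a damping push
# act on x. They balance at x* = 4.0, a stable equilibrium: wherever you start,
# the system settles there, and it's hard to move the outcome until something
# new shocks it.
def step(x: float) -> float:
    return x + 0.5 * (4.0 - x)   # net effect of the two opposing pushes

x = 100.0                        # start far from equilibrium
for _ in range(50):
    x = step(x)
print(round(x, 3))               # ~4.0
```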
...
Some ~specific things you might care about the equilibrium of:
A. One particular AI mind – given some initial conditions, after somehow achieving a minimum threshold of relentless-creative-resourcefulness and the ability to modify itself and/or its environment, with whatever combo of goals/impulses it turns out to have.
The equilibrium includes “what will the mind end up doing with itself” and also “how will the outside world try to apply pressure to the mind, and how will the mind apply pressure back?”.
B. The human economy/geopolitical-system. Given that there are lots of groups trying to build AI, there’s a clear economic incentive to do so if you don’t believe in doom, and it’s going to get easier over time. (But also, there are reasons for various political factions to oppose this).
Does this eventually produce a mind, with the conditions to kick off the previous point?
C. The collection of AI minds that end up existing, once some of them hit the minimum relentless-creative-resourcefulness necessary to kick off A?
...
But translating back into limits:
Looking at your list of “which of these f(x)s are we talking about?”, the answer is “the humanity meta-system that includes all of B.”
“X” is “human labor + resource capital + time, etc”.
The “F” I’m most focused on is “the process of looking at the current set of AI systems, and asking ‘is there a way to improve how much profit/fame/power we can get out of this?’, and then creatively selecting a thing (such as from your list of things above), and then trying it.”
(It’s also useful to ask “what’s F?” re: a given transformer gradient descent architecture, given a set of training data and a process for generating more training data. But, that’s a narrower question, and most such systems will not be the “It” that would kill everyone if anyone builds it.)
...
Having said that:
“f(x)”, where f is “all human ingenuity focused on building AGI, + all opposed political forces”, is a confusing type, yes.
As I mentioned elsewhere, the “confusing type” is the problem (or, “a” problem). We are inside a “Find the Correct Types” problem. The thing to do when you’re in a Find the Correct Types problem is to bust out your Handle Confusion and Find the Correct Types toolkit.
I am not a Type Theory Catgirl, but, some early steps I’d want to take are:
map out everything I am confused by that seems relevant (see if some confusions dissolve when I look at them)
map out everything important that seems relevant that I’m not confused by
map out at least a few different ways of structuring the problem. (including, maybe this isn’t actually best thought of as a Type Theory problem)
And part of my response to “f(x) is confusing” is to articulate the stuff above, which hopefully narrows down the confusion slightly. But, I’d also say, before getting to the point of articulating the above, “a’ight, seems like the structure here is something like”:
1. AI will probably eventually get built somewhere. It might FOOM. It might take over. And later, evolution might destroy everything we care about. (You might be uncertain about these, and might be confused about some sub-pieces, but I don’t think you-in-particular were confused about this bit.)
2. There will be some processes that take in resources and turn them into more intelligence. [FLAG: confused about what this process is and what inputs it involves. But, call this confusing thing f(x).]
3. There are lots of different possible shapes of f(x); I’m confused about that.
4. But, the reason I care about f(x) is so that I know: a) will a given AI system FOOM or take over? b) is it capable of stopping other things from FOOMing or taking over? and c) is it capable of preventing death-by-evolution, without causing worse side effects?
And #4 is what specifies which possible ways of resolving confusing bits are most useful. It specifically implies we need to be talking about pretty high power levels. However you choose to wrap your brain around it, it somehow needs to eventually help you think about extremely high power levels.
So, like, yep “in the limit” is confusing and underspecified. But, it’s meant to be directing your attention to aspects of the confusingness that are more relevant.