I don’t really have an empirical basis for this, but: If you trained something otherwise comparable to, if not current, then near-future reasoning models without any mention of angular momentum, and gave it a context with several different problems to which angular momentum was applicable, I’d be surprised if it couldn’t notice that $\vec{r} \times \vec{p}$ was a common interesting quantity, and then, in an extension of that context, correctly answer questions about it. If you gave it successive problem sets where the sum of that quantity was applicable, the integral, maybe other things, I’d be surprised if a (maybe more powerful) reasoning model couldn’t build something worth calling the ability to correctly answer questions about angular momentum. Do you expect otherwise, and/or is this not what you had in mind?
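(For concreteness, a minimal sketch of the standard quantities being gestured at here: the single-particle angular momentum, its sum over a system of particles, and the integral form for a continuous body.)

$$\vec{L} = \vec{r} \times \vec{p}, \qquad \vec{L}_{\mathrm{tot}} = \sum_i \vec{r}_i \times \vec{p}_i, \qquad \vec{L} = \int \vec{r} \times \vec{v} \, dm$$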
It’s a good question. Looking back at my example, now I’m just like “this is a very underspecified/confused example”. This deserves a better discussion, but IDK if I want to do that right now. In short, the answer to your question is:
I at least would not be very surprised if gippity-seek-o5-noAngular could do what I think you’re describing.
That’s not really what I had in mind, but I had in mind something less clear than I thought. The spirit is about “can the AI come up with novel concepts”, but the issue here is that “novel concepts” are big things, and their material and functioning and history are big and smeared out.
I started writing out a bunch of thoughts, but they felt quite inadequate because I knew nothing about the history of the concept of angular momentum, so I googled around a tiny little bit. The situation seems quite awkward for the angular momentum lesion experiment. What did I “mean to mean” by “scrubbed all mention of stuff related to angular momentum”—presumably this would have to include deleting all subsequent ideas that use angular momentum in their definitions, but e.g. did I also mean to delete the notion of cross product?
It seems like angular momentum was worked on in great detail well before the cross product was developed at all explicitly. See https://arxiv.org/pdf/1511.07748 and https://en.wikipedia.org/wiki/Cross_product#History. Should I still expect gippity-seek-o5-noAngular to notice the idea if it doesn’t have the cross product available? Even if not, what does and doesn’t this imply about this decade’s AI’s ability to come up with novel concepts?
(I’m going to mull on why I would have even said my previous comment above, given that on reflection I believe that “most” concepts are big and multifarious and smeared out in intellectual history. For some more examples of smearedness, see the subsection here: https://tsvibt.blogspot.com/2023/03/explicitness.html#the-axiom-of-choice)
That’s not really what I had in mind, but I had in mind something less clear than I thought. The spirit is about “can the AI come up with novel concepts”,
I think one reason I think the current paradigm is “general enough, in principle” is that I don’t think “novel concepts” is really The Thing. I think creativity / intelligence is mostly about combining concepts; it’s just that really smart people:
a) are faster in raw horsepower and can handle more complexity at a time
b) have a better set of building blocks to combine or apply to make new concepts (which includes building blocks for building better building blocks)
c) have a more efficient search for useful/relevant building blocks (both metacognitive and object-level).
Maybe you believe this, and think “well yeah, it’s the efficient search that’s the important part, which we still don’t actually have a real working version of”?
It seems like the current models have basically all the tools a moderately smart human has, with regards to generating novel ideas, and the thing that they’re missing is something like “having a good metacognitive loop, such that they notice when they’re doing a fake/dumb version of things and course-correct” and “persistently pursuing plans over long time horizons.” And they don’t seem to have zero of either of those, just not enough to get over some hump.
I don’t see what’s missing that a ton of training on a ton of diverse, multimodal tasks + scaffolding + data flywheel isn’t going to figure out.
Differences between people are less directly revelatory of what’s important in human intelligence. My guess is that all or very nearly all human children have all or nearly all the intelligence juice. We just, like, don’t appreciate how much a child is doing in constructing zer world.
the current models have basically all the tools a moderately smart human has, with regards to generating novel ideas
Why on Earth do you think this? (I feel like I’m in an Asch Conformity test, but with really really high production value. Like, after the experiment, they don’t tell you what the test was about. They let you take the card home. On the walk home you ask people on the street, and they all say the short line is long. When you get home, you ask your housemates, and they all agree, the short line is long.)
I don’t see what’s missing that a ton of training on a ton of diverse, multimodal tasks + scaffolding + data flywheel isn’t going to figure out.
My response is in the post.