how rare frontier-expanding intelligence is among humans,
On my view, all human children (except in extreme cases, e.g. born without a brain) have this type of intelligence. Children create their conceptual worlds originarily. It’s not literally frontier-expanding because the low-hanging fruit have been picked, but it’s roughly the same mechanism.
Maybe this is a matter of shots-on-goal as much as anything else, and better methods and insights mostly reduce the number of shots on goal needed, pushing hit rates to superhuman levels, rather than expanding the space of possibilities those shots can access.
Yeah, but drawing from the human distribution is very different from drawing from the LP25 distribution. Humans all have the core mechanisms, and then you’re selecting over variation in genetic and developmental brain health, inclination towards certain kinds of thinking, life circumstances enabling thinking, and so on. For LP25, you’re mostly sampling from a very narrow range of Architectures, probably none of which are generally intelligent.
So technically you could set up your laptop to generate a literally random Python script and run it every 5 minutes. Eventually this would create an AGI; you just need more shots on goal. But that tells you basically nothing. “Expanding the space” and “narrowing the search” are actually interchangeable in the relevant sense: by narrowing the search, you expand the richness of variations that are accessible to your search (clustered in the areas you’ve focused on). The size of what you actually explore is roughly fixed (well, by however much compute you have), like an incompressible fluid: squish it in one direction and it bloops out bigger in another direction.
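To make the triviality of raw shots-on-goal concrete, here’s a minimal sketch of the random-script thought experiment: draw uniformly random printable strings and count how many even parse as Python, never mind do anything. The alphabet, script length, and trial count here are arbitrary illustrative choices, not anything from the discussion above.

```python
import random
import string

def random_script(length: int = 200) -> str:
    """Draw a uniformly random string over Python's printable characters."""
    return "".join(random.choice(string.printable) for _ in range(length))

def parses(src: str) -> bool:
    """Check whether the string is even syntactically valid Python."""
    try:
        compile(src, "<random>", "exec")
        return True
    except (SyntaxError, ValueError):
        return False

# On typical runs virtually none of these parse, let alone compute anything,
# let alone constitute an AGI: the search "works" in the limit, but the limit
# tells you nothing about how efficiently the space is being explored.
trials = 100_000
hits = sum(parses(random_script()) for _ in range(trials))
print(f"{hits}/{trials} random scripts even parse")
```

The gap between “parses” and “is an AGI” is the incompressible-fluid point in miniature: without narrowing the search, all your shots land in regions that are overwhelmingly noise.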