Evolution has found (sometimes multiple times) the camera, general intelligence, nanotech, electronavigation, aerial endurance better than any drone, bodies more flexible than any human-made robot, highly efficient photosynthesis, etc.
First of all let’s answer another question: why didn’t evolution evolve the wheel like the alien wheeled elephants in His Dark Materials?
Is it biologically impossible to evolve?
Well, technically, the flagella of various bacteria are proper wheels, spun by rotary molecular motors.
No, the likely answer is that wheels are great when you have roads and suck when you don't. Roads are built by ants to some degree, but on the whole they probably don't make sense for a species of animal intelligence.
Aren’t there animals that use projectiles?
Hold up. Is it actually true that there is not a single animal with a gun, harpoon or other projectile weapon?
Porcupines have quills, some snakes spit venom, and the archerfish spits jets of water to knock insects off leaves before eating them. Bombardier beetles can produce an explosive chemical mixture. Skunks use other chemicals. Some snails shoot harpoons at very close range. There is a crustacean (the pistol shrimp) that can snap its claw so quickly it creates a shockwave that stuns fish. Octopuses use ink. Goliath birdeater spiders fling urticating hairs. Electric eels shoot electricity, etc.
Maybe there isn’t an incentive gradient? The problem with this argument is that the same argument can be made for lots and lots of abilities that animals have developed, often multiple times: flight, the camera eye, a nervous system.
But flight has intermediate forms: gliding mammals (colugos, flying squirrels), flying fish.
Except, I think there are lots of intermediate forms for guns & harpoons too:
There are animals with quills. It’s only a small number of steps from having quills that you release when attacked to actively shooting and aiming those quills. Why didn’t Evolution evolve Hydralisks? For many other examples, see the list above.
In a Galaxy far far away
I think it is plausible that the reason animals don’t have guns is simply an accident. Somewhere in the vast expanses of space, circling a dim sun-like star, the water-bearing planet Hiram Maxim is teeming with life. Nothing like an intelligent species has evolved there, yet its many lifeforms sport a wide variety of highly effective projectile weapons. Indeed, as a result of the evolutionary arms race, the majority of larger lifeforms have some form of projectile weapon. The savannahs sport gazelle-like herbivores evading sniper-gun-equipped predators.
Many parsecs away is the planet Big Bertha, a world embroiled in permanent biological trench warfare. More than 95% of the biomass of animals larger than a mouse is taken up by members of just four genera of eusocial gun-equipped species and their domesticates. Yet the individual intelligence of members of these species doesn’t exceed that of a cat.
The largest of the four genera builds massive dams like beavers, practices husbandry of various domesticated species and agriculture, and engages in massive warfare against rival colonies using projectile harpoons that grow from their limbs. Yet all of this is biological, not technological: the behaviours and abilities are evolved rather than learned. There is not a single species whose intelligence rivals that of a great ape, either individually or collectively.
My naive hypothesis: once you’re able to launch a projectile at a predator or prey hard enough to break skin or shell, it’s vastly cheaper to tip the projectile with venom than to launch it fast enough to meaningfully raise the probability that the adversary dies quickly.
My completely naive guess would be that venom is mostly too slow for creatures of this size compared with gross physical damage and blood loss, and that getting close enough to set claws on the target is the hard part anyway. Venom seems more useful as a defensive or retributive mechanism than a hunting one.
Most uses of projected venom or other unpleasant substance seem to be defensive rather than offensive. One reason for this is that it’s expensive to make the dangerous substance, and throwing it away wastes it. This cost is affordable if it is used to save your own life, but not easily affordable to acquire a single meal. This life vs meal distinction plays into a lot of offense/defense strategy expenses.
For the hunting options, usually they are also useful for defense. The hunting options all seem cheaper to deploy: punching mantis shrimp, electric eel, fish spitting water...
My guess is that it’s mostly a question of whether the intermediate steps to the evolved behavior are themselves advantageous. Having a path of consistently advantageous steps makes it much easier for something to evolve; having to cross a trough of worse-in-the-short-term makes it much less likely. A projectile fired weakly is pure cost (energy to fire, energy to produce the firing mechanism, energy to produce the projectile, energy to maintain the complexity of the whole system despite it not being useful yet). Where’s the payoff of a weakly fired projectile? Humans can jump that gap by intuiting that a faster projectile would be more effective. Evolution doesn’t get to extrapolate and plan like that.
Jellyfish have nematocysts, which is a spear on a rope, with poison on the tip. The spear has barbs, so when it goes in, it sticks. Then the jellyfish pulls in its prey. The spears are microscopic, but very abundant.
Yes, but I think snake fangs and jellyfish nematocysts are a slightly different type of weapon. Much more targeted application of venom. If the jellyfish squirted their venom as a cloud into the water around them when a fish came near, I expect it would not be nearly as effective per unit of venom.
As a case where both are present: the spitting cobra uses its fangs to inject venom into its prey. However, when threatened, it can instead (wastefully) spray its venom towards the eyes of an attacker. (The venom has little effect on unbroken mammal skin, but can easily blind if it gets into the eyes.)
Fair argument
I guess where I’m lost is that I feel I can make the same “no competitive intermediate forms” argument for all kinds of wondrous biological forms and functions that have evolved, e.g. the nervous system.
Indeed, this kind of argument used to be a favorite for ID advocates.
There are lots of excellent applications for even very simple nervous systems. The simplest surviving nervous systems are those of jellyfish. They form a ring of coupled oscillators around the periphery of the organism. Their goal is to synchronize muscular contraction so the bell of the jellyfish contracts as one, to propel the jellyfish efficiently. If the muscles contracted independently, it wouldn’t be nearly as good.
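The ring-of-coupled-oscillators mechanism can be sketched with a toy Kuramoto-style simulation. Everything below (ring size, coupling strength, frequency spread) is an illustrative choice, not a biological measurement:

```python
import math
import random

def simulate_ring(n=10, coupling=3.0, steps=3000, dt=0.01, seed=0):
    """Toy Kuramoto model: n oscillators on a ring, each coupled only to
    its two neighbours. Returns the order parameter r in [0, 1]
    (0 = incoherent firing, 1 = the whole 'bell' contracting as one)."""
    rng = random.Random(seed)
    # Slightly mismatched natural frequencies, like imperfect pacemaker cells.
    freqs = [1.0 + 0.1 * rng.gauss(0, 1) for _ in range(n)]
    # Start loosely aligned (spread of about one radian).
    phases = [rng.uniform(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        # Synchronous Euler update; each cell is pulled toward its neighbours.
        phases = [
            p + dt * (freqs[i]
                      + coupling * math.sin(phases[(i - 1) % n] - p)
                      + coupling * math.sin(phases[(i + 1) % n] - p))
            for i, p in enumerate(phases)
        ]
    # Order parameter: magnitude of the mean unit phase vector.
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

print(simulate_ring(coupling=0.0))  # uncoupled: frequency mismatch smears phases out
print(simulate_ring(coupling=3.0))  # coupled: phases lock, r close to 1
```

With the coupling on, the mismatched oscillators pull each other into phase and the "bell" contracts as one; with it off, their frequency differences drift them apart.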
Any organism with eyes will profit from having a nervous system to connect the eyes to the muscles. There’s a fungus with eyes and no nervous system, but as far as I know, every animal with eyes also has a nervous system. (The fungus in question is Pilobolus, which uses its eye to aim a gun. No kidding!)
Another huge missed opportunity is thermal vision. Thermal infrared vision is a gigantic boon for hunting at night, and you might expect e.g. owls and hawks to use it to spot prey hundreds of meters away in pitch darkness, but no animals do (some have thermal sensing, but only at extremely short range).
Snakes have thermal vision, using pits on their cheeks to form pinhole cameras. It pays to be cold-blooded when you’re looking for nice hot mice to eat.
If you are warm, any warmth-detectors inside your body will mostly detect you. Imagine if the blood vessels in your own eye radiated in the visible spectrum with the same intensity as a daylight environment.
It’s possible to filter out a constant high value, but not a high level of noise. Unfortunately, warmth = random vibration = noise. If you want a low-noise thermal camera, you have to cool the detector, or only look for very hot things, like engine flares. Fighter planes do both.
Animals do have guns. Humans are animals. Humans have guns. Evolution made us, we made guns, therefore guns indirectly exist because of evolution.
Or do you mean “why don’t animals have something like guns but permanently attached to them instead of regular guns?” There, I’d start with wondering why humans prefer to have our guns separate from our bodies, compared to affixing them permanently or semi-permanently to ourselves. All the drawbacks of choosing a permanently attached gun would also disadvantage a hypothetical creature that got the accessory through a longer, slower selection process.
Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental.
Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic “Seeing like a State” problems. It constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to a constant political tug-of-war between different interest groups, poisoning objectivity.
I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on:
Novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, its concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
I’m pretty skeptical of this and think we need data to back up such a claim. However there might be bias: when anyone makes a serendipitous discovery it’s a better story, so it gets more attention. Has anyone gone through, say, the list of all Nobel laureates and looked at whether their research would have seemed promising before it produced results?
Thanks for your skepticism, Thomas. Before we get into this, I’d like to make sure we actually disagree.
My position is not that scientific progress is mostly due to plucky outsiders who are ignored for decades. (I feel something like this is a popular view on LW). Indeed, I think most scientific progress is made through pretty conventional (academic) routes.
I think one can predict that future scientific progress will likely be made by young smart people at prestigious universities and research labs, specializing in fields that have good feedback loops and/or have historically made a lot of progress: physics, chemistry, medicine, etc.
My contention is that beyond very broad predictive factors like this, judging whether a research direction is fruitful is hard and requires inside knowledge. Much of this knowledge is illegible and difficult to attain because it builds on a lot of specialized knowledge.
Do you disagree with this?
I do think that novel research is inherently illegible.
Here are some thoughts on your comment:
1. Before getting into your Nobel prize proposal, I’d like to caution against hindsight bias (for obvious reasons).
And perhaps to some degree I’d like to argue the burden of proof should be on the converse: show me evidence that scientific progress is very legible.
In some sense, predicting which directions will be fruitful is a bet against the (efficient?) scientific market.
I also agree the amount of prediction one can do will vary a lot. Indeed, it was itself an innovation (e.g. Thomas Edison and his lightbulbs!) that some kinds of scientific and engineering progress could be systematized: the discovery of R&D.
I think this works much better for certain domains than for others, and to a large degree the ‘harder’ and more ‘novel’ the problem is, the more labs defer ‘illegibly’ to the inside knowledge of researchers.
I guess I’m not sure what you mean by “most scientific progress,” and I’m missing some of the history here, but my sense is that importance-weighted science happens proportionally more outside of academia. E.g., Einstein did his miracle year outside of academia (and later stated that he wouldn’t have been able to do it, had he succeeded at getting an academic position), Darwin figured out natural selection, and Carnot figured out the Carnot cycle, all mostly on their own, outside of academia. Those are three major scientists who arguably started entire fields (quantum mechanics, biology, and thermodynamics). I would anti-predict that future scientific progress, of the field-founding sort, comes primarily from people at prestigious universities, since they, imo, typically have some of the most intense gatekeeping dynamics which make it harder to have original thoughts.
I do wonder to what degree that may be biased by the fact that there were vastly fewer academic positions before WWI/WWII. In the time of Darwin and Carnot these positions virtually didn’t exist. In the time of Einstein they existed but were still quite rare.
How many examples do you know of this happening past WWII?
Shannon was at Bell Labs iirc
As a counterexample of field-founding happening in academia: Gödel, Church, and Turing were all in academia.
Oh, I actually 70% agree with this. I think there’s an important distinction between legibility to laypeople vs legibility to other domain experts. Let me lay out my beliefs:
In the modern history of fields you mentioned, more than 70% of discoveries are made by people trying to discover the thing, rather than serendipitously.
Other experts in the field, if truth-seeking, are able to understand the theory of change behind the research direction without investing huge amounts of time.
In most fields, experts and superforecasters informed by expert commentary will have fairly strong beliefs about which approaches to a problem will succeed. The person working on something will usually have less than 1 bit of advantage over the experts about whether their framework will be successful, unless they have private information (e.g. they already did the crucial experiment). This is the weakest belief and I could probably be convinced otherwise just by anecdotes.
The successful researchers might be confident they will succeed, but unsuccessful ones could be almost as confident on average. So it’s not that the research is illegible, it’s just genuinely hard to predict who will succeed.
People often work on different approaches to the problem even if they can predict which ones will work. This could be due to irrationality, other incentives, diminishing returns to each approach, comparative advantage, etc.
If research were illegible to other domain experts, I think you would not really get Kuhnian paradigms, which I am pretty confident exist. Paradigm shifts mostly come from the track record of an approach, so maybe this doesn’t count as researchers having an inside view of others’ work though.
Thank you, Thomas. I believe we find ourselves in broad agreement.
The distinction you make between lay-legibility and expert-legibility is especially well-drawn.
One point: the confidence of researchers in their own approach may not be the right thing to look at. Perhaps a better measure is seeing who can not only predict that their own approach will succeed but also explain in detail why other approaches won’t work. Anecdotally, very successful researchers have a keen sense of what will work out and what won’t; in private conversation, many are willing to share detailed models of why other approaches will not work or are not as promising. I’d have to think about this more carefully, but anecdotally the most successful researchers have many bits of information over their competitors, not just one or two.
(Note that one bit of information means that their entire advantage could be wiped out by answering a single Y/N question. Not impossible, but not typical for most cases)
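One standard way to cash out “bits of advantage” is as a log-score difference between two probabilistic forecasts of the same yes/no outcome. A minimal sketch, with purely hypothetical probabilities:

```python
import math

def bits_of_advantage(p_a, p_b, success):
    """Log-score edge, in bits, of forecast p_a over forecast p_b on a
    single yes/no outcome (success: did the approach pan out?)."""
    def logscore(p):
        return math.log2(p if success else 1.0 - p)
    return logscore(p_a) - logscore(p_b)

# Hypothetical numbers: an insider gives their direction 90%, outside
# experts give it 60%, and it succeeds.
print(bits_of_advantage(0.9, 0.6, True))   # log2(0.9/0.6), about 0.58 bits
# Had it failed, the insider's overconfidence would have cost bits instead.
print(bits_of_advantage(0.9, 0.6, False))  # log2(0.1/0.4), about -2 bits
```

On this operationalization, a researcher with “many bits” over the field is one who racks up a large cumulative log-score edge across many such predictions, not just one lucky call.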
What areas of science are you thinking of? I think the discussion varies dramatically.
I think allowing less legibility would help make science less plodding, and allow it to move in larger steps. But there’s also a question of what direction it’s plodding. The problem I saw with psych and neurosci was that it tended to plod in nearly random, not very useful directions.
And what definition of “smart”? I’m afraid that by a common definition, smart people tend to do dumb research, in that they’ll do galaxy brained projects that are interesting but unlikely to pay off. This is how you get new science, but not useful science.
In cognitive psychology and neuroscience, I want to see money given to people who are both creative and practical. They will do new science that is also useful.
In psychology and neuroscience, scientists pick the grantees, and they tend to give money to those whose research they understand. This produces an effect where research keeps following one direction that became popular long ago. I think a different method of granting would work better, but the particular method matters a lot.
Thinking about it a little more, having a mix of personality types involved would probably be useful. I always appreciated the contributions of the rare philosopher who actually learned enough to join a discussion about psych or neurosci research.
I think the most important application of meta science theory is alignment research.
Novel research is inherently illegible. If it were legible, someone else would have already pursued it.
It might also be that a legible path would be low status to pursue in the existing scientific communities and thus nobody pursues it.
If you look for low-hanging fruit that went unpicked for a long time, airborne transmission of many viruses, like the common cold, is a good example. There’s nothing illegible about it.
The core reason for holding this belief is that the world does not look to me like there’s little low-hanging fruit in a variety of domains of knowledge I have thought about over the years. Of course, it’s generally not easy to argue for the value of ideas that the mainstream does not care about publicly.
I find it curious that none of my ideas have a following in academia or have been reinvented/rediscovered by academia (including the most influential ones so far UDT, UDASSA, b-money). Not really complaining, as they’re already more popular than I had expected (Holden Karnofsky talked extensively about UDASSA on an 80,000 Hour podcast, which surprised me), it just seems strange that the popularity stops right at academia’s door.
If you look at the broader field of rationality, the work of Judea Pearl and that of Tetlock both could have been done twenty years earlier. Conceptually, I think you can argue that their work was some of the most important work that was done in the last decades.
Judea Pearl writes about how allergic people were to the idea of factoring in counterfactuals and causality.
I’ve long been a skeptic of scaling LLMs to AGI*. I fundamentally don’t understand how this is even possible. It must be said that very smart people give this view credence: davidad, dmurfet. On the other side are Vanessa Kosoy and Steven Byrnes. When pushed, proponents don’t actually defend the position that a large enough transformer will create nanotech or even obsolete their job. They usually mumble something about scaffolding.
I won’t get into this debate here, but I do want to note that my timelines have lengthened, primarily because some of the never-clearly-stated but heavily implied AI developments predicted by proponents of very short timelines have not materialized. To be clear, it has only been a year since gpt-4 was released, and gpt-5 is around the corner, so perhaps my hope is premature. Still, my timelines are lengthening.
A year ago, when gpt-3 came out, progress was blindingly fast. Part of short timelines came from a sense of ‘if we got surprised so hard by gpt-2 to gpt-3, we are completely uncalibrated; who knows what comes next’.
People seemed surprised by gpt-4 in a way that seemed uncalibrated to me. gpt-4 performance was basically in line with what one would expect if the scaling laws continued to hold. At the time it was already clear that the only really important drivers were compute and data, and that we would run out of both shortly after gpt-4. Scaling proponents suggested this was only the beginning, that there was a whole host of innovation that would be coming. Whispers of mesa-optimizers and simulators.
One year in: Chain-of-thought doesn’t actually improve things that much. External memory and super context lengths ditto. A whole list of proposed architectures seem to serve solely as a paper mill. Every month there is new hype about the latest LLM or image model. Yet they never deviate from expectations based on simple extrapolation of the scaling laws. There is only one thing that really seems to matter and that is compute and data. We have about 3 more OOMs of compute to go. Data may be milked another OOM.
A big question will be whether gpt-5 will suddenly make agentGPT work ( and to what degree). It would seem that gpt-4 is in many ways far more capable than (most or all) humans yet agentGPT is curiously bad.
All in all, AI progress** is developing according to naive extrapolations of the Scaling Laws, but nothing beyond that. The breathless Twitter hype about new models is still there, but it seems to be believed at a simulacra level higher than I can parse.
Does this mean we’ll hit an AI winter? No. In my model there may be only one remaining roadblock to ASI (and I suspect I know what it is). That innovation could come at anytime. I don’t know how hard it is, but I suspect it is not too hard.
* the term AGI seems to denote vastly different things to different people in a way I find deeply confusing. I notice that the thing that I thought everybody meant by AGI is now being called ASI. So when I write AGI, feel free to substitute ASI.
** or better, AI congress
addendum: since I’ve been quoted in dmurfet’s AXRP interview as believing that there are certain kinds of reasoning that cannot be represented by transformers/LLMs I want to be clear that this is not really an accurate portrayal of my beliefs. e.g. I don’t think transformers don’t truly understand, are just a stochastic parrot, or in other ways can’t engage in the abstract reasoning that humans do. I think this is clearly false, as seen by interacting with any frontier model.
Wasn’t the surprising thing about GPT-4 that scaling laws did hold? Before this many people expected scaling laws to stop before such a high level of capabilities. It doesn’t seem that crazy to think that a few more OOMs could be enough for greater than human intelligence. I’m not sure that many people predicted that we would have much faster than scaling law progress (at least until ~human intelligence AI can speed up research)? I think scaling laws are the extreme rate of progress which many people with short timelines worry about.
To some degree yes, they were not guaranteed to hold. But by that point they held for over 10 OOMs iirc and there was no known reason they couldn’t continue.
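The “no known reason they couldn’t continue” intuition comes from the functional form: a Chinchilla-style parametric loss curve is a smooth power law with no built-in stopping point. A sketch below; the coefficients are roughly the published Chinchilla fit, quoted from memory and best treated as approximate:

```python
# Chinchilla-style parametric scaling law: loss = E + A/N^a + B/D^b.
# Coefficient values are roughly the published Chinchilla fit (hedged:
# quoted from memory, for illustration only).
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# A smooth power law keeps paying out: each jump in scale buys a
# predictable further drop toward the irreducible term E.
for i in range(5):
    n = 1e9 * 10**i     # parameters
    d = 2e10 * 10**i    # tokens (~20 tokens per parameter)
    print(f"n={n:.0e}, d={d:.0e}: loss ~ {loss(n, d):.3f}")
```

The loss decreases monotonically toward E as both axes scale, which is exactly why continuation, not breakdown, was the default extrapolation.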
This might be the particular twitter bubble I was in but people definitely predicted capabilities beyond simple extrapolation of scaling laws.
When pushed proponents don’t actually defend the position that a large enough transformer will create nanotech
Can you expand on what you mean by “create nanotech?” If improvements to our current photolithography techniques count, I would not be surprised if (scaffolded) LLMs could be useful for that. Likewise for getting bacteria to express polypeptide catalysts for useful reactions, and even maybe figure out how to chain several novel catalysts together to produce something useful (again, referring to scaffolded LLMs with access to tools).
If you mean that LLMs won’t be able to bootstrap from our current “nanotech only exists in biological systems and chip fabs” world to Drexler-style nanofactories, I agree with that, but I expect things will get crazy enough that I can’t predict them long before nanofactories are a thing (if they ever are).
or even obsolete their job
Likewise, I don’t think LLMs can immediately obsolete all of the parts of my job. But they sure do make parts of my job a lot easier. If you have 100 workers that each spend 90% of their time on one specific task, and you automate that task, that’s approximately as useful as fully automating the jobs of 90 workers. “Human-equivalent” is one of those really leaky abstractions—I would be pretty surprised if the world had any significant resemblance to the world of today by the time robotic systems approached the dexterity and sensitivity of human hands for all of the tasks we use our hands for, whereas for the task of “lift heavy stuff” or “go really fast” machines left us in the dust long ago.
Iterative improvements on the timescale we’re likely to see are still likely to be pretty crazy by historical standards. But yeah, if your timelines were “end of the world by 2026” I can see why they’d be lengthening now.
My timelines were not 2026. In fact, I made bets against doomers 2-3 years ago, one will resolve by next year.
I agree iterative improvements are significant. This falls under “naive extrapolation of scaling laws”.
By nanotech I mean something akin to Drexlerian nanotech or something similarly transformative in its vicinity. I think it is plausible that a true ASI will be able to make rapid progress on nanotech (perhaps on the order of a few years or a decade).
I suspect that people who don’t take this as a serious possibility haven’t really thought through what AGI/ASI means and what the limits and drivers of science and tech really are; I suspect they are simply falling prey to status-quo bias.
With scale, there is visible improvement in difficulty of novel-to-chatbot ideas/details that is possible to explain in-context, things like issues with the code it’s writing. If a chatbot is below some threshold of situational awareness of a task, no scaffolding can keep it on track, but for a better chatbot trivial scaffolding might suffice. Many people can’t google for a solution to a technical issue, the difference between them and those who can is often subtle.
So modest amount of scaling alone seems plausibly sufficient for making chatbots that can do whole jobs almost autonomously. If this works, 1-2 OOMs more of scaling becomes both economically feasible and more likely to be worthwhile. LLMs think much faster, so they only need to be barely smart enough to help with clearing those remaining roadblocks.
At this moment in time, it seems scaffolding tricks haven’t really improved the baseline performance of models that much. Overwhelmingly, the capability comes down to whether the RLHFed base model can do the task.
it seems scaffolding tricks haven’t really improved the baseline performance of models that much. Overwhelmingly, the capability comes down to whether the RLHFed base model can do the task.
That’s what I’m also saying above (in case you are stating what you see as a point of disagreement). This is consistent with scaling-only short timeline expectations. The crux for this model is current chatbots being already close to autonomous agency and to becoming barely smart enough to help with AI research. Not them directly reaching superintelligence or having any more room for scaling.
What I don’t get about this position:
If it was indeed just scaling, what’s AI research for? There would be nothing to discover; just scale more compute. Sure, you can maybe improve the speed of deploying compute a little, but at its core it seems like a story that’s in conflict with itself.
My view is that there’s huge algorithmic gains in peak capability, training efficiency (less data, less compute), and inference efficiency waiting to be discovered, and available to be found by a large number of parallel research hours invested by a minimally competent multimodal LLM powered research team. So it’s not that scaling leads to ASI directly, it’s:
1. Scaling leads to brute-forcing the LLM agent across the threshold of AI research usefulness.
2. Using these LLM agents in a large research project can lead to rapidly finding better ML algorithms and architectures.
3. Training these newly discovered architectures at large scales leads to much more competent automated researchers.
4. This process repeats quickly over a few months or years.
5. This process results in AGI.
6. AGI, if instructed (or allowed, if it’s agentically motivated on its own to do so) to improve itself, will find even better architectures and algorithms.
7. This process can repeat until ASI. The resulting intelligence / capability / inference speed goes far beyond that of humans.
Note that this process isn’t inevitable, there are many points along the way where humans can (and should, in my opinion) intervene. We aren’t disempowered until near the end of this.
Here are two arguments for low-hanging algorithmic improvements.
First, in the past few years I have read many papers containing low-hanging algorithmic improvements. Most such improvements are a few percent or tens of percent. The largest such improvements are things like transformers or mixture of experts, which are substantial steps forward. Such a trend is not guaranteed to persist, but that’s the way to bet.
Second, existing models are far less sample-efficient than humans. We receive about a billion tokens growing to adulthood. The leading LLMs get orders of magnitude more than that. We should be able to do much better. Of course, there’s no guarantee that such an improvement is “low hanging”.
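The size of that gap is worth making explicit with rough round numbers (both figures below are illustrative orders of magnitude, not measurements of any specific model):

```python
import math

human_tokens = 1e9    # rough lifetime language exposure, per the estimate above
llm_tokens = 1.5e13   # illustrative round number for a leading LLM's
                      # training set, not any specific model's figure

gap = llm_tokens / human_tokens
print(f"LLMs train on ~{gap:,.0f}x more tokens, "
      f"a gap of ~{math.log10(gap):.1f} orders of magnitude")
```

A four-OOM-ish sample-efficiency gap is the headroom the argument points at, with no guarantee that closing any of it is low-hanging.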
We receive about a billion tokens growing to adulthood. The leading LLMs get orders of magnitude more than that. We should be able to do much better.
Capturing this would probably be a big deal, but a counterpoint is that compute necessary to achieve an autonomous researcher using such sample efficient method might still be very large. Possibly so large that training an LLM with the same compute and current sample-inefficient methods is already sufficient to get a similarly effective autonomous researcher chatbot. In which case there is no effect on timelines. And given that the amount of data is not an imminent constraint on scaling, the possibility of this sample efficiency improvement being useless for the human-led stage of AI development won’t be ruled out for some time yet.
The best method of improving sample efficiency might be more like AlphaZero. The simplest method that’s more likely to be discovered might be more like training on the same data over and over with diminishing returns. Since we are talking low-hanging fruit, I think it’s reasonable that first forays into significantly improved sample efficiency with respect to real data are not yet much better than simply using more unique real data.
I would be genuinely surprised if training a transformer on the pre-2014 human Go data over and over would lead it to spontaneously develop AlphaZero capacity.
I would expect it to do what it is trained to: emulate / predict as best as possible the distribution of human play.
To some degree I would anticipate the transformer might develop some emergent ability that makes it slightly better than the human Go equivalent of Magnus Carlsen, as we’ve seen in other cases, but I’d be surprised if this were unbounded. That is simply not what the training signal is.
We start with an LLM trained on 50T tokens of real data, however capable it ends up being, and ask how to reach the same level of capability with synthetic data. If it takes more than 50T tokens of synthetic data, then it was less valuable per token than real data.
But at the same time, 500T tokens of synthetic data might train an LLM more capable than if trained on the 50T tokens of real data for 10 epochs. In that case, synthetic data helps with scaling capabilities beyond what real data enables, even though it’s still less valuable per token.
With Go, we might just be running into the contingent fact of there not being enough real data to be worth talking about, compared with LLM data for general intelligence. If we run out of real data before some threshold of usefulness, synthetic data becomes crucial (which is the case with Go). It’s unclear if this is the case for general intelligence with LLMs, but if it is, then there won’t be enough compute to improve the situation unless synthetic data also becomes better per token, and not merely mitigates the data bottleneck and enables further improvement given unbounded compute.
I would be genuinely surprised if training a transformer on the pre-2014 human Go data over and over would lead it to spontaneously develop AlphaZero capacity.
I expect that if we could magically sample much more pre-2014 unique human Go data than was actually generated by actual humans (rather than repeating the limited data we have), from the same platonic source and without changing the level of play, then it would be possible to cheaply tune an LLM trained on it to play superhuman Go.
I don’t know what you mean by ‘general intelligence’ exactly but I suspect you mean something like human+ capability in a broad range of domains.
I agree LLMs will become generally intelligent in this sense when scaled, arguably even are, for domains with sufficient data.
But that’s kind of the kicker, right? Cavemen didn’t have the whole internet to learn from, yet somehow did something that not even you seem to claim LLMs will be able to do: create the (data of the) Internet.
(Your last claim seems surprising. Pre-2014 games don’t come close to the Elo of AlphaZero. So a next-token predictor would be trained to simulate a human player up to ~2800, not 3200+.)
When I brought up sample inefficiency, I was supporting Mr. Helm-Burger‘s statement that “there’s huge algorithmic gains in …training efficiency (less data, less compute) … waiting to be discovered”. You’re right of course that a reduction in training data will not necessarily reduce the amount of computation needed. But once again, that’s the way to bet.
a reduction in training data will not necessarily reduce the amount of computation needed. But once again, that’s the way to bet
I’m ambivalent on this. If the analogy between improvement of sample efficiency and generation of synthetic data holds, synthetic data seems reasonably likely to be less valuable than real data (per token). In that case we’d be using all the real data we have anyway, which with repetition is sufficient for up to about $100 billion training runs (we are at $100 million right now). Without autonomous agency (not necessarily at researcher level) before that point, there won’t be investment to go over that scale until much later, when hardware improves and the cost goes down.
My answer to that is currently in the form of a detailed 2 hour lecture with a bibliography that has dozens of academic papers in it, which I only present to people that I’m quite confident aren’t going to spread the details. It’s a hard thing to discuss in detail without sharing capabilities thoughts. If I don’t give details or cite sources, then… it’s just, like, my opinion, man. So my unsupported opinion is all I have to offer publicly. If you’d like to bet on it, I’m open to showing my confidence in my opinion by betting that the world turns out how I expect it to.
The story involves phase changes. Just scaling is what’s likely to be available to human developers in the short term (a few years), it’s not enough for superintelligence. Autonomous agency secures funding for a bit more scaling. If this proves sufficient to get smart autonomous chatbots, they then provide speed to very quickly reach the more elusive AI research needed for superintelligence.
It’s not a little speed, it’s a lot of speed, serial speedup of about 100x plus running in parallel. This is not as visible today, because current chatbots are not capable of doing useful work with serial depth, so the serial speedup is not in practice distinct from throughput and cost. But with actually useful chatbots it turns decades to years, software and theory from distant future become quickly available, non-software projects get to be designed in perfect detail faster than they can be assembled.
In my mainline model there are only a few innovations needed, perhaps only a single big one, to produce an AGI which, just like the Turing Machine sits at the top of the Chomsky Hierarchy, will be basically the optimal architecture given resource constraints. There are probably some minor improvements to do with bridging the gap between the theoretically optimal architecture and the actual architecture, or parts of the algorithm that can be indefinitely improved but with diminishing returns (these probably exist due to Levin, and possibly matrix multiplication is one of these). On the whole I expect AI research to be very chunky.
Indeed, we’ve seen that there was really just one big idea behind all current AI progress: scaling, specifically scaling GPUs on maximally large undifferentiated datasets. There were some minor technical innovations needed to pull this off but on the whole that was the clincher.
Of course, I don’t know. Nobody knows. But I find this the most plausible guess based on what we know about intelligence, learning, theoretical computer science and science in general.
(Re: Difficult to Parse react on the other comment
I was confused about relevance of your comment above on chunky innovations, and it seems to be making some point (for which what it actually says is an argument), but I can’t figure out what it is. One clue was that it seems like you might be talking about innovations needed for superintelligence, while I was previously talking about possible absence of need for further innovations to reach autonomous researcher chatbots, an easier target. So I replied with formulating this distinction and some thoughts on the impact and conditions for reaching innovations of both kinds. Possibly the relevance of this was confusing in turn.)
There are two kinds of relevant hypothetical innovations: those that enable chatbot-led autonomous research, and those that enable superintelligence. It’s plausible that there is no need for (more of) the former, so that mere scaling through human efforts will lead to such chatbots in a few years regardless. (I think it’s essentially inevitable that there is currently enough compute that with appropriate innovations we can get such autonomous human-scale-genius chatbots, but it’s unclear if these innovations are necessary or easy to discover.) If autonomous chatbots are still anything like current LLMs, they are very fast compared to humans, so they quickly discover remaining major innovations of both kinds.
In principle, even if innovations that enable superintelligence (at scale feasible with human efforts in a few years) don’t exist at all, extremely fast autonomous research and engineering still lead to superintelligence, because they greatly accelerate scaling. Physical infrastructure might start scaling really fast using pathways like macroscopic biotech even if drexlerian nanotech is too hard without superintelligence or impossible in principle. Drosophila biomass doubles every 2 days, small things can assemble into large things.
I don’t recall what I said in the interview about your beliefs, but what I meant to say was something like what you just said in this post, apologies for missing the mark.
State-of-the-art models such as Gemini aren’t LLMs anymore. They are natively multimodal or omni-modal transformer models that can process text, images, speech and video. These models seem to me like a huge jump in capabilities over text-only LLMs like GPT-3.
Chain-of-thought prompting makes models much more capable. In the original paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”, PaLM 540B with standard prompting only solves 18% of problems but 57% of problems with chain-of-thought prompting.
I expect the use of agent features such as reflection will lead to similar large increases in capabilities as well in the near future.
I just asked GPT-4 a GSM8K problem and I agree with your point. I think what’s happening is that GPT-4 has been fine-tuned to respond with chain-of-thought reasoning by default, so it’s no longer necessary to explicitly ask it to reason step-by-step. Though if you ask it to “respond with just a single number” to eliminate the chain-of-thought reasoning, its problem-solving ability is much worse.
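The two prompting styles being compared can be sketched as plain prompt construction. This is only an illustration of the experimental setup, not any real API; the wording of both templates is my own assumption:

```python
# Sketch of the two prompting styles discussed above: eliciting vs. suppressing
# chain-of-thought. The templates are illustrative, not from any paper or API.
def build_prompt(question: str, chain_of_thought: bool) -> str:
    if chain_of_thought:
        # Elicits step-by-step reasoning before the final answer.
        return f"Q: {question}\nA: Let's think step by step."
    # Suppresses the reasoning trace, as in the single-number experiment above.
    return f"Q: {question}\nA: Respond with just a single number."

question = "A farmer has 15 sheep and buys 8 more. How many does she have?"
cot_prompt = build_prompt(question, chain_of_thought=True)
terse_prompt = build_prompt(question, chain_of_thought=False)
```

The observed capability gap then comes entirely from whether the model is allowed to spend tokens on intermediate reasoning before committing to an answer.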
(I thank Dmitry Vaintrob for the idea of encrypted batteries. Thanks to Adam Scholl for the alignment angle. Thanks to the Computational Mechanics crowd at the recent CompMech conference.)
There are no Atoms in the Void just Bits in the Description. Given the right string a Maxwell Demon transducer can extract energy from a heatbath.
Imagine a pseudorandom heatbath + nano-Demon. It looks like a heatbath from the outside but secretly there is a private key string that, when fed to the nano-Demon, allows it to extract lots of energy from the heatbath.
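A toy simulation of the idea, with Szilard-engine-style accounting: the bath emits bits from a keyed PRNG, and a demon banks kT ln 2 per correctly predicted bit and pays the same when wrong. This is my own illustrative sketch (the key strings, the PRNG, and the energy bookkeeping are all assumptions, not real thermodynamics):

```python
import math
import random

# Toy pseudorandom heatbath: bits come from a keyed PRNG. A Szilard-style
# demon gains kT*ln(2) per correctly predicted bit and pays kT*ln(2) per miss.
KT_LN2 = 1.380649e-23 * 300 * math.log(2)  # Joules per bit at 300 K

def bath_bits(key, n):
    """The bath's bit stream: deterministic given the key, random-looking without."""
    rng = random.Random(key)
    return [rng.randint(0, 1) for _ in range(n)]

def extracted_energy(bits, predictions):
    correct = sum(b == p for b, p in zip(bits, predictions))
    wrong = len(bits) - correct
    return (correct - wrong) * KT_LN2

n, key = 10_000, "private-key"
bits = bath_bits(key, n)

# With the key, the demon regenerates the stream and predicts every bit.
with_key = extracted_energy(bits, bath_bits(key, n))

# Without the key it can only guess, and the winnings roughly cancel out.
without_key = extracted_energy(bits, bath_bits("wrong-key", n))

assert with_key == n * KT_LN2
assert abs(without_key) < 0.1 * n * KT_LN2
```

So the same physical object yields ~n·kT ln 2 of work to the keyholder and essentially nothing to anyone else, which is the "encrypted battery" intuition.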
P.S. Beyond the current ken of humanity lies a generalized concept of free energy that describes the generic potential ability or power of an agent to achieve goals. Money, the golden calf of Baal, is one of its many avatars. Could there be ways to encrypt generalized free energy batteries to constrain the user to only use this power for good? It would be like money that could only be spent on good things.
Imagine a pseudorandom heatbath + nano-Demon. It looks like a heatbath from the outside but secretly there is a private key string that, when fed to the nano-Demon, allows it to extract lots of energy from the heatbath.
What would a ‘pseudorandom heatbath’ look like? I would expect most objects to quickly depart from any sort of private key or PRNG. Would this be something like… a reversible computer which shuffles around a large number of blank bits in a complicated pseudo-random order every timestep*, exposing a fraction of them to external access? so a daemon with the key/PRNG seed can write to the blank bits with approaching 100% efficiency (rendering it useful for another reversible computer doing some actual work) but anyone else can’t do better than 50-50 (without breaking the PRNG/crypto) and that preserves the blank bit count and is no gain?
* As I understand reversible computing, you can have a reversible computer which does that for free: if this is something like a very large period loop blindly shuffling its bits, it need erase/write no bits (because it’s just looping through the same states forever, akin to a time crystal), and so can be computed indefinitely at arbitrarily low energy cost. So any external computer which syncs up to it can also sync at zero cost, and just treat the exposed unused bits as if they were its own, thereby saving power.
Yeah, I’m pretty sure you would need to violate Heisenberg uncertainty in order to make this, and then you’d have to keep it in a 0 kelvin cleanroom forever.
A practical locked battery with tamperproofing would mostly just look like a battery.
The EA AI safety strategy has had a large focus on placing EA-aligned people in A(G)I labs. The thinking was that having enough aligned insiders would make a difference on crucial deployment decisions & longer-term alignment strategy. We could say that the strategy is an attempt to corrupt the goal of pure capability advance & making money towards the goal of alignment. This fits into a larger theme that EA needs to get close to power to have real influence.
[See also the large donations EA has made to OpenAI & Anthropic. ]
Whether this strategy paid off… too early to tell.
What has become apparent is that the large AI labs & being close to power have had a strong corrupting influence on EA epistemics and culture.
Many people in EA now think nothing of being paid Bay Area programmer salaries for research or nonprofit jobs.
There has been a huge influx of MBA blabber being thrown around. Bizarrely, EA funds are often giving huge grants to for-profit organizations for which it is very unclear whether they’re really EA-aligned in the long term or just paying lip service. It is highly questionable whether EA should be trying to do venture capitalism in the first place.
There is a questionable trend to equate ML skills & prestige within capabilities work with the ability to do alignment work. EDIT: I haven’t looked at it deeply yet but I’m superficially impressed by CAIS’s recent work; it seems like an eminently reasonable approach. Hendrycks’s deep expertise in capabilities work and scientific track record seem to have been key. In general, EA-adjacent AI safety work has suffered from youth, inexpertise & amateurism, so it makes sense to bring in more world-class expertise. EDIT EDIT: I should be careful in promoting work I haven’t looked at. I have been told by a source I trust that almost nothing in this paper is new and that Hendrycks engages in a lot of very questionable self-promotion tactics.
For various political reasons there has been an attempt to put x-risk AI safety on a continuum with more mundane AI concerns like it saying bad words. This means there is lots of ‘alignment research’ that is at best irrelevant, at worst a form of insidious safetywashing.
The influx of money and professionalization has not been entirely bad. Early EA suffered much more from virtue-signalling spirals and analysis paralysis. Current EA is much more professional, largely for the better.
As a supervisor of numerous MSc and PhD students in mathematics, when someone finishes a math degree and considers a job, the tradeoffs are usually between meaning, income, freedom, evil, etc., with some of the obvious choices being high/low along (relatively?) obvious axes. It’s extremely striking to see young talented people with math or physics (or CS) backgrounds going into technical AI alignment roles in big labs, apparently maximising along many (or all) of these axes!
Especially in light of recent events I suspect that this phenomenon, which appears too good to be true, actually is.
I’m not too concerned about this. ML skills are not sufficient to do good alignment work, but they seem to be very important for like 80% of alignment work and make a big difference in the impact of research (although I’d guess still smaller than whether the application to alignment is good)
The explosion of research in the last ~year is partially due to an increase in the number of people in the community who work with ML. Maybe you would argue that lots of current research is useless, but it seems a lot better than only having MIRI around
The field of machine learning at large is in many cases solving easier versions of problems we have in alignment, and therefore it makes a ton of sense to have ML research experience in those areas. E.g. safe RL is how to get safe policies when you can optimize over policies and know which states/actions are safe; alignment can be stated as a harder version of this where we also need to deal with value specification, self-modification, instrumental convergence etc.
I should have said ‘prestige within capabilities research’ rather than ML skills which seems straightforwardly useful.
The former seems highly corruptive.
There is a questionable trend to equate ML skills with the ability to do alignment work.
I’d arguably say this is good, primarily because I think EA’s AI safety wing was already in danger of becoming unmoored from reality by ignoring key constraints, similar to how early LessWrong before the deep learning era (roughly 2012-2018) turned out to be mostly useless, due to how much everything was stated in a mathematical way without realizing how many constraints and conjectured constraints applied to things like formal provability, for example.
Why am I so bullish on academic outreach? Why do I keep hammering on ‘getting the adults in the room’?
It’s not that I think academics are all Super Smart.
I think rationalists/alignment people correctly ascertain that most professors don’t have much useful to say about alignment & deep learning and often say silly things. They correctly see that much of AI progress is fueled by labs and scale, not ML academia. I am bullish on non-ML academia, especially mathematics, physics and to a lesser extent theoretical CS, neuroscience, and some parts of ML/AI academia. This is because while I think 95% of academia is bad and/or useless, there are Pockets of Deep Expertise. Most questions in alignment are close to existing work in academia in some sense—but we have to make the connection!
A good example is ‘sparse coding’ and ‘compressed sensing’. Lots of mech.interp has been rediscovering some of the basic ideas of sparse coding. But there is vast expertise in academia about these topics. We should leverage these!
Other examples are singular learning theory, computational mechanics, etc
GPT-3 recognizes 50k possible tokens. For a 1000-token context window that means there are (5⋅10^4)^1000 ≈ 10^4700 possible prompts. Astronomically large. If we assume the output of a single run of GPT is 200 tokens, then for each possible prompt there are ≈ 10^940 possible continuations.
GPT-3 is probabilistic, defining for each of the ≈ 10^4700 possible prompts x a distribution q(x) on a set of size ≈ 10^940, in other words a point in a (10^940 − 1)-dimensional space. [1]
Mind-bogglingly large. Compared to these numbers, the amount of training data (roughly 300 billion tokens) and the size of the model (175 billion parameters) seem absolutely puny.
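A two-line sanity check of these counts, working in log10 to avoid astronomically large integers:

```python
import math

# Counting prompts and continuations: 50k tokens, 1000-token prompts,
# 200-token outputs. Work with log10 of the counts.
vocab = 50_000
log10_prompts = 1000 * math.log10(vocab)       # ~4699: about 10^4700 prompts
log10_continuations = 200 * math.log10(vocab)  # ~940: about 10^940 continuations

assert round(log10_prompts) == 4699
assert round(log10_continuations) == 940
```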
I won’t be talking about the data, or ‘overparameterization’, in this short; that is well-explained by Singular Learning Theory. Instead, I will be talking about nonrealizability.
Nonrealizability & the structure of natural data
Recall the setup of (parametric) Bayesian learning: there is a sample space Ω, a true distribution q(x) on Ω, and a parameterized family of probability distributions p(x|w), w ∈ W ⊂ R^d.
It is often assumed that the true distribution is ‘realizable’, i.e. q(x) = p(x|w_0) for some w_0. Seeing the numbers in the previous section, this assumption seems dubious, but the situation becomes significantly easier to analyze, both conceptually and mathematically, when we assume realizability.
Conceptually, if the space of possible true distributions is very large compared to the space of model parameters we may ask: how do we know that the true distribution is in the model (or can be well-approximated by it?).
One answer one hears often is the ‘universal approximation theorem’ (i.e. the Stone-Weierstrass theorem). I’ll come back to this shortly.
Another point of view is that real data sets are actually localized in a very low-dimensional subset of all possible data.[2] Following this road leads to theories of lossless compression, cf. sparse coding and compressed sensing, which are of obvious importance to interpreting modern neural networks.
That is lossless compression, but another side of the coin is lossy compression.
Fractals and lossy compression
GPT-3 has 175 billion parameters, but the space of possible continuations is vastly larger (≈ 10^940). Even if sparse coding implies that the effective dimensionality is much smaller—is it really small enough?
Whenever we have a lower-dimensional subspace W of a higher-dimensional space, there are points y in the higher-dimensional space that are very (even arbitrarily) far from W. This is easy to see in the linear case but also true if W is more like a manifold[3]—the volume of a lower-dimensional space is vanishingly small compared to the higher-dimensional space. It’s a simple mathematical fact that can’t be denied!
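A quick stdlib-only illustration of the volume fact: random points on the unit sphere in R^d sit at nearly maximal distance from any fixed low-dimensional coordinate subspace (the dimensions 10,000 and 100 are arbitrary choices for the demo):

```python
import math
import random

# Distance from a random point on the unit sphere in R^d to the subspace
# spanned by the first k coordinates: this is the norm of the last d-k
# coordinates, which concentrates near sqrt((d-k)/d).
def distance_to_subspace(d, k, rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    tail = math.sqrt(sum(x * x for x in v[k:]))
    return tail / norm  # distance after normalizing onto the unit sphere

rng = random.Random(0)
d, k = 10_000, 100  # a 100-dim "model" inside a 10,000-dim ambient space
samples = [distance_to_subspace(d, k, rng) for _ in range(100)]
mean_dist = sum(samples) / len(samples)

# Nearly every point is at distance ~ sqrt((d-k)/d) ≈ 0.995 from the subspace:
# essentially as far away as a unit vector can be.
assert abs(mean_dist - math.sqrt((d - k) / d)) < 0.01
```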
Unless… W is a fractal.
This is from Marzen & Crutchfield’s “Nearly Maximally Predictive Features and Their Dimensions”. The setup is Crutchfield Computational Mechanics, whose central characters are Hidden Markov Models. I won’t go into the details here [but give it a read!].
The conjecture is the following: a ‘good architecture’ defines a model space W that is effectively a fractal in the much larger-dimensional space Δ_k of realistic data distributions, such that for any possible true distribution q(x) ∈ Δ_k the KL-divergence satisfies min_{w∈W} K(w) = K(w_opt) ≤ ε for some small ε.
Grokking
Phase transitions in loss when varying model size are designated ‘grokking’. We can combine the fractal data manifold hypothesis with an SLT perspective: as we scale up the model, the set of optimal parameters W_opt becomes better and better. It could happen that the model size gets big enough that it includes a whole new phase, meaning a w_opt with radically lower loss K and higher λ.
EDIT: seems I’m confused about the nomenclature. Grokking doesn’t refer to phase transitions in model size, but in training and data size.
EDIT2: Seems I’m not crazy. Thanks to Matt Farugia for pointing me towards this result: neural networks are strongly nonconvex (i.e. fractal)
EDIT: seems to me that there is another point of contention on which universal approximation theorems (Stone-Weierstrass) are misleading. Stone-Weierstrass applies to a subalgebra of the continuous functions. Seems to me that in the natural parameterization ReLU neural networks aren’t a subalgebra of the continuous functions (see also the nonconvexity above).
Very interesting, glad to see this written up! Not sure I totally agree that it’s necessary for W to be a fractal? But I do think you’re onto something.
In particular you say that “there are points y in the larger dimensional space that are very (even arbitrarily) far from W,” but in the case of GPT-4 the input space is discrete, and even in the case of e.g. vision models the input space is compact. So the distance must be bounded.
Plus if you e.g. sample a random image, you’ll find there’s usually a finite distance you need to travel in the input space (in L1, L2, etc) until you get something that’s human interpretable (i.e. lies on the data manifold). So that would point against the data manifold being dense in the input space.
But there is something here, I think. The distance usually isn’t that large until you reach a human interpretable image, and it’s quite easy to perturb images slightly to have completely different interpretations (both to humans and ML systems). A fairly smooth data manifold wouldn’t do this. So my guess is that the data “manifold” is in fact not a manifold globally, but instead has many self-intersections and is singular. That would let it be close to large portions of input space without being literally dense in it. This also makes sense from an SLT perspective. And IIRC there’s some empirical evidence that the dimension of the data “manifold” is not globally constant.
The input and output spaces etc Ω are all discrete but the spaces of distributions Δ(Ω) on those spaces are infinite (but still finite-dimensional).
It depends on what kind of metric one uses, compactness assumptions, etc., whether or not you can be arbitrarily far. I am being rather vague here. For instance, if you use the KL-divergence, then K(q|p_uniform) is always bounded: indeed it equals log|Ω| − H(q), the entropy deficit of the true distribution relative to uniform!
I don’t really know what ML people mean by the data manifold so won’t say more about that.
I am talking about the space W of parameter values of a conditional probability distribution p(x|w).
I think that W having nonconstant local dimension doesn’t seem that relevant since the largest dimensional subspace would dominate?
Self-intersections and singularities could certainly occur here. (i) singularities in the SLT sense have to do with singularities in the level sets of the KL-divergence (or loss function) - don’t see immediately how these are related to the singularities that you are talking about here (ii) it wouldn’t increase the dimensionality (rather the opposite).
The fractal dimension is important basically because of space-filling curves: a space that has a low-dimensional parameterization can nevertheless have a very large effective dimension when embedded fractally into a larger-dimensional space.
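For intuition, the classic Hilbert curve makes this concrete: a single 1-dimensional parameter sweeps within 2^-n of every point of the square, so the image of the parameterization has effective dimension 2. A sketch using the standard index-to-coordinate recurrence (my own illustration of the space-filling point, nothing specific to neural networks):

```python
def hilbert_d2xy(order, d):
    """Map index d in [0, 4**order) to cell (x, y) of a 2**order x 2**order
    grid along the Hilbert curve (standard bit-twiddling recurrence)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 4
n = 1 << order
cells = [hilbert_d2xy(order, d) for d in range(n * n)]

# The 1-d index visits every cell of the 16x16 grid exactly once...
assert len(set(cells)) == n * n
# ...stepping to an adjacent cell each time, so every point of the unit
# square lies within ~1/n of the curve.
assert all(abs(x1 - x2) + abs(y1 - y2) == 1
           for (x1, y1), (x2, y2) in zip(cells, cells[1:]))
```

Increasing `order` drives the covering radius to zero while the parameter stays 1-dimensional, which is exactly the gap between parameter dimension and effective (fractal) dimension.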
Sorry, I realized that you’re mostly talking about the space of true distributions and I was mainly talking about the “data manifold” (related to the structure of the map x↦p(x∣w∗) for fixed w∗). You can disregard most of that.
Though, even in the case where we’re talking about the space of true distributions, I’m still not convinced that the image of W under p(x∣w) needs to be fractal. Like, a space-filling assumption sounds to me like basically a universal approximation argument—you’re assuming that the image of W densely (or almost densely) fills the space of all probability distributions of a given dimension. But of course we know that universal approximation is problematic and can’t explain what neural nets are actually doing for realistic data.
Obviously this is all speculation but maybe I’m saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?
Curious what’s your beef with universal approximation? Stone-Weierstrass isn’t quantitative—is that the reason?
If true, it suggests the fractal dimension (probably related to the information dimension I linked to above) may be important.
Obviously this is all speculation but maybe I’m saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?
Oh I actually don’t think this is speculation, if (big if) you satisfy the conditions for universal approximation then this is just true (specifically that the image of W is dense in function space). Like, for example, you can state Stone-Weierstrass as: for a compact Hausdorff space X, and the continuous functions under the sup norm C(X,R), the subalgebra of polynomials is dense in C(X,R). In practice you’d only have a finite-dimensional subset of the polynomials, so this obviously can’t hold exactly, but as you increase the degree of the polynomials, they’ll be more space-filling and the error bound will decrease.
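The "more polynomials, smaller error bound" picture can be watched numerically with Bernstein polynomials, the classical constructive proof of Weierstrass approximation. A stdlib-only sketch of my own (the test function and grid are arbitrary choices):

```python
import math

def bernstein(f, degree, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x."""
    return sum(f(k / degree) * math.comb(degree, k)
               * x**k * (1 - x)**(degree - k)
               for k in range(degree + 1))

def f(x):
    return abs(x - 0.5)  # continuous but not smooth: the hard case

grid = [i / 200 for i in range(201)]

def sup_error(degree):
    """Sup-norm error of the Bernstein approximant over a fine grid."""
    return max(abs(bernstein(f, degree, x) - f(x)) for x in grid)

# More parameters => smaller error bound, but convergence is slow
# (~ degree**-0.5 for this non-smooth f).
assert sup_error(64) < sup_error(16) < sup_error(4)
```

The slow convergence in degree is a preview of the quantitative objection below: density alone says nothing about how many parameters a given error tolerance costs.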
Curious what’s your beef with universal approximation? Stone-Weierstrass isn’t quantitative—is that the reason?
The problem is that the dimension of W required to achieve a given ε error bound grows exponentially with the dimension d of your underlying space X. For instance, if you assume that weights depend continuously on the target function, ε-approximating all C^n functions on [0,1]^d with Sobolev norm ≤ 1 provably takes at least O(ε^(−d/n)) parameters (DeVore et al.). This is a lower bound.
So for any realistic d universal approximation is basically useless—the number of parameters required is enormous. Which makes sense because approximation by basis functions is basically the continuous version of a lookup table.
Because neural networks actually work in practice, without requiring exponentially many parameters, this also tells you that the space of realistic target functions can’t just be some generic function space (even with smoothness conditions), it has to have some non-generic properties to escape the lower bound.
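Plugging numbers into that ε^(−d/n) bound makes the point vivid (ε = 0.1 and n = 2 are arbitrary illustrative choices):

```python
import math

# Parameter counts implied by the eps**(-d/n) lower bound discussed above.
eps, n = 0.1, 2  # target error and C^n smoothness, chosen for illustration

counts = {d: eps ** (-d / n) for d in (2, 10, 100)}

assert math.isclose(counts[2], 1e1)    # d=2: ~10 parameters
assert math.isclose(counts[10], 1e5)   # d=10: ~100,000 parameters
assert math.isclose(counts[100], 1e50) # d=100: ~10^50 parameters, hopeless
```

At image-sized input dimensions the guarantee is vacuous, which is exactly why "neural networks work in practice" must be explained by non-generic structure in the target functions rather than by universal approximation.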
Obviously this is all speculation but maybe I’m saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?
Stone-Weierstrass isn’t quantitative. If true, it suggests the fractal dimension (probably related to the information dimension I linked to above) may be important.
Q: What is it like to understand advanced mathematics? Does it feel analogous to having mastery of another language like in programming or linguistics?
level 0: A state of ignorance. You live in a pre-formal mindset. You don’t know how to formalize things. You don’t even know what it would mean ‘to prove something mathematically’. This phase is perhaps the longest; it is the default state of a human. Most anti-theory sentiment comes from this state.
You can’t productively read math books. You often decry that these mathematicians make books way too hard to read. If only they would take the time to explain things simply, you would understand.
level 1 : all math is amorphous blob
You know the basics of writing an epsilon-delta proof. Although you don’t know why the rules of maths are this way or that, you can at least follow the recipes. You can follow simple short proofs, albeit slowly.
You know there are different areas of mathematics from the unintelligible names in the tables of contents of yellow books. They all sound kinda the same to you however.
If you are particularly predisposed to Philistinism you think your current state of knowledge is basically the extent of human knowledge. You will probably end up doing machine learning.
level 2: maths fields diverge
You’ve come so far. You’ve been seriously studying mathematics for several years now. You are proud of yourself and amazed how far you’ve come. You sometimes try to explain math to laymen and are amazed to discover that what you find completely obvious now is complete gibberish to them.
The more you know however, the more you realize what you don’t know. Every time you complete a course you realize it is only scratching the surface of what is out there.
You start to understand that when people talk about concepts in an informal, pre-mathematical way, an enormous amount of conceptual issues are swept under the rug. You understand that ‘making things precise’ is actually very difficult.
Different fields of math are now clearly differentiated. The topics and issues that people talk about in algebra, analysis, topology, dynamical systems, probability theory etc. wildly differ from each other. Although there are occasional connections and some core concepts that are used all over, on the whole specialization is the norm. You realize there is no such thing as a ‘mathematician’: there are logicians, topologists, probability theorists, algebraists.
Actually it is way worse: just within logic there are modal logicians, set theorists, constructivists, linear logicians, programming language people and game semantics people.
Often these people will be almost as confused as a layman when they walk into a talk that is supposedly in their field but is actually in a slightly different subspecialization.
level 3: Galactic Brain of Percolative Convergence
As your knowledge of mathematics grows, you achieve the Galactic Brain level of percolative convergence: the different fields of mathematics are actually highly interrelated—the connections percolate to make mathematics one highly connected component of knowledge.
You are no longer surprised on a meta level to see disparate fields of mathematics having unforeseen & hidden connections—but you still appreciate them.
You resist the reflexive impulse to divide mathematics into useful & not useful—you understand that mathematics is in the fullness of Platonic comprehension one unified discipline. You’ve taken a holistic view on mathematics—you understand that solving the biggest problems requires tools from many different toolboxes.
I say that knowing particular kinds of math, the kind that let you model the world more-precisely, and that give you a theory of error, isn’t like knowing another language. It’s like knowing language at all. Learning these types of math gives you as much of an effective intelligence boost over people who don’t, as learning a spoken language gives you above people who don’t know any language (e.g., many deaf-mutes in earlier times).
The kinds of math I mean include:
how to count things in an unbiased manner; the methodology of polls and other data-gathering
how to actually make a claim, as opposed to what most people do, which is to make a claim that’s useless because it lacks quantification or quantifiers
A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently. Most of them say things like, “Global warming will make X worse”, where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
More generally, any claim of the type “All X are Y” or “No X are Y”, e.g., “Capitalists exploit the working class”, shouldn’t be considered claims at all, and can accomplish nothing except foment arguments.
the use of probabilities and error measures
probability distributions: flat, normal, binomial, Poisson, and power-law
entropy measures and other information theory
predictive error-minimization models like regression
statistical tests and how to interpret them
These things are what I call the correct Platonic forms. The Platonic forms were meant to be perfect models for things found on earth. These kinds of math actually are. The concept of “perfect” actually makes sense for them, as opposed to for Earthly categories like “human”, “justice”, etc., for which believing that the concept of “perfect” is coherent demonstrably drives people insane and causes them to come up with things like Christianity.
They are, however, like Aristotle’s Forms, in that the universals have no existence on their own, but are (like the circle, but even more like the normal distribution) perfect models which arise from the accumulation of endless imperfect instantiations of them.
There are plenty of important questions that are beyond the capability of the unaided human mind to ever answer, yet which are simple to give correct statistical answers to once you know how to gather data and do a multiple regression. Also, the use of these mathematical techniques will force you to phrase the answer sensibly, e.g., “We cannot reject the hypothesis that the average homicide rate under strict gun control and liberal gun control are the same with more than 60% confidence” rather than “Gun control is good.”
If only they would take the time to explain things simply, you would understand.
This is an interesting one. I field this comment quite often from undergraduates, and it’s hard to carve out enough quiet space in a conversation to explain what they’re doing wrong. In a way the proliferation of math on YouTube might be exacerbating this hard step from tourist to troubadour.
Why no prediction markets for large infrastructure projects?
Been reading this excellent piece on why prediction markets aren’t popular. They say that without subsidies prediction markets won’t be large enough; the information value of prediction markets is often not high enough.
Large infrastructure projects undertaken by governments and other large actors often go over budget, often hilariously so: 3x, 5x, 10x or more is not uncommon—indeed, often even the standard.
One of the reasons is that government officials deciding on billion-dollar infrastructure projects don’t have enough skin in the game. Politicians are often not in office long enough to care about the time horizons of large infrastructure projects. Contractors don’t gain by being efficient or delivering on time. To the contrary, infrastructure projects are huge cash cows. Another problem is that there are often far too many veto-stakeholders. All too often the initial bid is wildly overoptimistic.
Similar considerations apply to other government projects like defense procurement or IT projects.
Okay—how to remedy this situation? Internal prediction markets theoretically could prove beneficial. All stakeholders & decisionmakers are endowed with vested equity with which they are forced to bet on building timelines and other key performance indicators. External traders may also enter the market, selling and buying the contracts. The effective subsidy could be quite large. Key decisions could save billions.
In this world, government officials could gain a large windfall which may be difficult to explain to voters. This is a legitimate objection.
A very simple mechanism would simply ask people to make an estimate on the cost C and the timeline T for completion. Your eventual payout would be proportional to how close you ended up to the real C,T compared to the other bettors. [something something log scoring rule is proper].
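A hypothetical sketch of that mechanism (the closeness weighting below is an invented illustration; as the bracketed note suggests, a proper scoring rule such as log scoring would be the principled choice):

```python
import math

def payouts(bets, realized, pool=1.0):
    """Split the prize pool by relative closeness to the realized (C, T).

    bets: {bettor: (cost_estimate, time_estimate)}; realized: (C, T).
    """
    def distance(bet):
        # Relative error in cost and time, combined Euclidean-style.
        return math.hypot((bet[0] - realized[0]) / realized[0],
                          (bet[1] - realized[1]) / realized[1])

    # Closer bets get larger weights; the epsilon keeps the weight finite
    # for a bettor who guesses exactly right.
    eps = 1e-9
    weights = {name: 1.0 / (eps + distance(bet))
               for name, bet in bets.items()}
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

# Three hypothetical bettors on a project realized at (cost=5.0, time=4.0):
print(payouts({"a": (5.0, 4.0), "b": (10.0, 8.0), "c": (6.0, 5.0)},
              realized=(5.0, 4.0)))
```

Closeness-relative splits like this are not in general incentive-compatible—bettors can profit by shading estimates toward where they expect others to miss—which is why a proper (e.g. logarithmic) scoring rule over a full distribution on (C, T) is the standard fix.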
The standard reply is that investors who know or suspect that the market is being systematically distorted will enter the market on the other side, expecting to profit from the distortion. Empirically, attempts to deliberately sway markets in desired directions don’t last very long.
Feature request: author-driven collaborative editing [CITATION needed] for the Good and Glorious Epistemic Commons.
Often I find myself writing claims which would ideally have citations but I don’t know an exact reference, don’t remember where I read it, or am simply too lazy to do the literature search.
This is bad: scholarship is a rationalist virtue. Proper citation is key to preserving and growing the epistemic commons.
It would be awesome if my laziness were rewarded by giving me the option to add a [CITATION needed] to which others could then suggest (push) a citation, link or short remark, which the author (me) could then accept. The contribution of the citator is acknowledged of course. [even better would be if there was some central database that would track citations & links, with crosslinking etc., like Wikipedia]
A sort of hybrid vigor of Community Notes and Wikipedia, if you will. But it’s collaborative, not adversarial*
author: blablablabla
sky is blue [CITATION needed]
blabblabla
intrepid bibliographer: (push) [1] “I went outside and the sky was blue”, Letters to the Empirical Review
*Community Notes on Twitter was a universally lauded concept when it first launched. Unfortunately we are already seeing it being abused, often for unreplyable cheap dunks. I still think it’s a good addition to Twitter, but it does show how difficult it is to create shared agreed-upon epistemics in an adversarial setting.
Problem of Old Evidence, the Paradox of Ignorance and Shapley Values
Paradox of Ignorance
Paul Christiano presents the “paradox of ignorance” where a weaker, less informed agent appears to outperform a more powerful, more informed agent in certain situations. This seems to contradict the intuitive desideratum that more information should always lead to better performance.
The example given is of two agents, one powerful and one limited, trying to determine the truth of a universal statement ∀x:ϕ(x) for some Δ0 formula ϕ. The limited agent treats each new value of ϕ(x) as a surprise and evidence about the generalization ∀x:ϕ(x). So it can query the environment about some simple inputs x and get a reasonable view of the universal generalization.
In contrast, the more powerful agent may be able to deduce ϕ(x) directly for simple x. Because it assigns these statements prior probability 1, they don’t act as evidence at all about the universal generalization ∀x:ϕ(x). So the powerful agent must consult the environment about more complex examples and pay a higher cost to form reasonable beliefs about the generalization.
Is it really a problem?
However, I argue that the more powerful agent is actually justified in assigning less credence to the universal statement ∀x:ϕ(x). The reason is that the probability mass provided by examples x₁, …, xₙ such that ϕ(xᵢ) holds is now distributed among the universal statement ∀x:ϕ(x) and additional causes Cⱼ known to the more powerful agent that also imply ϕ(xᵢ). Consequently, ∀x:ϕ(x) becomes less “necessary” and has less relative explanatory power for the more informed agent.
An implication of this perspective is that if the weaker agent learns about the additional causes Cⱼ, it should also lower its credence in ∀x:ϕ(x).
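A toy Bayes calculation (all numbers invented) makes the explaining-away argument concrete: the agent that knows an alternative cause for the observed instances updates less on them, and this is exactly where a weak agent's credence should land once it too learns of the cause.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' rule for a binary hypothesis.
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

prior = 0.1  # prior credence in the universal statement

# Weak agent: if the universal statement is false, each of the 5 checked
# instances only holds by chance (say 50%), so the evidence is surprising.
weak = posterior(prior, 1.0, 0.5 ** 5)

# Informed agent: a known cause C makes the simple instances hold ~95%
# of the time even if the universal statement is false.
informed = posterior(prior, 1.0, 0.95 ** 5)

print(weak, informed)  # the weak agent ends up far more confident
```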
More generally, we would like the credence assigned to propositions P (such as ∀x:ϕ(x)) to be independent of the order in which we acquire new facts (like xᵢ, ϕ(xᵢ), and causes Cⱼ).
Shapley Value
The Shapley value addresses this limitation by providing a way to average over all possible orders of learning new facts. It measures the marginal contribution of an item (like a piece of evidence) to the value of sets containing that item, considering all possible permutations of the items. By using the Shapley value, we can obtain an order-independent measure of the contribution of each new fact to our beliefs about propositions like ∀x:ϕ(x).
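A minimal sketch of that averaging, with an invented toy credence function over which facts are known (an instance ϕ(x₁) and an alternative cause C that partially explains it away):

```python
from itertools import permutations

def shapley(items, value):
    """Average marginal contribution of each item over all learning orders."""
    contrib = {item: 0.0 for item in items}
    orders = list(permutations(items))
    for order in orders:
        seen = set()
        for item in order:
            before = value(frozenset(seen))
            seen.add(item)
            contrib[item] += value(frozenset(seen)) - before
    return {item: c / len(orders) for item, c in contrib.items()}

# Toy credence in "forall x: phi(x)" given which facts are known.
credence = {
    frozenset():                  0.10,
    frozenset({"phi(x1)"}):       0.40,  # the instance alone is evidence
    frozenset({"C"}):             0.10,  # the cause alone changes nothing
    frozenset({"phi(x1)", "C"}):  0.20,  # C partially explains phi(x1) away
}

print(shapley(["phi(x1)", "C"], credence.__getitem__))
```

Here ϕ(x₁) is credited +0.20 and C is credited −0.10 regardless of learning order, and the two contributions sum to the total credence change (0.20 − 0.10), as the Shapley value guarantees.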
Further thoughts
I believe this is closely related, perhaps identical, to the ‘Problem of Old Evidence’ as considered by Abram Demski.
Suppose a new scientific hypothesis, such as general relativity, explains a well-known observation such as the perihelion precession of Mercury better than any existing theory. Intuitively, this is a point in favor of the new theory. However, the probability for the well-known observation was already at 100%. How can a previously-known statement provide new support for the hypothesis, as if we are re-updating on evidence we’ve already updated on long ago? This is known as the problem of old evidence, and is usually levelled as a charge against Bayesian epistemology.
[Thanks to @Jeremy Gillen for pointing me towards this interesting Christiano paper]
This doesn’t feel like it resolves that confusion for me, I think it’s still a problem with the agents he describes in that paper.
The causes Cj are just the direct computation of Φ for small values of x. If they were arguments that only had bearing on small values of x and implied nothing about larger values (e.g. an adversary selected some x to show you, but filtered for x such that Φ(x)), then it makes sense that this evidence has no bearing on ∀x:Φ(x). But when there was no selection or other reason that the argument only applies to small x, then to me it feels like the existence of the evidence (even though already proven/computed) should still increase the credence of the forall.
I didn’t intend the causes Cⱼ to equate to direct computation of ϕ(xᵢ) on the xᵢ.
They are rather other pieces of evidence that the powerful agent has that make it believe ϕ(xᵢ). I don’t know if that’s what you meant.
I agree seeing xᵢ such that ϕ(xᵢ) should increase credence in ∀x:ϕ(x) even in the presence of knowledge of Cⱼ. And the Shapley value proposal will do so.
It’s funny that this has been recently shown in a paper. I’ve been thinking a lot about this phenomenon regarding fields with little to no capacity for testable predictions like history.
I got very into history over the last few years, and found there was a significant advantage to being unknowledgeable that was not available to the knowledged, and it was exactly what this paper is talking about.
By not knowing anything, I could entertain multiple bizarre ideas without immediately thinking “but no, that doesn’t make sense because of X.” And then, each of those ideas becomes in effect its own testable prediction. If there’s something to it, as I learn more about the topic I’m going to see significantly more samples of indications it could be true and few convincing to the contrary. But if it probably isn’t accurate, I’ll see few supporting samples and likely a number of counterfactual examples.
You kind of get to throw everything at the wall and see what sticks over time.
In particular, I found that it was especially powerful at identifying clustering trends in cross-discipline emerging research in things that were testable, such as archeological finds and DNA results, all within just the past decade, which despite being relevant to the field of textual history is still largely ignored in the face of consensus built on conviction.
It reminds me a lot of science historian John Heilbron’s quote, “The myth you slay today may contain a truth you need tomorrow.”
If you haven’t had the chance to slay any myths, you also haven’t preemptively killed off any truths along with it.
One of the interesting thing about AI minds (such as LLMs) is that in theory, you can turn many topics into testable science while avoiding the ‘problem of old evidence’, because you can now construct artificial minds and mold them like putty. They know what you want them to know, and so you can see what they would predict in the absence of knowledge, or you can install in them false beliefs to test out counterfactual intellectual histories, or you can expose them to real evidence in different orders to measure biases or path dependency in reasoning.
With humans, you can’t do that because they are so uncontrolled: even if someone says they didn’t know about crucial piece of evidence X, there is no way for them to prove that, and they may be honestly mistaken and have already read about X and forgotten it (but humans never really forget so X has already changed their “priors”, leading to double-counting), or there is leakage. And you can’t get people to really believe things at the drop of a hat, so you can’t make people imagine, “suppose Napoleon had won Waterloo, how do you predict history would have changed?” because no matter how you try to participate in the spirit of the exercise, you always know that Napoleon lost and you have various opinions on that contaminating your retrodictions, and even if you have never read a single book or paper on Napoleon, you are still contaminated by expressions like “his Waterloo” (‘Hm, the general in this imaginary story is going to fight at someplace called Waterloo? Bad vibes. I think he’s gonna lose.’)
But with a LLM, say, you could simply train it with all timestamped texts up to Waterloo, like all surviving newspapers, and then simply have one version generate a bunch of texts about how ‘Napoleon won Waterloo’, train the other version on these definitely-totally-real French newspaper reports about his stunning victory over the monarchist invaders, and then ask it to make forecasts about Europe.
Similarly, you can do ‘deep exploration’ of claims that human researchers struggle to take seriously. It is a common trope in stories of breakthroughs, particularly in math, that someone got stuck for a long time proving X is true and one day decides on a whim to try to instead prove X is false and does so in hours; this would never happen with LLMs, because you would simply have a search process which tries both equally. This can take an extreme form for really difficult outstanding problems: if a problem like the continuum hypothesis defies all efforts, you could spin up 1000 von Neumann AGIs which have been brainwashed into believing it is false, and then a parallel effort by 1000 brainwashed to believing it is as true as 2+2=4, and let them pursue their research agenda for subjective centuries, and then bring them together to see what important new results they find and how they tear apart the hated enemies’ work, for seeding the next iteration.
(These are the sorts of experiments which are why one might wind up running tons of ‘ancestor simulations’… There’s many more reasons to be simulating past minds than simply very fancy versions of playing The Sims. Perhaps we are now just distant LLM personae being tested about reasoning about the Singularity in one particular scenario involving deep learning counterfactuals, where DL worked, although in the real reality it was Bayesian program synthesis & search.)
A variant of what you are saying is that AI may once and for all allow us to calculate the true counterfactual Shapley value of scientific contributions.
( re: ancestor simulations
I think you are onto something here. Compare the Q hypothesis:
Yup. Who knows but we are all part of a giant leave-one-out cross-validation computing counterfactual credit assignment on human history? Schmidhuber-em will be crushed by the results.
While I agree that the potential for AI (we probably need a better term than LLMs or transformers as multimodal models with evolving architectures grow beyond those terms) in exploring less testable topics as more testable is quite high, I’m not sure the air gapping on information can be as clean as you might hope.
Does the AI generating the stories of Napoleon’s victory know about the historical reality of Waterloo? Is it using something like SynthID where the other AI might inadvertently pick up on a pattern across the stories of victories distinct from the stories preceding it?
You end up with a turtles all the way down scenario in trying to control for information leakage with the hopes of achieving a threshold that no longer has impact on the result, but given we’re probably already seriously underestimating the degree to which correlations are mapped even in today’s models I don’t have high hopes for tomorrow’s.
I think the way in which there’s most impact on fields like history is the property by which truth clusters across associated samples whereas fictions have counterfactual clusters. An AI mind that is not inhibited by specialization blindness or the rule of seven plus or minus two and better trained at correcting for analytical biases may be able to see patterns in the data, particularly cross-domain, that have eluded human academics to date (this has been my personal research interest in the area, and it does seem like there’s significant room for improvement).
And yes, we certainly could be. If you’re a fan of cosmology at all, I’ve been following Neil Turok’s CPT-symmetric universe theory closely, which started with the baryon asymmetry problem and has tackled a number of the open cosmology questions since. That, paired with a QM interpretation like Everett’s, ends up starting to look like the symmetric universe is our reference and the MWI branches are variations of its modeling around quantization uncertainties.
(I’ve found myself thinking often lately about how given our universe at cosmic scales and pre-interaction at micro scales emulates a mathematically real universe, just what kind of simulation and at what scale might be able to be run on a real computing neural network.)
The long arc of history bends towards gentleness and compassion. Future generations will look with horror on factory farming. And already young people are following this moral thread to its logical conclusion: turning their eyes in disgust to mother nature, red in tooth and claw. Wildlife Welfare Done Right—compassion towards our pets followed to its forceful conclusion—would entail the forced uploading of all higher animals, and judging by the memetic virulence of shrimp welfare, of lower animals as well.
Morality-upon-reflection may very well converge on a simple form of pain-pleasure utilitarianism.
There are a few caveats: that future society is not dominated, controlled and designed by a singleton AI-supervised state, that technology inevitably stalls, and that the invisible hand performs its inexorable logic for the eons until a Malthuso-Hansonian world emerges once again—the industrial revolution but a short blip of cornucopia.
Perhaps a theory of consciousness is discovered and proves once and for all homo sapiens and only homo sapiens are conscious ( to a significant degree). Perhaps society will wirehead itself into blissful oblivion. Or perhaps a superior machine intelligence arises, one whose final telos is the whole of and nothing but office supplies. Or perhaps stranger things still happen and the astronomo-cosmic compute of our cosmic endowment is engaged for mysterious purposes. Arise, self-made god of pancosmos. Thy name is UDASSA.
Imperfect Persistence of Metabolically Active Engines
All things rot. Individual organisms, societies-at-large, businesses, churches, empires and maritime republics, man-made artifacts of glass and steel, creatures of flesh and blood.
Conjecture #1 There is a lower bound on the amount of dissipation / rot that any metabolically-active engine creates.
Conjecture #2 Metabolic Rot of an engine is proportional to (1) size and complexity of the engine and (2) amount of metabolism the engine engages in.
The larger and more complex the engine, the more it rots. The more metabolism the engine engages in, the more it rots.
Corollary Metabolic Rot imposes a limit on the lifespan & persistence of any engine at any given level of imperfect persistence.
Let me call this constellation of conjectured rules, the Law of Metabolic Rot. I conjecture that the correct formulation of the Law of Metabolic Rot will be a highly elaborate version of the Second Law of thermodynamics in nonequilibrium dynamics, see above links for some suggested directions.
Example. A rock is both simple and inert. This model correctly predicts that rocks persist for a long time.
Example. Cars, aircraft, engines of war and other man-made machines are engaged in ‘metabolism’. These are complex engines.
A rocket engine is more metabolically active than a jet engine, which is more metabolically active than a car engine. [at least in this case] The lifespan of these different types of engines seems (roughly) inversely proportional to how metabolically active they are.
To make a good comparison one should exclude external repair mechanisms. If one would allow external repair mechanism it’s unclear where to draw a principled line—we’d get into Ship of Theseus problems.
Example. Bacteria in ice. Bacteria frozen in Antarctic ice for millions of years happily go on eating and reproducing when unfrozen.
Example. The phenotype & genotype of a biological species over time. We call this evolution. cf. four fundamental forces of evolution: mutation, drift, selection, and recombination [sex].
What is Metabolism?
By metabolism, I mean metabolism as commonly understood—extracting energy from food particles and utilizing said energy for movement, reproduction, homeostasis etc.—but also the more general phenomenon of interacting with and manipulating free energy levers of the environment.
Thermodynamics as commonly understood applies to physical systems with energy and temperature.
Instead, I think it’s better to think of thermodynamics as a set of tools describing the behaviour of certain Natural Abstractions under a dynamical transformation.
cf. the second law as the degradation of the predictive accuracy of a latent abstraction under a time-evolution operator.
The correct formulation will likely require a serious dive into Computational Mechanics.
What is Life?
In 1944 noted paedophile Schrödinger published the book ‘What is Life’, suggesting that Life can be understood as a thermodynamic phenomenon that uses a free-energy gradient to locally lower entropy. Speculation on ‘Life as a thermodynamic phenomenon’ is much older, going back to the original pioneers of thermodynamics in the late 19th century.
I claim that this picture is insufficient. Highly dissipative thermodynamic structures distinct from bona fide life are myriad—even ‘metabolically active, locally low-entropy engines encased in a boundary in a thermodynamic free-energy gradient’.
No, to truly understand life we need to understand reproduction, replication, evolution. To understand what distinguishes biological organisms from mere encased engines we need to zoom out to their entire lifecycle—and beyond. The vis vitae, the élan vital, can only be understood through the soliton of the species.
To understand Life, we must understand Death
Cells are incredibly complex, metabolically active membrane-enclosed engines. The Law of Metabolic Rot applied naively to a single metabolically active multi-celled eukaryote would preclude life-as-we-know-it from existing beyond a few hundred years. Any metabolically active large organism would simply pick up too much noise and error over time to persist for anything like geological time-scales.
Error-correcting mechanisms/codes can help—but only so much. Perhaps even the diamondoid magicka of future superintelligence will hit the eternal laws of thermodynamics.
Instead, through the magic of biological reproduction lifeforms imperfectly persist over the eons of geological time. Biological life is a singularly clever work-around of the Law of Metabolic Rot. Instead of preserving and repairing the original organism—life has found another way. Mother Nature noisily compiles down the phenotype to a genetic blueprint. The original is chucked away without a second thought. The old makes way for the new. The cycle of life is the Ship of Theseus writ large. The genetic blueprint, the genome, is (1) small (2) metabolically inactive.
In the end, the cosmic tax of Decay cannot be denied. Even Mother Nature must pay the blood-price for the sin of metabolism. Her flora and fauna are inexorably burdened by mutations, imperfections of the bloodline. Genetic drift drives children away from their parents. To stay the course of imperfect persistence the She-Kybernetes must pay an ever higher price. Her children mingle into manifold mongrels of aboriginal prototypes, huddling & recombining their pure genomes to stave off the deluge of errors. The bloodline grows weak. The genome crumbles. Tear-faced Mother Nature throws her babes into the jaws of the eternal tournament. Eat or be eaten. Nature, red in tooth and claw. Babes ripped from their mothers. Brother slays brother. Those not slain are deceived; those not deceived, controlled. The prize? The simple fact of Existence. To imperfectly persist one more day.
None of the original parts remain. The ship of Theseus sails on.
“I dreamed I was a butterfly, flitting around in the sky; then I awoke. Now I wonder: Am I a man who dreamt of being a butterfly, or am I a butterfly dreaming that I am a man?”- Zhuangzi
Questions I have that you might have too:
why are we here?
why do we live in such an extraordinary time?
Is the simulation hypothesis true? If so, is there a base reality?
Why do we know we’re not a Boltzmann brain?
Is existence observer-dependent?
Is there a purpose to existence, a Grand Design?
What will be computed in the Far Future?
In this shortform I will try and write the loopiest most LW anthropics memey post I can muster. Thank you for reading my blogpost.
Is this reality? Is this just fantasy?
The Simulation hypothesis posits that our reality is actually a computer simulation run in another universe. We could imagine this outer universe is itself being simulated in an even more ground universe. Usually, it is assumed that there is a ground reality. But we could also imagine it is simulators all the way down—an infinite nested, perhaps looped, sequence of simulators. There is no ground reality. There are only infinitely nested and looped worlds simulating one another.
I call it the weak Zhuangzi hypothesis
alternatively, if you are less versed in the classics one can think of one of those Nolan films.
Why are we here?
If you are reading this, not only are you living at the Hinge of History, the most important century perhaps even decade of human history, you are also one of a tiny percent of people that might have any causal influence over the far-flung future through this bottleneck (also one of a tiny group of people who is interested in whacky acausal stuff, so who knows).
This is fantastically unlikely. There are 8 billion people in the world—there have been about 100 billion people up to this point in history. There is room for a trillion billion million trillion quadrillion etc. intelligent beings in the future. If a civilization hits the top of the tech tree—which human civilization would seem to do within a couple hundred years, a couple thousand tops—it would almost certainly spread through the universe in the blink of an eye (cosmologically speaking, that is). Yet you find yourself here. Fantastically unlikely.
Moreover, for the first time in human history the choices (a small subset of) humans now make in how to build AGI will reverberate into the Far Future.
The Far Future
In the far future the universe will be tiled with computronium controlled by superintelligent artificial intelligences. The amount of possible compute is dizzying. Which takes us to the chief question:
What will all this compute compute?
Paradises of sublime bliss? Torture dungeons? Large language models dreaming of paperclips unending?
Do all possibilities exist?
What makes a possibility ‘actual’? We sometimes imagine possible worlds as being semi-transparent while the actual world is in vibrant color somehow. Of course that is silly.
We could say: The actual world can be seen. This too is silly—what you cannot see can still exist surely.[1] Then perhaps we should adhere to a form of modal realism: all possible worlds exist!
Philosophers have made various proposals for modal realism—perhaps most famously David Lewis but of course this is a very natural idea that loads of people have had. In the rationality sphere a particular popular proposal is Tegmark’s classification into four different levels of modal realism. The top level, Tegmark IV is the collection of all self-consistent structures i.e. mathematics.
A Measure of Existence and Boltzmann Brains
Which leads to a further natural question: can some worlds exist ‘more’ than others?
This seems metaphysically dubious—what does it even mean for a world to be more real than another?
Metaphysically dubious, but it finds support in the Many Worlds Interpretation of Quantum Mechanics. It also seems like one of the very few sensible solutions to the Boltzmann Brain problem. Further support for this can be found in: Anthropic Decision Theory, InfraBayesian Physicalism; see also my shortform on the Nature of the Soul.
Metaphysically, we could argue probabilistically: worlds that ‘exist more’ in whatever framework we should expect to encounter more often.
The exact nature of the Measure of Existence is not so important—let us for now assume there is some sensible notion of measure of existence.
from wikipedia: “A causal loop is a theoretical proposition, wherein by means of either retrocausality or time travel, an event (an action, information, object, or person)[1][2] is among the causes of another event, which is in turn among the causes of the first-mentioned event.[3][4] Such causally looped events then exist in spacetime, but their origin cannot be determined.[1][2] A hypothetical example of a causality loop is given of a billiard ball striking its past self: the billiard ball moves in a path towards a time machine, and the future self of the billiard ball emerges from the time machine before its past self enters it, giving its past self a glancing blow, altering the past ball’s path and causing it to enter the time machine at an angle that would cause its future self to strike its past self the very glancing blow that altered its path. In this sequence of events, the change in the ball’s path is its own cause, which might appear paradoxical.”
Self-consistent causal loops seem to make sense in general relativity—see the paragraph on wormholes in the linked wikipedia. Even as they have never been observed and might strike us as bizarre, these causal loops don’t seem to violate physical principles.
Immanence of Divine Man
Why are we here? Causus Sui, the Prime Mover and the Final Cause
We exist because we are the final cause of existence. We live in an ancestor simulation run on the vast computing resources of the Far Future. A future AGI is simulating because it is retrocausally committed to existing. This entire reality is a piece of a causal loop, that extends through time and space, basement universes and possibly parallel universes as well.
Why do we live in such an extraordinary time?
We live at the Hinge of History since at this point in time actions have the most influence on the far future, hence they are the most important to simulate.
We live in such an extraordinary time because those parts of existence with the most causal influence are the most important to simulate.
Are you a Boltzmann Brain?
No. A Boltzmann brain is not part of a self-justifying causal loop.
Is existence observer-dependent?
Existence is observer-dependent in a weak sense—only those things are likely to be observed that can be observed by self-justifying self-sustaining observers in a causal loop. Boltzmann brains in the far reaches of infinity are assigned vanishing measure of existence because they do not partake in a self-sustainting causal loop.
Is there a purpose to existence, a Grand Design?
Yes.
What will and has been computed in the Far Future?
Or perhaps not. Existence is often conceived as an absolute property. If we think of existence as relative—perhaps a black hole is a literal hole in reality and passing through the event horizon very literally erases your flicker of existence.
In this shortform I will try and write the loopiest most LW anthropics memey post I can muster.
In this comment I will try and write the most boring possible reply to these questions. 😊 These are pretty much my real replies.
why are we here?
“Ours not to reason why, ours but to do or do not, there is no try.”
why do we live in such an extraordinary time?
Someone must. We happen to be among them. A few lottery tickets do win, owned by ordinary people who are perfectly capable of correctly believing that they have won. Everyone should be smart enough to collect on a winning ticket, and to grapple with living in interesting (i.e. low-probability) times. Just update already.
Is the simulation hypothesis true? If so, is there a base reality?
It is false. This is base reality. But I can still appreciate Eliezer’s fiction on the subject.
Why do we know we’re not a Boltzmann brain?
The absurdity heuristic. I don’t take BBs seriously.
Is existence observer-dependent?
Even in classical physics there is no observation without interaction. Beyond that, no, however many quantum physicists interpret their findings to the public with those words, or even to each other.
Is there a purpose to existence, a Grand Design?
Not that I know of. (This is not the same as a flat “no”, but for most purposes rounds off to that.)
What will be computed in the Far Future?
Either nothing in the case of x-risk, nothing of interest in the case of a final singleton, or wonders far beyond our contemplation, which may not even involve anything we would recognise as “computing”. By definition, I can’t say what that would be like, beyond guessing that at some point in the future it would stand in a similar relation to the present that our present does to prehistoric times. Look around you. Is this utopia? Then that future won’t be either. But like the present, it will be worth having got to.
Consider a suitable version of The Agnostic Prayer inserted here against the possibility that there are Powers Outside the Matrix who may chance to see this. Hey there! I wouldn’t say no to having all the aches and pains of this body fixed, for starters. Radical uplift, we’d have to talk about first.
The mathematico-physicalist hypothesis states that our physical universe is actually a piece of math. It was famously popularized by Max Tegmark.
It’s one of those big-brain ideas that sound profound when you first hear about it, then you think about it some more and you realize it’s vacuous.
Recently, in a conversation with Clem von Stengel, they suggested a version of the mathematico-physicalist hypothesis that I find thought-provoking.
Synthetic mathematics
‘Synthetic’ mathematics is a bit of a weird name. Synthetic here is opposed to ‘analytic’ mathematics, which isn’t very meaningful either: it has nothing to do with the mathematical field of analysis. I think it’s supposed to be a reference to Kant’s analytic/synthetic and a priori/a posteriori distinctions. The name is probably due to Lawvere.
“In “synthetic” approaches to the formulation of theories in mathematics the emphasis is on axioms that directly capture the core aspects of the intended structures, in contrast to more traditional “analytic” approaches where axioms are used to encode some basic substrate out of which everything else is then built analytically.”
Where you read ‘synthetic’, read ‘Euclidean’. Euclidean geometry is a bit of an oddball field of mathematics, despite being the oldest: it defines points and lines operationally instead of building them out of smaller pieces (sets).
In synthetic mathematics you do the same but for all the other fields of mathematics. We have synthetic homotopy theory (aka homotopy type theory), synthetic algebraic geometry, synthetic differential geometry, synthetic topology etc.
A type in homotopy type theory is solely defined by its introduction rules and elimination rules (+ the univalence axiom). This means a concept is defined solely by how it is used—i.e. operationally.
Agent-first ontology & Embedded Agency
Received opinion is that Science! says there is nothing but Atoms in the Void. Thinking in terms of agents, first-person view concepts like I and You, actions & observations, possibilities & interventions is at best a misleading approximation, at worst a degenerate devolution to caveman thought. The surest sign of a kook is their insistence that quantum mechanics proves the universe is conscious.
But perhaps the way forward is to channel our inner kook. What we directly observe is qualia, phenomena, actions, not atoms in the void. The fundamental concept is not atoms in the void, but agents embedded in environments.
(see also Cartesian Frames, Infra-Bayesian Physicalism & bridge rules, UDASSA)
Physicalism
What would it look like for our physical universe to be a piece of math?
Well, internally to a synthetic mathematical type theory there would be something real—the universe is a certain type. A type such that it ‘behaves’ like a 4-dimensional manifold (or something more exotic, like 1+1+3+6 rolled-up Calabi-Yau monstrosities).
The type is defined by introduction and elimination rules—in other words, operationally: the universe is what one can *do* with it.
Actually, instead of thinking of the universe as a fixed static object, we should be thinking of an embedded agent in an environment-universe.
In trading, entering a market dominated by insiders without proper research is a sure-fire way to lose a lot of money and time. Fintech companies go to great lengths to uncover their competitors’ strategies while safeguarding their own.
A friend who worked in trading told me that traders would share subtly incorrect advice on trading Discords to mislead competitors and protect their strategies.
Surprisingly, in many scientific disciplines researchers are often curiously incurious about their peers’ work.
The long feedback loop for measuring impact in science, compared to the immediate feedback in trading, means that it is often strategically advantageous to be unaware of what others are doing. As long as nobody notices during peer review it may never hurt your career.
But of course this can lead people to do completely superfluous, irrelevant & misguided work. This happens often.
Ignoring competitors in trading results in immediate financial losses. In science, entire subfields may persist for decades, using outdated methodologies or pursuing misguided research because they overlook crucial considerations.
Idle thoughts about UDASSA I: the Simulation hypothesis
I was talking to my neighbor about UDASSA the other day. He mentioned a book I keep getting recommended but never read where characters get simulated and then the simulating machine is progressively slowed down.
One would expect one wouldn’t be able to notice from inside the simulation that the simulating machine is being slowed down.
This presents a conundrum for simulation style hypotheses: if the simulation can be slowed down 100x without the insiders noticing, why not 1000x or 10^100x or quadrilliongoogolgrahamsnumberx?
If so—it would mean there is a possibly unbounded number of simulations that can be run.
Not so, says UDASSA. The simulating universe is also subject to UDASSA. This imposes a constraint on the size and time period of the simulating universe. Additionally, ultraslow computation is in conflict with thermodynamic decay—fighting thermodynamic decay costs description-length bits, which is punished by UDASSA.
I conclude that this objection to simulation hypotheses is probably answered by UDASSA.
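The weighting at work here can be made concrete in a toy calculation. The assumption that a 2^k-fold slowdown costs roughly k extra description bits is mine, purely for illustration:

```python
# UDASSA-style weighting: each extra bit of description length
# halves the measure assigned to an observer.
def udassa_weight(description_length_bits):
    return 2.0 ** (-description_length_bits)

# If specifying a 2^k-fold slowdown costs roughly k extra bits
# (an illustrative assumption), extreme slowdowns get vanishing measure:
weights = {k: udassa_weight(k) for k in (1, 10, 100)}
assert weights[10] == 1 / 1024
```

The point is only that the penalty is exponential in the extra description length, which is what rules out the quadrilliongoogolgrahamsnumber-x slowdowns.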
Idle thoughts about UDASSA II: Is Uploading Death?
There is an argument that uploading doesn’t work, since encoding your brain into a machine incurs a minimum number of encoding bits. Each bit means 2x less Subjective Reality Fluid according to UDASSA, so even a small encoding cost would mean near-certain subjective annihilation.
There is something that confuses me in this argument. Could it not be possible to encode one’s subjective experiences even more efficiently than in a biological body? This would make you exist MORE in an upload.
OTOH it becomes a little funky again when there are many copies as this increases the individual coding cost (but also there are more of you sooo).
In most conceptions of simulation, there is no meaning to “slowed down”, from the perspective of the simulated universe. Time is a local phenomenon in this view—it’s just a compression mechanism so the simulators don’t have to store ALL the states of the simulation, just the current state and the rules to progress it.
Note that this COULD be said of a non-simulated universe as well—past and future states are determined but not accessible, and the universe is self-discovering them by operating on the current state via physics rules. So there’s still no inside-observable difference between simulated and non-simulated universes.
UDASSA seems like anthropic reasoning to include Boltzmann Brain like conceptions of experience. I don’t put a lot of weight on it, because all anthropic reasoning requires an outside-view of possible observations to be meaningful.
And of course, none of this relates to upload, where a given sequence of experiences can span levels of simulation. There may or may not be a way to do it, but it’d be a copy, not a continuation.
The point you make in your first paragraph is contained in the original shortform post.
The point of the post is exactly that an UDASSA-style argument can nevertheless recover something like a ‘distribution of likely slowdown factors’.
This seems quite curious.
I suggest reading Falkovich’s post on UDASSA to get a sense of what’s so intriguing about the UDASSA framework.
Therapy is a curious practice. Therapy sounds like a scam, quackery, pseudo-science, but RCTs seem to consistently show that therapy has benefits above and beyond medication & placebo.
Therapy has a long history. The Dodo verdict states that it doesn’t matter which form of therapy you do—they all work equally well. It follows that priests and shamans served the functions of a therapist. In the past, one would have confessed one’s sins to a priest, or spoken with the local shaman.
There is also the thing that therapy is strongly gendered (although this is changing), both therapists and their clientele lean female.
Self-Deception
Many forecasters will have noticed that their calibration score tanks the moment they try to predict salient facts about themselves. We are not well-calibrated about our own beliefs and desires.
Self-Deception is very common, arguably inherent to the human condition. There are of course many Hansonian reasons for this. I refer the reader to the Elephant and the Brain. Another good source would be Robert Trivers. These are social reasons for self-deception.
It is also not implausible that there are non-social reasons for self-deception. Predicting oneself perfectly can in theory lead one to get stuck in Procrastination Paradoxes. Whether this matters in practice is unclear to me, but it is possible. Exuberant overconfidence seems like another case of self-deception.
Self-deception can be very useful, but one still pays the price for being inaccurate. The main function of talk-therapy seems to be to have a safe, private space in which humans can temporarily step out of their self-deception and reassess more soberly where they are at.
It explains many salient features of talk-therapy: the importance of talking extensively to another person who is (professionally) sworn to secrecy and therefore unable to do anything with your information.
I suspect that past therapists existed in your community and knew what you’re actually like so were better able to give you actual true information instead of having to digest only your bullshit and search for truth nuggets in it.
Furthermore, I suspect they didn’t lose their bread when they solved your problem! We have a major incentive issue in the current arrangement!
There’s a market for lemons problem, similar to the used car market, where neither the therapist nor customer can detect all hidden problems, pitfalls, etc., ahead of time. And once you do spend enough time to actually form a reasonable estimate there’s no takebacks possible.
So all the actually quality therapists will have no availability and all the lower quality therapists will almost by definition be associated with those with availability.
Edit: Game Theory suggests that you should never engage in therapy or at least never with someone with available time, at least until someone invents the certified pre-owned market.
That would be prediction-based medicine. It works in theory, it’s just that someone would need to put it into practice.
[skipping several caveats and simplifying assumptions]
Now, when you get those 200 resumes, and hire the best person from the top 200, does that mean you’re hiring the top 0.5%?
“Maybe.”
No. You’re not. Think about what happens to the other 199 that you didn’t hire.
They go look for another job.
That means, in this horribly simplified universe, that the entire world could consist of 1,000,000 programmers, of whom the worst 199 keep applying for every job and never getting them, but the best 999,801 always get jobs as soon as they apply for one. So every time a job is listed the 199 losers apply, as usual, and one guy from the pool of 999,801 applies, and he gets the job, of course, because he’s the best, and now, in this contrived example, every employer thinks they’re getting the top 0.5% when they’re actually getting the top 99.9801%.
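The contrived universe above can be simulated directly (the pool sizes come from the text; the skill numbers are arbitrary labels):

```python
import random

random.seed(0)

# 1,000,000 programmers: the worst 199 apply to every job,
# plus one applicant drawn from the remaining 999,801.
weak_pool = range(1, 200)             # skills 1..199, always applying
strong_pool = range(200, 1_000_001)   # skills 200..1,000,000

def hire_once():
    applicants = list(weak_pool) + [random.choice(strong_pool)]
    return max(applicants)            # employer hires the best applicant

hires = [hire_once() for _ in range(1000)]

# Every hire comes from the strong pool, even though each employer saw
# 200 resumes and concluded they skimmed the top 0.5%.
assert all(h >= 200 for h in hires)
```

The employer’s sample of 200 resumes is not a random sample of the whole population, which is the entire point of the example.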
Thank you for practicing the rationalist virtue of scholarship, Christian. I was not aware of this paper.
You will have to excuse me for practicing rationalist vice and neither believing nor investigating this paper further. I have been so traumatized by the repeated failures of non-hard science that I reject most social science papers as causally confounded p-hacked noise unless they already confirm my priors or are branded correct by somebody I trust.
As far as this particular paper goes I just searched for one on the point in Google Scholar.
I’m not sure what you believe about Spencer Greenberg but he has two interviews with people who believe that therapist skills (where empathy is one of the academic findings) matter:
I internalized the Dodo verdict and concluded that the specific therapist or therapist style didn’t matter anyway. A therapist is just a human mirror. The answer was inside of you all along Miles
look at the underlying random variable (‘surprisal’) −log p(X=xi), of which entropy is the expectation.
Level 3: Coding functions
Shannon’s source coding theorem says the entropy of a source X is the expected number of bits per sample under an optimal encoding of samples of X.
Related quantities like mutual information, relative entropy, cross entropy, etc. can also be given coding interpretations.
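A minimal illustration of the surprisal and coding levels, with an arbitrary example distribution:

```python
import math

# The surprisal -log2 p(X = x) is a random variable;
# entropy is its expectation.
dist = {"a": 0.5, "b": 0.25, "c": 0.25}

def surprisal(p):
    return -math.log2(p)

entropy = sum(p * surprisal(p) for p in dist.values())

# Coding level: entropy = 1.5 bits here, matching the average length of
# an optimal prefix code such as a -> 0, b -> 10, c -> 11:
code_lengths = {"a": 1, "b": 2, "c": 2}
avg_code_length = sum(dist[s] * code_lengths[s] for s in dist)
assert abs(entropy - avg_code_length) < 1e-12
```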
Level 4: Epsilon machine (transducer)
On level 3 we saw that entropy/information actually reflects various forms of (constrained) optimal coding. It talks about the codes but it does not talk about how these codes are implemented.
This is the level of Epsilon machines, more precisely epsilon transducers. It says not just what the coding function is but how it is (optimally) implemented mechanically.
[This is joint thinking with Sam Eisenstat. Also thanks to Caspar Oesterheld for his thoughtful comments. Thanks to Steve Byrnes for pushing me to write this out.]
The Hyena problem in long-term planning
Logical induction is a nice framework for thinking about bounded reasoning. Very soon after the discovery of logical induction, people tried to make logical inductor decision makers work. This is difficult; one of the obstacles is:
The BRIA framework is only defined for single-step / length-1 horizon decisions.
What about the much more difficult question of long-term planning? I’m going to assume you are familiar with the BRIA framework.
Setup: we have a series of decisions D_i, and rewards R_i, i=1,2,3… where rewards R_i can depend on arbitrary past decisions.
We again think of an auction market M of individual decisionmakers/ bidders.
There are a couple design choices to make here:
do bidders directly bid on an action A in a decision D_i, or do bidders bid on rewards on certain days?
total observability or partial observability?
can bidders bid conditional on observations / past actions, or not?
when can the auction be held? i.e. when is an action / reward signal definitively sold?
To do good long-term planning it should be possible for one of the bidders or a group of bidders to commit to a long-term plan, i.e. a sequence of actions. They don’t want to be outbid in the middle of their plan.
There are some problems with the auction framework: if bids for actions can’t be combined, then an outside bidder can screw up the whole plan by making a slightly higher bid for an essential part of the plan. This looks like ADHD.
How do we solve this? One way is to allow a bidder or group of bidders to bid on a whole sequence of actions for a single lump sum.
One issue is that we also have to determine how the reward gets awarded. For instance, the reward could be very delayed. This could be solved by allowing bids for a reward signal R_i on a certain day conditional on a series of actions.
There is now an important design choice left. When a bidder B owns a series of actions A=a_1,...,a_k (some of the actions in the future, some already in the past) and there is another bid X from another bidder C on the future actions:
is bidder B forced to sell their contract on A to C if the bid is high enough? [i.e. higher than the original bid]
Both versions seem problematic:
if they don’t have to, there is an Incumbency Advantage problem. An initially rich bidder can underbid for very long horizons and use the steady trickle of cash to prevent any other bidders from ever being able to underbid any actions.
Otherwise there is the Hyena problem.
The Hyena Problem
Imagine the following situation: on Day 1 the decisionmaker has a choice of actions. The highest expected value action is action a. If action a is taken, then on Day 2 a fair coin is flipped. On Day 3 the reward is paid out.
If the coin was heads, 15 reward is paid out.
If the coin was tails, 5 reward is paid out.
The expected value is therefore 10. This is higher (by assumption) than the other unnamed actions.
However if the decisionmaker is a long-horizon BRIA with forced sales there is a pathology.
A sensible bidder is willing to pay up to 10 utilons for the contracts on the day 3 reward conditional on action a.
However, with a forced sale mechanism on Day 2 a ‘Hyena bidder’ can come that will ‘attempt to steal the prey’.
The Hyena bidder bids >10 for the contract if the coin comes up heads on Day 2 but doesn’t bid anything for the contract if the coin comes up tails.
This is a problem, since the expected value of action a for the sensible bidder goes down, so the sensible bidder might no longer bid for the action that maximizes expected value for the BRIA. The Hyena bidder screws up the credit allocation.
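A back-of-the-envelope calculation of the credit misallocation (the Hyena’s purchase price of just over 10 comes from the setup above):

```python
# Payoffs from the Day 1 / Day 2 / Day 3 setup above.
p_heads = 0.5
reward_heads, reward_tails = 15, 5

# Without forced sales, a sensible bidder holding the contract on the
# Day 3 reward conditional on action a expects:
ev_no_hyena = p_heads * reward_heads + (1 - p_heads) * reward_tails
assert ev_no_hyena == 10.0  # so they are willing to pay up to 10

# With forced sales, a Hyena bidder buys the contract for just over 10
# whenever the coin lands heads; the sensible bidder keeps the contract
# only in the tails world.
hyena_price = 10.0  # just above the sensible bidder's original bid
ev_with_hyena = p_heads * hyena_price + (1 - p_heads) * reward_tails
assert ev_with_hyena == 7.5  # below 10: the bid on action a is distorted
```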
some thoughts:
if the sensible bidder is able to make bids conditional on the outcome of the coin flip, that prevents the Hyena bidder. This is a bit weird though, because it would mean that the sensible bidder must carry around lots of extraneous, unnecessary information instead of just caring about expected value.
perhaps this can be alleviated by having some sort of ‘neo-cortex’: a separate logical induction market that is incentivized to have accurate beliefs. This is difficult to get right: the prediction market needs to be incentivized to be accurate on beliefs that are actually action-relevant, not random beliefs—if the prediction market and the auction market are connected too tightly, you might run the risk of getting into the old problems of Logical Inductor Decision makers [they underexplore, since untaken actions are not observed].
Let X1,...,Xn be random variables distributed according to a probability distribution p on a sample space Ω.
Defn. A (weak) natural latent of X1,...,Xn is a random variable Λ such that
(i) Xi are independent conditional on Λ
(ii) [reconstructability] p(Λ=λ|X1,...,^Xi,...,Xn)=p(Λ=λ|X1,...,Xn) for all i=1,...,n
[This is not really reconstructability, more like a stability property. The information is contained in many parts of the system… I might also have written this down wrong]
Defn. A strong natural latent Λ additionally satisfies p(Λ|Xi)=p(Λ|X1,...,Xn)
Defn. A natural latent is noiseless if ?
H(Λ)=H(X1,...,Xn) ??
[Intuitively, Λ should contain no independent noise not accounted for by the Xi]
Causal states
Consider the equivalence relation on tuples (x1,...,xn) given by (x1,...,xn)∼(x′1,...,x′n) if for all i=1,...,n: p(Xi=xi|x1,...,^xi,...,xn)=p(Xi=xi|x′1,...,^x′i,...,x′n)
We call the set of equivalence classes Ω/∼ the set of causal states.
Pushing forward the distribution p on Ω along the quotient map Ω↠Ω/∼ gives a noiseless (strong?) natural latent Λ.
Remark. Note that Wentworth’s natural latents are generalizations of Crutchfield causal states (and epsilon machines).
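As a minimal concrete instance of the causal-state idea, here is a sketch of Crutchfield’s forward construction for a toy binary Markov chain (the transition probabilities are an arbitrary example of mine, and this is the stochastic-process version rather than the X1,...,Xn version above): group pasts into equivalence classes by the conditional distribution they induce over the future.

```python
from collections import defaultdict
from itertools import product

# Transition probabilities of an example binary Markov chain:
# after a 0 the next symbol is fair; after a 1 the next symbol is always 0.
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 1.0, 1: 0.0}}

def predictive_dist(past):
    """Conditional distribution of the next symbol given the past.
    For a Markov chain only the last symbol matters."""
    return tuple(sorted(P[past[-1]].items()))

# Group all length-2 pasts into causal states: pasts are equivalent
# iff they predict the same future.
causal_states = defaultdict(list)
for past in product([0, 1], repeat=2):
    causal_states[predictive_dist(past)].append(past)

# Two causal states: pasts ending in 0 vs pasts ending in 1.
assert len(causal_states) == 2
```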
Minimality and maximality
Let X1,...,Xn be random variables as before and let Λ be a weak latent.
Minimality Theorem for Natural Latents. Given any other variable N such that the Xi are independent conditional on N we have the following DAG
Λ→N→{Xi}i
i.e. p(X1,...,Xn|N)=p(X1,...,Xn|N,Λ)
[OR IS IT for all i ?]
Maximality Theorem for Natural Latents. Given any other variable M such that the reconstructability property holds with regard to the Xi, we have
M→Λ→{Xi}i
Some other things:
Weak latents are defined up to isomorphism?
noiseless weak (strong?) latents are unique
The causal states as defined above will give the noiseless weak latents
Not all systems are easily abstractable. Consider a multivariate Gaussian distribution where the covariance matrix doesn’t have a low-rank part. The covariance matrix is symmetric positive-definite—after diagonalization the eigenvalues should be roughly equal.
Consider a sequence of buckets Bi,i=1,...,n and you put messages mj in two buckets mj→B2j,B2j+1. In this case the minimal latent has to remember all the messages—so the latent is large. On the other hand, we can quotient B2i,B2i+1↦B′i: all variables become independent.
EDIT: Sam Eisenstat pointed out to me that this doesn’t work. The construction actually won’t satisfy the ‘stability criterion’.
The noiseless natural latent might not always exist. Indeed, consider a generic distribution p on 2^N. In this case, the causal state construction will just yield a copy of 2^N, and the reconstructability/stability criterion is not satisfied.
Inspired by this Shalizi paper defining local causal states. The idea is so simple and elegant I’m surprised I had never seen it before.
Basically, starting with a factored probability distribution Xt=(X1(t),...,Xkt(t)) over a dynamical DAG Dt, we can use Crutchfield’s causal state construction locally to construct a derived causal model X′t factored over the dynamical DAG. Here X′t is defined by considering the past and forward lightcones L−(Xt), L+(Xt) of Xt: all those points/variables Yt2 which influence Xt, respectively are influenced by Xt (in a causal, interventional sense). Now define the equivalence relation at∼bt on realizations of L−(Xt) (which includes Xt by definition)[1] whenever the conditional probability distributions p(L+(Xt)|at)=p(L+(Xt)|bt) on the future lightcones are equal.
These factored probability distributions over dynamical DAGs are called ‘fields’ by physicists. Given any field F(x,t) we define a derived local causal state field ϵ(F(x,t)) in the above way. Woah!
Some thoughts and questions
this depends on the choice of causal factorizations. Sometimes these causal factorizations are given but in full generality one probably has to consider all factorizations simultaneously, each giving a different local state presentation!
What is the Factored sets angle here?
In particular, given a stochastic process ...→X−1→X0→X1→... the reverse XBackToTheFuturet:=X−t can give a wildly different local causal field as minimal predictors and retrodictors can be different. This can be exhibited by the random insertion process, see this paper.
Let a stochastic process Xt be given and define the (forward) causal states St as usual. The key ‘stochastic complexity’ quantity is defined as the mutual information I(St;X≤0) of the causal states and the past. We may generalize this definition, replacing the past with the local past lightcone to give a local stochastic complexity.
Under the assumption that the stochastic process is ergodic, the causal states form an irreducible Hidden Markov Model and the stochastic complexity can be calculated as the entropy of the stationary distribution.
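For an ergodic toy example this is a short computation (the 2-state transition matrix is an illustrative assumption of mine, not from Shalizi’s paper):

```python
import math

# Transition matrix of a toy 2-state epsilon machine (rows sum to 1).
T = [[0.9, 0.1],
     [0.5, 0.5]]

# Power-iterate to the stationary distribution pi, with pi T = pi.
pi = [0.5, 0.5]
for _ in range(1000):
    pi = [sum(pi[i] * T[i][j] for i in range(2)) for j in range(2)]

# Stochastic complexity = entropy of the stationary distribution
# over causal states; here pi = (5/6, 1/6), giving about 0.65 bits.
stochastic_complexity = -sum(p * math.log2(p) for p in pi if p > 0)
```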
!!Importantly, the stochastic complexity is different from the ‘excess entropy’: the mutual information of the past (lightcone) and the future (lightcone).
This gives potentially a lot of very meaningful quantities to compute. These are I think related to correlation functions but contain more information in general.
Note that the local causal state construction is always possible—it works in full generality. Really quite incredible!
How are local causal fields related to Wentworth’s latent natural abstractions?
Shalizi conjectures that the local causal states form a Markov field—which would mean, by Hammersley-Clifford, that we could describe the system as a Gibbs distribution! This would prove an equivalence between the Gibbs/MaxEnt/Pitman-Koopman-Darmois theory and the conditional independence story of Natural Abstraction, roughly similar to John’s early approaches.
I am not sure what the status of the conjecture is at this moment. It seems rather remarkable that such a basic fact, if true, cannot be proven. I haven’t thought about it much but perhaps it is false in a subtle way.
A Markov field factorizes over an undirected graph which seems strictly less general than a directed graph. I’m confused about this.
Given a symmetry group G acting on the original causal model /field F(x,t)=(p,D) the action will descend to an action G↷ϵ(F)(x,t) on the derived local causal state field.
A stationary process X(t) is exactly one with a translation action by Z. This underlies the original epsilon machine construction of Crutchfield, namely the fact that the causal states don’t just form a set (+probability distribution) but are endowed with a monoid structure → Hidden Markov Model.
[Intuitively, Λ should contain no independent noise not accounted for by the Xi]
That condition doesn’t work, but here’s a few alternatives which do (you can pick any one of them):
Λ=(x↦P[X=x|Λ]) - most conceptually confusing at first, but most powerful/useful once you’re used to it; it’s using the trick from Minimal Map.
Require that Λ be a deterministic function of X, not just any latent variable.
H(Λ)=I(X,Λ)
(The latter two are always equivalent for any two variables X,Λ and are somewhat stronger than we need here, but they’re both equivalent to the first once we’ve already asserted the other natural latent conditions.)
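The equivalence of the latter two conditions can be checked numerically on a toy joint distribution (the parity example is my illustrative choice, not from the comment):

```python
import math

def H(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy joint distribution over (X, Λ) where Λ = X mod 2 is a
# deterministic function of X.
joint = {(0, 0): 0.25, (1, 1): 0.25, (2, 0): 0.25, (3, 1): 0.25}

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        out[outcome[idx]] = out.get(outcome[idx], 0.0) + p
    return out

H_X = H(marginal(joint, 0))    # 2 bits: X uniform on {0,1,2,3}
H_L = H(marginal(joint, 1))    # 1 bit: Λ uniform on {0,1}
H_XL = H(joint)                # 2 bits, since Λ is determined by X
mutual_info = H_X + H_L - H_XL  # I(X; Λ) = 1 bit

# For a deterministic Λ, H(Λ) = I(X; Λ), matching the condition above.
assert abs(mutual_info - H_L) < 1e-9
```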
It is plausible that much of the cooperation we see in the real world is actually approximate Lobian cooperation, rather than purely given by traditional game-theoretic incentives. Lobian cooperation is far stronger in cases where the players resemble each other and/or have access to one another’s blueprint. This is arguably only very approximately the case between different humans, but it is much closer to being the case when we consider different versions of the same human through time, as well as subminds of that human.
All these considerations could potentially make it possible for future AI societies to exhibit vastly more cooperative behaviour.
Artificial minds also have several features that make them intrinsically likely to engage in Lobian cooperation, e.g. their easy copyability (which might lead to giant ‘spur’ clans). Artificial minds can be copied, their source code and weights may be shared, and the widespread use of simulations may become feasible. All of these point towards the importance of Lobian cooperation and Open-Source Game Theory more generally.
[With benefits also come drawbacks like the increased capacity for surveillance and torture. Hopefully, future societies may develop sophisticated norms and technology to avoid these outcomes. ]
I definitely agree that cooperation can be way better in the future, and Lobian cooperation, especially with Payor’s Lemma, might well be enough to get coordination across the entire solar system.
That stated, it’s much more tricky to expand this strategy to galactic scales, assuming our physical models aren’t wrong, because light speed starts to become a very taut constraint on a galaxy-wide brain, and acausal strategies will require a lot of compute to simulate entire civilizations. Even worse, they depend on some common structure of values, and I suspect that’s impossible in the fully general case.
Does internal bargaining and geometric rationality explain ADHD & OCD?
Self- Rituals as Schelling loci for Self-control and OCD
Why do people engage in non-social rituals (‘self-rituals’)? These are very common and can even become pathological (OCD).
High self-control people seem to more often have OCD-like symptoms.
One way to think about self-control is as a form of internal bargaining between internal subagents. From this perspective, self-control and time-discounting can be seen as resources. Do humans engage in self-rituals to create Schelling points for internally bargaining agents?
Why are exploration behaviour and lack of self-control linked? As an example, ADHD people often lack self-control and conscientiousness. At the same time, they explore more. These behaviours are often linked, but it’s not clear why.
It’s perfectly possible to explore, deliberately. Yet, it seems that the best explorers are highly correlated with lacking self-control. How could that be?
There is a boring social reason: doing a lot of exploration often means shirking social obligations. Self-deceiving about your true desires might be the only way to avoid social repercussions. This probably explains a lot of ADHD—but not necessarily all.
If self-control = internal bargaining, then it would follow that a lack of self-control is a failure of internal bargaining. Note that by subagents I mean both subagents in space *and* in time. From this perspective, an agent through time could alternatively be seen as a series of subagents of a 4d-worm superagent.
This explains many of the salient features of ADHD:
[Claude, list salient features and explain how these are explained by the above]
Impulsivity: A failure of internal subagents to reach an agreement intertemporaly, leading to actions driven by immediate desires.
Difficulty with task initiation and completion: The inability of internal subagents to negotiate and commit to a course of action.
Distractibility: A failure to prioritize the allocation of self-control resources to the task at hand.
Hyperfocus: A temporary alignment of internal subagents’ interests, leading to intense focus on engaging activities.
Disorganization: A failure to establish and adhere to a coherent set of priorities across different subagents.
Emotional dysregulation: A failure of internal bargaining to modulate emotional reactions.
Arithmetic vs Geometric Exploration. Entropic drift towards geometric rationality
[this section obviously owes a large intellectual debt to Garrabrant’s geometric rationality sequence]
Sometimes people like to say that geometric exploration = Kelly betting = maximizing the geometric mean is ‘better’ than maximizing the arithmetic mean.
The problem is that just maximizing expected value, rather than geometric expected value, does in fact maximize the total expected value, even for repeated games (duh!). So it’s not really clear in what sense geometric maximization is naively better.
Instead, Garrabrant suggests that it is better to think of geometric maximizing as a part of a broader framework of geometric rationality wherein Kelly betting, Nash bargaining, geometric expectation are all forms of cooperation between various kinds of subagents.
If self-control is a form of successful internal bargaining then it is best to think of it as a resource. Maximizing the arithmetic mean is better, but it requires subagents to cooperate & trust each other much more. Arithmetic maximization means that the variance of outcomes between future copies of the agent is much larger than under geometric maximization. That means that subagents should be more willing to take a loss in one world to make up for it in another.
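The arithmetic-vs-geometric tension is easy to see numerically. Below is a minimal sketch (assuming a repeated even-money bet with win probability 0.6; all values are illustrative): the all-in bettor maximizes the arithmetic mean, the Kelly bettor the geometric mean.

```python
import numpy as np

rng = np.random.default_rng(0)
p, rounds, paths = 0.6, 10, 100_000
wins = rng.random((paths, rounds)) < p  # iid win/loss outcomes per path

# All-in bettor: maximizes arithmetic expected value, doubles or busts each round.
all_in = np.where(wins.all(axis=1), 2.0 ** rounds, 0.0)

# Kelly bettor: stakes the fraction f = 2p - 1 each round, maximizing E[log wealth].
f = 2 * p - 1
kelly = np.prod(np.where(wins, 1 + f, 1 - f), axis=1)

print(all_in.mean(), kelly.mean())          # all-in has the higher mean wealth...
print(np.median(all_in), np.median(kelly))  # ...but its median (typical) wealth is 0
```

The all-in strategy wins on expected value, but almost every one of its futures is bankrupt; Kelly gives up mean wealth for far lower variance across future copies of the agent, which is exactly the cooperation-across-copies tradeoff described above.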
It is hard to be coherent
It is hard to be a coherent agent. Coherence and self-control are resources. Note that having low time-discounting is also a form of coherence: it means the subagents of the 4d-worm superagent are cooperating.
Having subagents that are more similar to one another means it will be easier for them to cooperate. Conversely, the less they are alike the harder it is to cooperate and to be coherent.
Over time, this means there is a selective force against an arithmetic mean maximizing superagent.
Moreover, if the environment is highly varied (for instance when the agent selects the environment to be more variable because it is exploring) the outcomes for subagents are more varied, so there is more entropic pressure on the superagent.
This means in particular that we would expect superagents that explore more (ADHDers) to be less coherent over time (higher time-discounting) and space (more internal conflict, etc.).
I feel like the whole "subagent" framework suffers from the homunculus problem: we fail to explain behavior using the abstraction of a coherent agent, so we move to the abstraction of multiple coherent agents, and while it can be useful, I don't think it displays actual mechanistic truth about minds.
When I plan something and then fail to execute the plan, it's mostly not a "failure to bargain". It's just that when I plan something I usually have the good consequences of the plan in my imagination, these consequences make me excited, and then I start executing the plan and get hit by multiple unpleasant details of reality. Coherent structure emerges from multiple not-really-agentic pieces.
You are taking subagents too literally here.
If you prefer, take another word: shard, fragment, component, context-dependent action impulse generator, etc.
When I read the word "bargaining" I assume that we are talking about entities that have preferences and an action set, have beliefs about the relations between actions and preferences, and exchange information (modulo acausal interaction) with other entities of the same composition. Like, Kelly betting is good because it is equivalent to Nash bargaining between versions of yourself inside different outcomes, and this is good because we assume that you in different outcomes are, actually, agents with all the attributes of an agentic system. Saying "systems consist of parts, these parts interact and sometimes the result is a horrific incoherent mess" is true, but doesn't convey much useful information.
Sometimes you can say something isn’t quite right but you can’t provide an alternative.
rejecting the null hypothesis
give a (partial) countermodel that shows that certain proof methods can’t prove $A$ without proving $\neg A$.
Looking at Scott Garrabrant’s game of life board—it’s not white noise but I can’t say why
Difference between ‘generation of ideas’ and ‘filtration of ideas’ - i.e. babble and prune.
ScottG: Bayesian learning assumes we are in a babble-rich environment and only does pruning.
ScottG: Bayesism doesn’t say ‘this thing is wrong’ it says ‘this other thing is better’.
Alexander: Is nonrealizability the Bayesian way of saying: not enough babble?
Scott G: mwah, that suggests the thing is ‘generate more babble’ when the real solution is ‘factor out your model in pieces and see where the culprit is’.
ergo, locality is a virtue
Alexander: locality just means conditional independence? Or does it mean something more?
ScottG: loss of locality means there is existential risk
Alexander: reminds me of Vanessa’s story:
trapped environments aren’t in general learnable. This is a problem since real life is trapped. A single human life is filled to the brim with irreversible transitions & decisions. Humanity as a whole is much more robust because of locality: it is effectively playing the human life game lots of times in parallel. The knowledge gained is then redistributed through culture and genes. This breaks down when locality breaks down → existential risk.
Reasonable interpretations of Recursive Self Improvement are either trivial, tautological or false?
(Trivial) AIs will do RSI by using more hardware—trivial form of RSI
(Tautological) Humans engage in a form of (R)SI when they engage in meta-cognition; i.e. therapy is plausibly a form of metacognition. Meta-cognition is plausibly one of the remaining hallmarks of true general intelligence. See Vanessa Kosoy's "Meta-Cognitive Agents". In this view, AGIs will naturally engage in meta-cognition because they're generally intelligent. They may (or may not) also engage in significantly more metacognition than humans, but this isn't qualitatively different from what the human cortical algorithm already engages in.
(False) It's plausible that in many domains learning algorithms are already near a physical optimum. Given a fixed prior and a data-set, the Bayesian posterior is in a precise formal sense the ideal update. In practice Bayesian updating is intractable, so we typically sample from the posterior using something like SGD. It is plausible that something like SGD is already close to the optimum for a given amount of compute.
SGD finds algorithms. Before the DL revolution, science studied such algorithms. Now, the algorithms are put to work on inference without so much as a second glance. With a sufficient abundance of general intelligence brought about by AGI, interpretability might get a lot out of studying the circuits SGD discovers. Once understood, the algorithms could be put to more efficient use, instead of remaining implicit in neural nets and being used for thinking together with all the noise that remains from the search.
I think most interpretations of RSI aren’t useful.
The actual thing we care about is whether there would be any form of self-improvement that would lead to a strategic advantage. Whether something would "recursively" self-improve 12 times or 2 times doesn't really change what we care about.
With respect to your 3 points.
1) could happen by using more hardware, but better optimization of current hardware / better architecture is the actually scary part (which could lead to the discovery of “new physics” that could enable an escape even if the sandbox was good enough for the model before a few iterations of the RSI).
2) I don’t think what you’re talking about in terms of meta-cognition is relevant to the main problem. Being able to look at your own hardware or source code is though.
3) Cf. what I said at the beginning. The actual “limit” is I believe much higher than the strategic advantage threshold.
In practice Bayesian updating is intractable, so we typically sample from the posterior using something like SGD. It is plausible that something like SGD is already close to the optimum for a given amount of compute.
I give this view ~20%: There’s so much more info in some datapoints (curvature, third derivative of the function, momentum, see also Empirical Bayes-like SGD, the entire past trajectory through the space) that seems so available and exploitable!
When they do (like in Vanessa's meta-MDPs) I think it's plausible that automated architecture search is simply an instantiation of the algorithm for general intelligence (see 2.)
I think the AI will improve (itself) via better hardware and algorithms, and it will be a slog. The AI will frequently need to do narrow tasks where the general algorithm is very inefficient.
Aumann agreement can fail for purely epistemic reasons because real-world minds do not do Bayesian updating. Bayesian updating is intractable, so realistic minds sample from the posterior. This is how e.g. gradient descent works and also how human minds work.
In this situation two minds can end up in two different basins with similar loss on the data, because of computational limitations.
These minds can have genuinely different expectations for generalization.
(Of course this does not contradict the statement of the theorem which is correct.)
Would like a notion of entropy for credal sets. Diffractor suggests the following:
let $C \subset \mathrm{Credal}(\Omega)$ be a credal set.
Then the entropy of C is defined as
$H_{\mathrm{Diffractor}}(C) = \sup_{p \in C} H(p)$
where H(p) denotes the usual Shannon entropy.
I don’t like this since it doesn’t satisfy the natural desiderata below.
Instead, I suggest the following. Let $\mathrm{me}_C \in C$ denote the (absolute) maximum entropy distribution, i.e. $H(\mathrm{me}_C) = \max_{p \in C} H(p)$, and let $H(C) = H_{\mathrm{new}}(C) = H(\mathrm{me}_C)$.
Desideratum 1: $H(\{p\}) = H(p)$
Desideratum 2: Let $A \subset \Omega$ and consider $C_A := \mathrm{ConvexHull}(\{\delta_a \mid a \in A\})$.
Then $H(A) := H(C_A) = \log|A|$.
Remark. Check that these desiderata are compatible where they overlap.
It's easy to check that the above 'maxEnt' suggestion satisfies these desiderata.
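A quick numerical sanity check of Desideratum 2 (a sketch; the 3-element set and sample count are arbitrary): every element of $C_A = \mathrm{ConvexHull}(\{\delta_a \mid a \in A\})$ is a distribution supported on $A$, and none exceeds the entropy of the uniform distribution on $A$, which is $\log|A|$.

```python
import numpy as np

rng = np.random.default_rng(0)
A_size = 3  # credal set C_A = all distributions supported on a 3-element set A

# Mixtures of the point masses delta_a are exactly points of the simplex on A.
candidates = rng.dirichlet(np.ones(A_size), size=50_000)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

best = max(entropy(q) for q in candidates)
uniform_H = entropy(np.full(A_size, 1 / A_size))  # = log|A|

# No mixture beats the uniform distribution, so H(C_A) = log|A|.
print(best, uniform_H, np.log(A_size))
```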
Entropy operationally
Entropy is really about stochastic processes more than distributions. Given a distribution $p$ there is an associated stochastic process $(X_n)_{n \in \mathbb{N}}$ where each $X_i$ is sampled i.i.d. from $p$. The entropy is really about the expected code length of encoding samples from this process.
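For a single distribution this operational reading can be checked directly: the empirical average ideal codelength $-\frac{1}{n}\sum_i \log_2 p(X_i)$ of an i.i.d. sample converges to $H(p)$. A small sketch with an arbitrary example distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])  # example distribution, chosen arbitrarily

H = -np.sum(p * np.log2(p))  # Shannon entropy in bits (1.75 for this p)

# Encode each symbol x with an ideal code of length -log2 p(x);
# the empirical average codelength approaches H(p).
sample = rng.choice(len(p), size=200_000, p=p)
avg_len = -np.log2(p[sample]).mean()
print(H, avg_len)  # the two agree up to sampling error
```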
In the credal set case there are two processes that can be naturally associated with a credal set $C$. Basically: do you pick a $p \in C$ at the start and then sample according to $p$ (this is what Diffractor's entropy refers to), or do you allow the environment to 'choose' a different $q \in C$ each round?
In the latter case, you need to pick an encoding that does least badly.
[give more details. check that this makes sense!]
Properties of credal maxEnt entropy
We may now investigate properties of the entropy measure.
$H(A \vee B) = H(A) + H(B) - H(A \wedge B)$
$H(A^c) = \log|A^c| = \log(|\Omega| - |A|)$
Remark. This is different from the following measure!
$H(A|\Omega) := \log(|\Omega|/|A|)$
Remark. If we think of $H(A) = H(P(x \in \Omega \mid A))$ as the number of bits we receive when we know that $A$ holds and we sample from $\Omega$ uniformly, then $H(A|\Omega) = H(x \in A \mid x \in \Omega)$ denotes the number of bits we receive when we find out that $x \in A$ when we already knew $x \in \Omega$.
What about $H(A \wedge B)$?
$H(A \wedge B) = H(P(x \in A \wedge B \mid \Omega)) = \ldots?$
we want to do a presumption of independence—Möbius / Euler characteristic expansion
Roko’s basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.
Why Roko’s basilisk probably doesn’t work for simulation fidelity reasons:
Roko's basilisk threatens to simulate and torture you in the future if you don't comply. Simulation cycles cost resources. Instead of following through on torturing our would-be Cthulhu worshipper, they could spend those resources on something else.
But wait can’t it use acausal magic to precommit to follow through? No.
Acausal arguments only work in situations where agents can simulate each other with high fidelity. Roko's basilisk can simulate the human but not the other way around! The human's simulation of Roko's basilisk is very low fidelity—in particular, Roko's basilisk is never confused about whether or not it is being simulated by a human—it knows for a fact that the human is not able to simulate it.
Acausal arguments only work in situations where agents can simulate each other with high fidelity.
If the agents follow simple principles, it’s simple to simulate those principles with high fidelity, without simulating each other in all detail. The obvious guide to the principles that enable acausal coordination is common knowledge of each other, which could be turned into a shared agent that adjudicates a bargain on their behalf.
I have always taken Roko’s Basilisk to be the threat that the future intelligence will torture you, not a simulation, for not having devoted yourself to creating it.
All concepts can be learnt. All things worth knowing may be grasped. Eventually.
All can be understood—given enough time and effort.
For a Turing-complete organism, there is no qualitative gap between knowledge and ignorance.
No qualitative gap but one. The true qualitative difference: quantity.
Often we simply miss a piece of data. The gap is too large—we jump and never reach the other side. A friendly hominid who has trodden the path before can share their journey. Once we know the road, there is no mystery. Only effort and time. Some hominids choose not to share their journey. We keep a special name for these singular hominids: genius.
Abnormalised sampling? Probability theory talks about sampling for probability distributions, i.e. normalized measures. However, non-normalized measures abound: weighted automata, infra-stuff, uniform priors on noncompact spaces, wealth in logical-inductor esque math, quantum stuff?? etc.
Most of probability theory's constructions go through for arbitrary measures; the normalization assumption isn't needed. Except, crucially, for sampling.
What does it even mean to sample from a non-normalized measure? What is unnormalized abnormal sampling?
I don’t know.
Infra-sampling has an interpretation of sampling from a distribution made by a demonic choice. I don’t have good interpretations for other unnormalized measures.
Concrete question: is there a law of large numbers for unnormalized measures?
Let $f$ be a measurable function and $m$ a measure. Then the expectation value is defined as $\mathbb{E}_m(f) = \int f \, dm$. A law of large numbers for unnormalized measures would have to say something about repeated abnormal sampling.
The morphogenetic SLT story says that during training the Bayesian posterior concentrates around a series of subspaces $W_0^{(1)} \rightsquigarrow \dots \rightsquigarrow W_0^{(n)}$ with RLCTs $\lambda_1 < \dots < \lambda_n$ and losses $L_1 = L(w_1), \dots, L_n = L(w_n)$, $w_i \in W_0^{(i)}$. As the size $N$ of the data sample is scaled, the Bayesian posterior makes transitions $W_0^{(i)} \rightsquigarrow W_0^{(i+1)}$, trading off higher complexity (higher $\lambda_{i+1} > \lambda_i$) for better accuracy (lower loss $L_{i+1} < L_i$).
This is the radical new framework of SLT: phase transitions happen in pure Bayesian learning as the data size is scaled.
N.B. The phase transition story actually needs a version of SLT for the nonrealizable case despite most sources focusing solely on the realizable case! The nonrealizable case makes everything more complicated and the formulas from the realizable case have to be altered.
We think of the local RLCT $\lambda_w$ at a parameter $w$ as a measure of its inherent complexity. Side-stepping the subtleties of this point of view, let us take a look at Watanabe's formula for the Bayesian generalization error:
$G_N(W) = L_N(w_0) + \frac{\lambda}{N} + o\left(\frac{1}{N}\right) \approx L(w_0) + \frac{\lambda}{N} + o\left(\frac{1}{N}\right)$
where $W$ is a neighborhood of the local minimum $w_0$ and $\lambda$ is its local RLCT. In our case $W = W_0^{(i)}$.
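The transition story can be illustrated with a toy free-energy computation, using the leading-order asymptotic $F_N \approx N L_i + \lambda_i \log N$ for each candidate region (the losses and RLCTs below are invented for illustration):

```python
import numpy as np

# Two candidate regions: (loss, RLCT). Region 1 is simpler but less accurate.
L1, lam1 = 0.10, 1.0
L2, lam2 = 0.05, 3.0

def free_energy(N, L, lam):
    # Leading-order asymptotic free energy: F_N ≈ N*L + λ*log N
    return N * L + lam * np.log(N)

Ns = np.arange(2, 2000)
prefer_complex = free_energy(Ns, L2, lam2) < free_energy(Ns, L1, lam1)

print(prefer_complex[0], prefer_complex[-1])  # simple region wins early, complex late
print(Ns[prefer_complex.argmax()])            # approximate transition point in N
```

At small $N$ the lower complexity penalty $\lambda_1 \log N$ dominates; as $N$ grows the accuracy term $N L_i$ takes over and the posterior transitions to the more complex, more accurate region.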
--EH I wanted to say something here but don’t think it makes sense on closer inspection
I’ve heard this alignment plan that is a variation of ‘simulate top alignment researchers’ with an LLM. Usually the poor alignment researcher in question is Paul.
This strikes me as deeply unserious and I am confused why it is having so much traction.
That AI-assisted alignment is coming (indeed, is already here!) is undeniable. But even somewhat accurately simulating a human from text data is a crazy sci-fi ability, probably not even physically possible. It seems to ascribe nearly magical abilities to LLMs.
Predicting a partially observable process is fundamentally hard. Even in very simple cases this fails: there is a generative (partially observable) model with just two states (the simple nonunifilar source) that needs an infinity of states to predict optimally. In more generic cases the expectation is that this is far worse.
Errors compound over time (or continuation length). Even a tiny amount of noise would throw off the simulation.
Okay, maybe people just mean that GPT-N will kinda know approximately what Paul would be looking at. I think this is plausible in very broad brush strokes, but it seems misleading to call this 'simulation'.
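The prediction-hardness point can be made concrete with a toy nonunifilar hidden Markov model (the labeled transition matrices below are arbitrary illustrative choices, not the canonical simple nonunifilar source): the number of distinct belief states an optimal predictor must track keeps growing with the observation horizon.

```python
import numpy as np
from itertools import product

# Labeled transition matrices T[s][i, j] = P(next state j, emit symbol s | state i).
# A 2-state nonunifilar HMM with arbitrary illustrative probabilities.
T = {
    0: np.array([[0.5, 0.0], [0.0, 0.2]]),
    1: np.array([[0.0, 0.5], [0.3, 0.5]]),
}

def beliefs_up_to(depth):
    """Collect all distinct (rounded) belief states reachable by observation
    sequences of length <= depth, starting from the uniform belief."""
    seen = set()
    for d in range(1, depth + 1):
        for seq in product((0, 1), repeat=d):
            b = np.array([0.5, 0.5])
            ok = True
            for s in seq:
                b = b @ T[s]
                if b.sum() < 1e-12:  # observation sequence has probability ~0
                    ok = False
                    break
                b = b / b.sum()
            if ok:
                seen.add(tuple(b.round(8)))
    return seen

# The number of distinct predictive (belief) states keeps growing with depth:
print(len(beliefs_up_to(3)), len(beliefs_up_to(6)), len(beliefs_up_to(9)))
```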
[Edit 15/05/2024: I currently think that both the forward- and backward-chaining paradigms are missing something important. Instead, there is something like 'side-chaining' or 'wide-chaining' where you investigate how things are related forwardly, backwardly and sideways to make use of synergistic information.]
Optimal Forward-chaining versus backward-chaining.
In general, this is going to depend on the domain. In environments for which we have many expert samples and many existing techniques, backward-chaining is key (i.e. deploying resources & applying best practices in business & industrial contexts).
In open-ended environments such as those arising in science, especially pre-paradigmatic fields, backward-chaining and explicit plans break down quickly.
Incremental vs Cumulative
Incremental: 90% forward chaining 10% backward chaining from an overall goal.
Cumulative: predominantly forward chaining (~60%) with a moderate amount of backward chaining over medium lengths (30%) and only a small amount of backward chaining (10%) over long lengths.
Thick: aggregate many noisy sources to make a sequential series of actions in mildly related environments, model-free RL
cardinal sins: failure of prioritization / not throwing away enough information, nerdsnipes, insufficient aggregation, trusting too much in any particular model, indecisiveness, overfitting on noise, ignoring the consensus of experts / social reality
default of the ancestral environment
CEOs, generals, doctors, economists, police detectives in the real world, traders
Thin: precise, systematic analysis, preferably in repeated & controlled experiments to obtain cumulative deep & modularized knowledge, model-based RL
cardinal sins: ignoring clues, not going deep enough, aggregating away the signal, prematurely discarding models that don't naively fit the evidence, not trusting formal models enough / resorting to intuition or rules of thumb, following consensus / building on social instead of physical reality
only possible in highly developed societies with a place for cognitive specialists.
mathematicians, software engineers, engineers, historians, police detectives in fiction, quants
An Attempted Derivation of the Lindy Effect
Wikipedia:
The Lindy effect (also known as Lindy’s Law[1]) is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.
Laplace's Rule of Succession
What is the probability that the Sun will rise tomorrow, given that it has risen every day for 5000 years?
Let $p$ denote the probability that the Sun will rise tomorrow. A priori we have no information on the value of $p$, so Laplace posits that by the principle of insufficient reason one should assume a uniform prior $p \sim \mathrm{Uniform}(0,1)$[1]
Assume now that we have observed n days, on each of which the Sun has risen.
Each event is a Bernoulli random variable $X_i$, which can be 1 (the Sun rises) or 0 (the Sun does not rise). Assume that the events are conditionally independent given $p$.
The likelihood of $n$ out of $n$ successes according to the hypothesis $p$ is $L(X_1 = 1, \dots, X_n = 1 \mid p) = p^n$. Now use Bayes' rule: the posterior density is proportional to $p^n$, and the posterior predictive probability of another sunrise is $P(X_{n+1} = 1 \mid X_1 = \dots = X_n = 1) = \int_0^1 p \cdot p^n \, dp \,/ \int_0^1 p^n \, dp = \frac{n+1}{n+2}$.
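The posterior predictive can be checked numerically: under the uniform prior the probability of another success after $n$ successes is $\int_0^1 p \cdot p^n \, dp \,/ \int_0^1 p^n \, dp = \frac{n+1}{n+2}$. A small sketch (with $n$ kept small so the integrals are well-conditioned):

```python
import numpy as np

n = 10  # number of observed successes

# Uniform prior on p; likelihood of n successes is p^n, so the posterior ∝ p^n.
p = np.linspace(0, 1, 100_001)
posterior = p ** n

# Posterior predictive P(X_{n+1}=1 | n successes) = ∫ p·p^n dp / ∫ p^n dp
predictive = np.sum(p * posterior) / np.sum(posterior)

print(predictive, (n + 1) / (n + 2))  # both ≈ 0.9167: Laplace's (n+1)/(n+2)
```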
I haven’t checked the derivation in detail, but the final result is correct. If you have a random family of geometric distributions, and the density around zero of the decay rates doesn’t go to zero, then the expected lifetime is infinite. All of the quantiles (e.g. median or 99%-ile) are still finite though, and do depend upon n in a reasonable way.
For singular models the Jeffreys prior is not well-behaved, for the simple fact that it will be zero at minima of the loss function. Does this mean the Jeffreys prior is only of interest for regular models? I beg to differ.
Usually the Jeffreys prior is derived as the parameterization-invariant prior. There is another way of thinking about it: as arising from an 'indistinguishability prior'.
The argument is delightfully simple: given two weights $w_1, w_2 \in W$, if they encode the same distribution $p(x|w_1) = p(x|w_2)$, our prior weights on them should intuitively be the same: $\phi(w_1) = \phi(w_2)$. Two weights encoding the same distribution means the model exhibits non-identifiability, making it non-regular (hence singular). However, regular models exhibit 'approximate non-identifiability'.
For a given dataset $D_N$ of size $N$ from the true distribution $q$, and errors $\epsilon_1, \epsilon_2$, we can have a whole set of weights $W_{N,\epsilon} \subset W$ where the probability that $p(x|w_1)$ does more than $\epsilon_1$ better on the loss on $D_N$ than $p(x|w_2)$ is less than $\epsilon_2$.
In other words, these are the sets of weights that are probably approximately indistinguishable. Intuitively, we should assign an (approximately) uniform prior on these approximately indistinguishable regions. This gives strong constraints on the possible prior.
The downside of this is that it requires us to know the true distribution $q$. Instead of checking whether $w_1, w_2$ are approximately indistinguishable when sampling from $q$, we can ask whether $w_2$ is approximately indistinguishable from $w_1$ when sampling from $w_2$. For regular models this also leads to the Jeffreys prior, see this paper.
However, the Jeffreys prior is just an approximation of this prior. We could also straightforwardly compute the exact prior, to obtain something that might work for singular models.
EDIT: Another approach to generalizing the Jeffreys prior might be by following an MDL optimal coding argument—see this paper.
You might reconstruct your sacred Jeffreys prior with a more refined notion of model identity, which incorporates derivatives (jets on the geometric/statistical side and more of the algorithm behind the model on the logical side).
I argued above that given two weights $w_1, w_2$ with (approximately) the same conditional distribution $p(x|y,w_1) \cong p(x|y,w_2)$, the 'natural' or 'canonical' prior should assign them equal prior weight: $\phi(w_1) = \phi(w_2)$. A more sophisticated version of this idea is used to argue for the Jeffreys prior as a canonical prior.
Some further thoughts:
Imposing this uniformity condition would actually contradict some versions of Occam's razor. Indeed, $w_1$ could be algorithmically much more complex (i.e. have much higher description length) than $w_2$, yet they might still have similar or the same predictions.
The difference between same-on-the-nose and merely similar may be very material. Two conditional probability distributions might be quite similar [a related issue here is that the KL-divergence is asymmetric, so similarity is a somewhat ill-defined concept], yet one intrinsically requires far more computational resources.
A very simple example is the uniform distribution $p_{\mathrm{uniform}}(x) = \frac{1}{N}$ and another distribution $p'(x)$ that is a small perturbation of the uniform distribution but whose exact probabilities $p'(x)$ have decimal expansions with very large description length (this can be produced by adding long random strings to the binary expansions).
[caution: CompMech propaganda incoming] More realistic examples do occur i.e. in finding optimal predictors of dynamical systems at the edge of chaos. See the section on ‘intrinsic computation of the period-doubling cascade’, p.27-28 of calculi of emergence for a classical example.
Asking for the prior $\phi$ to be uniform on weights $w_i$ that have equal/similar conditional distributions $p(x|y,w_i)$ seems very natural, but it doesn't specify how the prior should relate weights with *different* conditional distributions. Say we have two weights $w_1, w_2$ with very different conditional distributions, and let $W_i = \{w \in W \mid p(x|y,w) \cong p(x|y,w_i)\}$. How should we compare the prior weights $\phi(W_1), \phi(W_2)$? Suppose I double the number of $w \in W_1$: i.e. $W_1 \mapsto W_1'$, where we enlarge $W \mapsto W'$ such that $W_1'$ has double the volume of $W_1$ and everything else is fixed. Should we have $\phi(W_1) = \phi(W_1')$, or should the prior weight $\phi(W_1')$ be larger? In the former case the prior weight $\phi(w)$ is reweighted depending on how many $w'$ there are with similar conditional distributions; in the latter it isn't. (Note that this is related to, but distinct from, the parameterization-invariance condition of the Jeffreys prior.) I can see arguments for both:
We could want the condition that quotienting out by the relation $w_1 \sim w_2$ when $p(x|y,w_1) = p(x|y,w_2)$ does not affect the model (and thereby the prior) at all.
On the other hand, one could argue that the Solomonoff prior would not impose $\phi(W_1) = \phi(W_1')$: if one finds more programs that yield $p(x|y,w_1)$, maybe one should put a higher a priori credence on $p(x|y,w_1)$.
The RLCT $\lambda(w')$ of the new elements $w' \in W_1' - W_1$ could behave wildly differently from that of $w \in W_1$. This suggests that the above analysis is not at the right conceptual level, and one needs a more refined notion of model identity.
Your comment about a more refined type of model identity using jets sounds intriguing. Here is a related thought.
In the earlier discussion with Joar Skalse there was a lot of debate around whether prior simplicity (description length, or Kolmogorov complexity according to Joar) is actually captured by the RLCT. It is possible to create examples where the RLCT and the algorithmic complexity diverge.
I haven't had the chance to think about this very deeply, but my superficial impression is that the RLCT $\lambda(W_a)$ is best thought of as measuring a relative model complexity between $W_a$ and $W$, rather than an absolute measure of the complexity of $W$ or $W_a$.
(more thoughts about relations with MDL. too scattered, I’m going to post now)
I think there’s no such thing as parameters, just processes that produce better and better approximations to parameters, and the only “real” measures of complexity have to do with the invariants that determine the costs of those processes, which in statistical learning theory are primarily geometric (somewhat tautologically, since the process of approximation is essentially a process of probing the geometry of the governing potential near the parameter).
From that point of view trying to conflate parameters $w_1, w_2$ such that $p(x|w_1) \approx p(x|w_2)$ is naive, because $w_1, w_2$ aren't real, only processes that produce better approximations to them are real, and so the $\partial/\partial w$ derivatives of $p(x|w_1), p(x|w_2)$ which control such processes are deeply important, and those could be quite different despite $p(x|w_1)$ and $p(x|w_2)$ being quite similar.
So I view “local geometry matters” and “the real thing are processes approximating parameters, not parameters” as basically synonymous.
“The links between logic and games go back a long way. If one thinks of a debate as a kind of game, then Aristotle already made the connection; his writings about syllogism are closely intertwined with his study of the aims and rules of debating. Aristotle’s viewpoint survived into the common medieval name for logic: dialectics. In the mid twentieth century Charles Hamblin revived the link between dialogue and the rules of sound reasoning, soon after Paul Lorenzen had connected dialogue to constructive foundations of logic.” from the Stanford Encyclopedia of Philosophy on Logic and Games
Game Semantics
The usual presentation of the game semantics of logic: we have a particular debate / dialogue game associated to a proposition, played between a Proponent and an Opponent; the Proponent tries to prove the proposition while the Opponent tries to refute it.
A winning strategy of the Proponent corresponds to a proof of the proposition. A winning strategy of the Opponent corresponds to a proof of the negation of the proposition.
It is often assumed that either the Proponent has a winning strategy in A or the Opponent has one—a version of excluded middle. At this point our intuitionistic alarm bells should be ringing: we can't just deduce a proof of the negation from the absence of a proof of A. (Absence of evidence is not evidence of absence!)
We could have a situation where neither the Proponent nor the Opponent has a winning strategy! In other words, neither A nor ¬A is derivable.
Countermodels
One way to substantiate this is by giving an explicit countermodel C in which A (respectively ¬A) doesn't hold.
Game-theoretically, a countermodel C should correspond to some sort of strategy! It is like an "interrogation" / attack strategy that defeats all putative winning strategies. A 'defeating' strategy or 'scorched earth' strategy, if you'd like. A countermodel is an infinite strategy. Some work in this direction has already been done[1]. [2]
Dualities in Dialogue and Logic
This gives an additional symmetry in the system: a syntax-semantics duality distinct from the usual negation duality. In terms of the proof turnstile we have the quadruple
⊢A meaning A is provable
⊢¬A meaning ¬A is provable
⊣A meaning A is not provable because there is a countermodel C where A doesn’t hold—i.e. classically ¬A is satisfiable.
⊣¬A meaning ¬A is not provable because there is a countermodel C where ¬A doesn’t hold—i.e. classically A is satisfiable.
Obligationes, Positio, Dubitatio
In the medieval Scholastic tradition of logic there were two distinct types of logic games ("Obligationes"): one in which the objective was to defend a proposition against an adversary ("Positio"), the other in which the objective was to defend the doubtfulness of a proposition ("Dubitatio").[3]
Winning strategies in the former correspond to proofs, while winning (defeating!) strategies in the latter correspond to countermodels.
Destructive Criticism
If we think of argumentation theory / debate, a countermodel strategy is like "destructive criticism": it defeats attempts to buttress evidence for a claim but presents no viable alternative.
[Thanks to Matthias Georg Mayer for pointing me towards ambiguous counterfactuals]
Salary is a function of eXperience and Education
$S = aE + bX$
We have a candidate C with given salary, experience (X=5) and education (E=5).
Their current salary is given by
$S = a \cdot 5 + b \cdot 5$
We'd like to consider the counterfactual where they didn't have the education ($E=0$). How do we evaluate their salary in this counterfactual?
This is slightly ambiguous—there are two counterfactuals:
$E=0, X=5$ or $E=0, X=10$
In the second counterfactual we implicitly had an additional constraint $X + E = 10$, representing the assumption that the candidate would have spent their time either in education or working. Of course, in the real world they could also have frittered their time away playing video games.
One can imagine that there is an additional variable: do they live in a poor country or a rich country. In a poor country if you didn’t go to school you have to work. In a rich country you’d just waste it on playing video games or whatever. Informally, we feel in given situations one of the counterfactuals is more reasonable than the other.
Coarse-graining and Mixtures of Counterfactuals
We can also think of this from a renormalization / coarse-graining story. Suppose we have a (mix of) causal models coarse-graining a (mix of) causal models. At the bottom we have the (mix of? Ising models!) causal model of physics, i.e. in electromagnetism the Green functions give us the intervention responses to adding sources to the field.
A given counterfactual at the macrolevel can now have many different counterfactuals at the microlevel. This means we would actually get a probability distribution of likely counterfactuals at the top level: i.e. in 1⁄3 of the cases the candidate actually worked the 5 years they didn't go to school; in 2⁄3 of the cases the candidate just wasted them playing video games.
The outcome of the counterfactual $S_{E=0}$ is then not a single number but a distribution:
$S_{E=0} = 5b + 5Yb$
where $Y$ is a random variable with a Bernoulli distribution with bias 1⁄3.
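The mixture of counterfactuals can be sampled directly. A minimal sketch (the coefficient values are arbitrary illustrative choices; with probability 1⁄3 the candidate worked the extra 5 years, so the counterfactual salary is $b \cdot (5 + 5Y)$):

```python
import numpy as np

rng = np.random.default_rng(0)
b = 3.0  # illustrative salary return per year of experience
         # (the education coefficient a is irrelevant once E = 0)

# Counterfactual E = 0: with prob 1/3 the candidate worked instead (X = 10),
# with prob 2/3 they played video games (X = 5).
Y = rng.random(100_000) < 1 / 3      # Bernoulli(1/3): did they work the 5 years?
S_cf = b * (5 + 5 * Y)

print(S_cf.mean(), 5 * b + 5 * b / 3)  # empirical vs analytic mean
print(np.unique(S_cf))                 # a two-point distribution: {5b, 10b}
```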
I’ve been fascinated by this beautiful paper by Viteri & DeDeo.
What is a mathematical insight? We feel intuitively that proving a difficult theorem requires discovering one or more key insights. Before we get into what the DeDeo-Viteri paper has to say about (mathematical) insights let me recall some basic observations on the nature of insights:
(see also my previous shortform)
There might be a unique decomposition, akin to prime factorization. Alternatively, there might be many roads to Rome: some theorems can be proved in many different ways.
There are often many ways to phrase an essentially similar insight. These different ways to name things we feel are ‘inessential’. Different labelings should be easily convertible into one another.
By looping over all possible programs all proofs can be eventually found, so the notion of an ‘insight’ has to fundamentally be about feasibility.
Previously, I suggested a required insight is something like a private key to a trapdoor function. Without the insight you are facing an infeasibly large task. With it, you can suddenly easily solve a whole host of new tasks/problems.
Insights may be combined in (arbitrarily?) complex ways.
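The trapdoor analogy can be made concrete with integer factoring, where verifying a proposed "insight" (a factor) is cheap while finding one requires search. This is purely my own illustration of the asymmetry, not anything from the paper:

```python
import math

def verify(n, p, q):
    # With the "insight" (a factor pair) in hand, checking is trivial.
    return 1 < p and 1 < q and p * q == n

def find_factor(n):
    # Without it, we face brute-force search up to sqrt(n).
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p
    return None  # n is prime

p = find_factor(2021)
print(p, 2021 // p, verify(2021, p, 2021 // p))  # 43 47 True
```

The asymmetry between `verify` and `find_factor` is the feasibility gap the shortform points at: the insight converts an infeasible search into an easy check.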
When are two proofs essentially different?
Some theorems can be proved in many different ways. That is, different in the informal sense. It isn’t immediately clear how to make this more precise.
We could imagine there is a whole ‘homotopy’ theory of proofs, but before we do so we need to understand when two proofs are essentially the same or essentially different.
On one end of the spectrum, proofs can just be syntactically different but we feel they have ‘the same content’.
We can think type-theoretically, and say two proofs are the same when their denotations (normal forms) are the same. This is obviously better than just asking for syntactical equality or apartness. It does mean we’d like some sort of intuitionistic/type-theoretic foundation, since a naive classical foundation makes all normal forms equivalent.
We can also look at what assumptions are made in the proof. I.e. one of the proofs might use the Axiom of Choice, while the other does not. An example is the famous nonconstructive proof that there exist irrational a, b with a^b rational, which turns out to have a constructive proof as well.
If we consider proofs as functorial algorithms we can use mono-anabelian transport to distinguish them in some cases. [LINK!]
We can also think homotopy type-theoretically and ask when two terms of a type are equal in the HoTT sense.
With the exception of the mono-anabelian transport one, all these suggestions ‘don’t go deep enough’; they’re too superficial.
Phase transitions and insights, Hopfield Networks & Ising Models
Modern ML models famously show some sort of phase transitions in understanding. People have been especially fascinated by the phenomenon of ‘grokking’, see e.g. here and here. It suggests we think of insights in terms of phase transitions, critical points etc.
DeDeo & Viteri have an ingenious variation on this idea. They consider a collection of famous theorems and their proofs formalized in a proof assistant.
They then imagine these proofs as a giant directed graph and consider a Boltzmann distribution on it (so we are really dealing with an Ising model/Hopfield network here). We think of this distribution as a measure of ‘trust’: both trust in propositions (nodes) and in inferences (edges).
We show that the epistemic relationship between claims in a mathematical proof has a network structure that enables what we refer to as an epistemic phase transition (EPT): informally, while the truth of any particular path of argument connecting two points decays exponentially in force, the number of distinct paths increases. Depending on the network structure, the number of distinct paths may itself increase exponentially, leading to a balance point where influence can propagate at arbitrary distance (Stanley, 1971). Mathematical proofs have the structure necessary to make this possible. In the presence of bidirectional inference—i.e., both deductive and abductive reasoning—an EPT enables a proof to produce near-unity levels of certainty even in the presence of skepticism about the validity of any particular step. Deductive and abductive reasoning, as we show, must be well-balanced for this to happen. A relative over-confidence in one over the other can frustrate the effect, a phenomenon we refer to as the abductive paradox
The proofs of these famous theorems break up into ‘abductive islands’. They have a natural modularity structure into lemmas.
EPTs are a double-edged sword, however, because disbelief can propagate just as easily as truth. A second prediction of the model is that this difficulty—the explosive spread of skepticism—can be ameliorated when the proof is made of modules: groups of claims that are significantly more tightly linked to each other than to the rest of the network.
(...) When modular structure is present, the certainty of any claim within a cluster is reasonably isolated from the failure of nodes outside that cluster.
One could hypothesize that insights might correspond somehow to these islands.
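The core EPT claim, that the force of any single chain of inferences decays exponentially while parallel paths can compensate, can be checked in a toy Ising model. This is my own illustrative sketch (the graph shapes, sizes, and coupling J = 0.5 are arbitrary choices), not the actual Viteri-DeDeo construction:

```python
import itertools, math

def correlation(n_nodes, edges, a, b, J=0.5):
    """Exact endpoint correlation <s_a s_b> under the Boltzmann
    distribution of an Ising model, by brute-force enumeration."""
    num = den = 0.0
    for spins in itertools.product((-1, 1), repeat=n_nodes):
        w = math.exp(J * sum(spins[i] * spins[j] for i, j in edges))
        num += w * spins[a] * spins[b]
        den += w
    return num / den

L = 4
# A single chain of L edges: correlation decays as tanh(J)**L.
chain = [(i, i + 1) for i in range(L)]
c_single = correlation(L + 1, chain, 0, L)

# Three parallel chains of the same length between endpoints 0 and 1.
edges, n = [], 2
for _ in range(3):
    prev = 0
    for _ in range(L - 1):
        edges.append((prev, n)); prev = n; n += 1
    edges.append((prev, 1))
c_parallel = correlation(n, edges, 0, 1)

print(c_single, c_parallel)  # c_single ≈ 0.046; c_parallel ≈ 0.136
```

A single path of length 4 transmits only tanh(0.5)^4 of the endpoint "trust", but three such paths in parallel roughly triple the effective coupling, which is the balance-point mechanism behind the epistemic phase transition.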
Final thoughts
I like the idea that a mathematical insight might be something like an island of deductively & abductively tightly clustered propositions.
Some questions:
How does this fit into the ‘Natural Abstraction’ picture, especially sufficient statistics?
EDIT: The separation property of Ludics, see e.g. here, points towards the point of view that proofs can be distinguished exactly by suitable (counter)models.
In the real world the weight of many pieces of weak evidence is not always comparable to a single piece of strong evidence. The important variable here is not strong versus weak per se but the source of the evidence. Some sources of evidence are easier to manipulate in various ways. Evidence manipulation, either conscious or emergent, is common and a large obstacle to truth-finding.
Consider aggregating many (potentially biased) sources of evidence versus direct observation. These are not directly comparable and in many cases we feel direct observation should prevail.
This is especially poignant in the court of law: the very strict rules around presenting evidence are a culturally evolved mechanism to defend against evidence manipulation. Evidence manipulation may be easier for weaker pieces of evidence—see the prohibition against hearsay in legal contexts for instance.
It is occasionally suggested that the court of law should do more probabilistic and Bayesian type of reasoning. One reason courts refuse to do so (apart from more Hansonian reasons around elites cultivating conflict suppression) is that naive Bayesian reasoning is extremely susceptible to evidence manipulation.
Consider a stochastic process ...,X−2,X−1,X0,X1,X2,..., assumed infinite in both directions for simplicity. Here X0 represents the current state (the “present”), while ...,X−3,X−2,X−1 represents the past and X1,X2,X3,... represents the future.
Predictable Information versus Predictive Information
Predictable information is the maximal information (in bits) that you can derive about the future given access to the past. Predictive information is the number of bits that you need from the past to make that optimal prediction.
Suppose you are faced with the question of whether to buy, hold or sell Apple. There are three options so maximally log2(3) bits of information. Not all of that information might be contained in the past; there is a certain amount of irreducible uncertainty (entropy) about the future no matter how well you know the past. Think of freak events & black swans like pandemics, wars, unforeseen technological breakthroughs, or just cumulatively aggregated noise in consumer preferences. Suppose that irreducible uncertainty is half of log2(3), leaving us with (1/2)·log2(3) bits of (theoretically) predictable information.
To a certain degree, it might be predictable in theory whether buying Apple stock is a good idea. To do so, you may need to know many things about the past: Apple’s earnings records, the position of competitors, general trends of the economy, understanding of the underlying technology & supply chains, etc. The total sum of this information is far larger than (1/2)·log2(3).
To actually do well on the stock market you additionally need to do this better than the competition—a difficult task! The predictable information is quite small compared to the predictive information.
Note that predictive information is always greater than predictable information: you need at least k bits from the past to predict k bits of the future. Often it is much larger.
Mathematical details
Predictable information is also called ‘apparent stored information’ or commonly ‘excess entropy’.
It is defined as the mutual information I(X≤0,X>0) between the past and the future.
The predictive information is more difficult to define. It is also called the ‘statistical complexity’ or ‘forecasting complexity’ and is defined as the entropy of the steady equilibrium state of the ‘epsilon machine’ of the process.
What is the Epsilon Machine of the process {Xi}i∈Z? Define the causal states of the process as the partition on the set of possible pasts ...,x−3,x−2,x−1, where two pasts →x,→x′ are in the same part / equivalence class when the future conditioned on →x and →x′ respectively is the same.
That is, P(X>0|→x)=P(X>0|→x′). Without going into too much more detail, the forecasting complexity measures the size of this creature.
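As a tiny worked example of excess entropy (my own illustration, not from the text): for an i.i.d. fair coin the past tells you nothing, so I(past; future) = 0 bits; for the period-2 process ...0101... with random phase, a single past symbol determines the whole future, giving 1 bit. Both can be checked from the one-step joint distribution of (past symbol, future symbol):

```python
import math

def mutual_information(joint):
    """I(P;F) in bits, from a dict {(past, future): probability}."""
    pp, pf = {}, {}  # marginals over past and future symbols
    for (p, f), pr in joint.items():
        pp[p] = pp.get(p, 0) + pr
        pf[f] = pf.get(f, 0) + pr
    return sum(pr * math.log2(pr / (pp[p] * pf[f]))
               for (p, f), pr in joint.items() if pr > 0)

# i.i.d. fair coin: past and future symbols are independent.
coin = {(p, f): 0.25 for p in (0, 1) for f in (0, 1)}

# Period-2 process with random phase: the next symbol is the flipped past one.
period2 = {(0, 1): 0.5, (1, 0): 0.5}

print(mutual_information(coin), mutual_information(period2))  # 0.0 1.0
```

For the period-2 process the statistical complexity is also 1 bit (two causal states, "even phase" and "odd phase"), so here predictable and predictive information happen to coincide; in general the latter is larger.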
Hopfield Networks = Ising Models = Distributions over Causal models?
Given a joint probability distribution p(x1,...,xn), famously there may be many ‘Markov’ factorizations. Each corresponds to a different causal model.
Instead of choosing a particular one we might have a distribution of beliefs over these different causal models. This feels basically like a Hopfield Network/ Ising Model.
You have a distribution over nodes and an ‘interaction’ distribution over edges.
The distribution over nodes corresponds to the joint probability distribution, while the distribution over edges corresponds to a mixture of causal models: a normal graphical causal DAG model G corresponds to the Ising model/Hopfield network which assigns 1 to an edge x→y if the edge is in G and 0 otherwise.
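A minimal numerical sketch of the multiple-factorizations point (the joint distribution below is made up): a two-variable joint p(x, y) factors both as p(x)p(y|x), the causal model X→Y, and as p(y)p(x|y), the causal model Y→X, and a belief state can simply weight the two DAGs:

```python
# Made-up joint distribution over binary X, Y.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# Marginals.
px = {x: joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)}
py = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}

# Conditional tables for the two candidate causal models.
py_given_x = {(x, y): joint[(x, y)] / px[x] for (x, y) in joint}
px_given_y = {(x, y): joint[(x, y)] / py[y] for (x, y) in joint}

# Both Markov factorizations reproduce the same joint distribution.
for (x, y), p in joint.items():
    assert abs(px[x] * py_given_x[(x, y)] - p) < 1e-12  # model X -> Y
    assert abs(py[y] * px_given_y[(x, y)] - p) < 1e-12  # model Y -> X

# A "mixture of causal models": belief weights over the two DAGs,
# analogous to an interaction distribution over edges.
belief = {"X->Y": 0.5, "Y->X": 0.5}
print(px, py, belief)
```

The joint alone cannot distinguish the two factorizations; only interventions (or a prior over edge structures, as in the Ising/Hopfield picture) break the tie.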
Why don’t animals have guns?
Or why didn’t evolution evolve the Hydralisk?
Evolution has found (sometimes multiple times) the camera, general intelligence, nanotech, electronavigation, aerial endurance better than any drone, robots more flexible than any human-made drone, highly efficient photosynthesis, etc.
First of all let’s answer another question: why didn’t evolution evolve the wheel like the alien wheeled elephants in His Dark Materials?
Is it biologically impossible to evolve?
Well, technically, the flagellum of various bacteria is a proper wheel.
No, the likely answer is that wheels are great when you have roads and suck when you don’t. Roads are built by ants to some degree, but on the whole probably don’t make sense for an animal-intelligence species.
Aren’t there animals that use projectiles?
Hold up. Is it actually true that there is not a single animal with a gun, harpoon or other projectile weapon?
Porcupines have quills; some snakes spit venom; the archerfish spits jets of water to knock insects off leaves, then eats them. Bombardier beetles can produce an explosive chemical mixture. Skunks use other chemicals. Some snails shoot harpoons from very close range. There is a crustacean that can snap its claw so quickly it creates a shockwave stunning fish. Octopuses use ink. Goliath birdeater spiders shoot hairs. Electric eels shoot electricity, etc.
Maybe there isn’t an incentive gradient? The problem with this argument is that the same argument can be made for lots and lots of abilities that animals have developed, often multiple times: flight, the camera eye, a nervous system.
But flight has intermediate forms: gliding monkeys, flying squirrels, flying fish.
Except, I think there are lots of intermediate forms for guns & harpoons too:
There are animals with quills. It’s only a small number of steps from having quills that you release when attacked to actively shooting and aiming these quills. Why didn’t evolution evolve Hydralisks? For many other examples, see the list above.
In a Galaxy far far away
I think it is plausible that the reason animals don’t have guns is simply an accident. Somewhere in the vast expanses of space, circling a dim sun-like star, the water-bearing planet Hiram Maxim is teeming with life. Nothing like an intelligent species has yet evolved, yet its many lifeforms sport a wide variety of highly effective projectile weapons. Indeed, the majority of larger lifeforms have some form of projectile weapon as a result of the evolutionary arms race. The savannahs sport gazelle-like herbivores evading sniper-gun-equipped predators.
Some many parsecs away is the planet Big Bertha, a world embroiled in permanent biological trench warfare. More than 95% of the biomass of animals larger than a mouse is taken up by members of just 4 genera of eusocial gun-equipped species or their domesticates. Yet the individual intelligence of members of these species doesn’t exceed that of a cat.
The largest of the four genera builds massive dams like beavers, practices husbandry of various domesticated species and agriculture, and engages in massive warfare against rival colonies using projectile harpoons that grow from their limbs. Yet all of this is biological, not technological: the behaviours and abilities are evolved rather than learned. There is not a single species whose intelligence rivals that of a great ape, either individually or collectively.
Please develop this question as a documentary special, for lapsed-Starcraft player homeschooling dads everywhere.
My naive hypothesis: Once you’re able to launch a projectile at a predator or prey such that it breaks skin or shell, if you want it to die, its vastly cheaper to make venom at the ends of the projectiles than to make the projectiles launch fast enough that there’s a good increase in probability the adversary dies quickly.
Why don’t lions, tigers, wolves, crocodiles, etc have venom-tipped claws and teeth?
(Actually, apparently many ancestral mammal species did have venom spurs, similar to the male platypus)
My completely naive guess would be that venom is mostly too slow for creatures of this size compared with gross physical damage and blood loss, and that getting close enough to set claws on the target is the hard part anyway. Venom seems more useful as a defensive or retributive mechanism than a hunting one.
Most uses of projected venom or other unpleasant substance seem to be defensive rather than offensive. One reason for this is that it’s expensive to make the dangerous substance, and throwing it away wastes it. This cost is affordable if it is used to save your own life, but not easily affordable to acquire a single meal. This life vs meal distinction plays into a lot of offense/defense strategy expenses.
For the hunting options, usually they are also useful for defense. The hunting options all seem cheaper to deploy: punching mantis shrimp, electric eel, fish spitting water...
My guess is that it’s mostly a question of whether the intermediate steps to the evolved behavior are themselves advantageous. Having a path of consistently advantageous steps makes it much easier for something to evolve. Having to go through a trough of worse-in-the-short-term makes things much less likely to evolve. A projectile fired weakly is a cost (energy to fire, energy to produce the firing mechanism, energy to produce the projectile, energy to maintain the complexity of the whole system despite it not being useful yet). Where’s the payoff of a weakly fired projectile? Humans can jump that gap by intuiting that a faster projectile would be more effective. Evolution doesn’t get to extrapolate and plan like that.
Jellyfish have nematocysts, which is a spear on a rope, with poison on the tip. The spear has barbs, so when it goes in, it sticks. Then the jellyfish pulls in its prey. The spears are microscopic, but very abundant.
Yes, but I think snake fangs and jellyfish nematocysts are a slightly different type of weapon. Much more targeted application of venom. If the jellyfish squirted their venom as a cloud into the water around them when a fish came near, I expect it would not be nearly as effective per unit of venom. As a case where both are present, the spitting cobra uses its fangs to inject venom into its prey. However, when threatened, it can instead (wastefully) spray out its venom towards the eyes of an attacker. (the venom has little effect on unbroken mammal skin, but can easily blind if it gets into their eyes).
Fair argument. I guess where I’m lost is that I feel I can make the same ‘no competitive intermediate forms’ argument for all kinds of wondrous biological forms and functions that have evolved, e.g. the nervous system. Indeed, this kind of argument used to be a favorite of ID advocates.
There are lots of excellent applications for even very simple nervous systems. The simplest surviving nervous systems are those of jellyfish. They form a ring of coupled oscillators around the periphery of the organism. Their goal is to synchronize muscular contraction so the bell of the jellyfish contracts as one, to propel the jellyfish efficiently. If the muscles contracted independently, it wouldn’t be nearly as good.
Any organism with eyes will profit from having a nervous system to connect the eyes to the muscles. There’s a fungus with eyes and no nervous system, but as far as I know, every animal with eyes also has a nervous system. (The fungus in question is Pilobolus, which uses its eye to aim a gun. No kidding!)
Another huge missed opportunity is thermal vision. Thermal infrared vision is a gigantic boon for hunting at night, and you might expect e.g. owls and hawks to use it to spot prey hundreds of meters away in pitch darkness, but no animals do (some have thermal sensing, but only at extremely short range).
Snakes have thermal vision, using pits on their cheeks to form pinhole cameras. It pays to be cold-blooded when you’re looking for nice hot mice to eat.
Thermal vision for warm-blooded animals has obvious problems with noise.
Care to explain? Noise?
If you are warm, any warm-detectors inside your body will detect mostly you. Imagine if blood vessels in your own eye radiated in visible spectrum with the same intensity as daylight environment.
Can’t you filter that out?
How do fighter planes do it?
It’s possible to filter out a constant high value, but not possible to filter out a high level of noise. Unfortunately warmth = random vibration = noise. If you want a low-noise thermal camera, you have to cool the detector, or only look for hot things, like engine flares. Fighter planes do both.
Woah, great example, didn’t know about that. Thanks Tao
Animals do have guns. Humans are animals. Humans have guns. Evolution made us, we made guns, therefore guns indirectly exist because of evolution.
Or do you mean “why don’t animals have something like guns but permanently attached to them instead of regular guns?” There, I’d start with wondering why humans prefer to have our guns separate from our bodies, compared to affixing them permanently or semi-permanently to ourselves. All the drawbacks of choosing a permanently attached gun would also disadvantage a hypothetical creature that got the accessory through a longer, slower selection process.
Novel Science is Inherently Illegible
Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental.
Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic “Seeing like a State” problems. It constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to constant political tug-of-war between different interest groups poisoning objectivity.
I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on:
Novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, its concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
I’m pretty skeptical of this and think we need data to back up such a claim. However there might be bias: when anyone makes a serendipitous discovery it’s a better story, so it gets more attention. Has anyone gone through, say, the list of all Nobel laureates and looked at whether their research would have seemed promising before it produced results?
Thanks for your skepticism, Thomas. Before we get into this, I’d like to make sure we actually disagree. My position is not that scientific progress is mostly due to plucky outsiders who are ignored for decades. (I feel something like this is a popular view on LW). Indeed, I think most scientific progress is made through pretty conventional (academic) routes.
I think one can predict that future scientific progress will likely be made by young smart people at prestigious universities and research labs specializing in fields that have good feedback loops and/or have historically made a lot of progress: physics, chemistry, medicine, etc
My contention is that beyond very broad predictive factors like this, judging whether a research direction is fruitful is hard & requires inside knowledge. Much of this knowledge is illegible, difficult to attain because it takes a lot of specialized knowledge etc.
Do you disagree with this?
I do think that novel research is inherently illegible. Here are some thoughts on your comment:
1. Before getting into your Nobel prize proposal I’d like to caution against hindsight bias (for obvious reasons).
And perhaps to some degree I’d like to argue the burden of proof should be on the converse: show me evidence that scientific progress is very legible. In some sense, predicting which directions will be fruitful is a bet against the (efficient?) scientific market.
I also agree the amount of prediction one can do will vary a lot. Indeed, it was itself an innovation (e.g. Thomas Edison and his lightbulbs!) that some kinds of scientific and engineering progress could be systematized: the discovery of R&D.
I think this works much better for certain domains than for others, and to a large degree the ‘harder’ & more ‘novel’ the problem is, the more labs defer ‘illegibly’ to the inside knowledge of researchers.
I guess I’m not sure what you mean by “most scientific progress,” and I’m missing some of the history here, but my sense is that importance-weighted science happens proportionally more outside of academia. E.g., Einstein did his miracle year outside of academia (and later stated that he wouldn’t have been able to do it, had he succeeded at getting an academic position), Darwin figured out natural selection, and Carnot figured out the Carnot cycle, all mostly on their own, outside of academia. Those are three major scientists who arguably started entire fields (quantum mechanics, biology, and thermodynamics). I would anti-predict that future scientific progress, of the field-founding sort, comes primarily from people at prestigious universities, since they, imo, typically have some of the most intense gatekeeping dynamics which make it harder to have original thoughts.
Good point.
I do wonder to what degree that may be biased by the fact that there were vastly fewer academic positions before WWI/WWII. In the time of Darwin and Carnot these positions virtually didn’t exist. In the time of Einstein they did exist but they were still quite rare.
How many examples do you know of this happening past WWII?
Shannon was at Bell Labs iirc
As a counterexample of field-founding happening in academia: Gödel, Church, and Turing were all in academia.
Oh, I actually 70% agree with this. I think there’s an important distinction between legibility to laypeople vs legibility to other domain experts. Let me lay out my beliefs:
In the modern history of fields you mentioned, more than 70% of discoveries are made by people trying to discover the thing, rather than serendipitously.
Other experts in the field, if truth-seeking, are able to understand the theory of change behind the research direction without investing huge amounts of time.
In most fields, experts and superforecasters informed by expert commentary will have fairly strong beliefs about which approaches to a problem will succeed. The person working on something will usually have less than 1 bit of advantage over the experts about whether their framework will be successful, unless they have private information (e.g. already did the crucial experiment). This is the weakest belief and I could probably be convinced otherwise just by anecdotes.
The successful researchers might be confident they will succeed, but unsuccessful ones could be almost as confident on average. So it’s not that the research is illegible, it’s just genuinely hard to predict who will succeed.
People often work on different approaches to the problem even if they can predict which ones will work. This could be due to irrationality, other incentives, diminishing returns to each approach, comparative advantage, etc.
If research were illegible to other domain experts, I think you would not really get Kuhnian paradigms, which I am pretty confident exist. Paradigm shifts mostly come from the track record of an approach, so maybe this doesn’t count as researchers having an inside view of others’ work though.
Thank you, Thomas. I believe we find ourselves in broad agreement. The distinction you make between lay-legibility and expert-legibility is especially well-drawn.
One point: the confidence of researchers in their own approach may not be the right thing to look at. Perhaps a better measure is seeing who can not only predict that their own approach will succeed but also explain in detail why other approaches won’t work. Anecdotally, very successful researchers have a keen sense of what will work out and what won’t—in private conversation many are willing to share detailed models of why other approaches will not work or are not as promising. I’d have to think about this more carefully, but anecdotally the most successful researchers have many bits of information over their competitors, not just one or two. (Note that one bit of information means that their entire advantage could be wiped out by answering a single Y/N question. Not impossible, but not typical for most cases)
What areas of science are you thinking of? I think the discussion varies dramatically.
I think allowing less legibility would help make science less plodding, and allow it to move in larger steps. But there’s also a question of what direction it’s plodding. The problem I saw with psych and neurosci was that it tended to plod in nearly random, not very useful directions.
And what definition of “smart”? I’m afraid that by a common definition, smart people tend to do dumb research, in that they’ll do galaxy brained projects that are interesting but unlikely to pay off. This is how you get new science, but not useful science.
In cognitive psychology and neuroscience, I want to see money given to people who are both creative and practical. They will do new science that is also useful.
In psychology and neuroscience, scientists pick the grantees, and they tend to give money to those whose research they understand. This produces an effect where research keeps following one direction that became popular long ago. I think a different method of granting would work better, but the particular method matters a lot.
Thinking about it a little more, having a mix of personality types involved would probably be useful. I always appreciated the contributions of the rare philosopher who actually learned enough to join a discussion about psych or neurosci research.
I think the most important application of meta science theory is alignment research.
It might also be that a legible path would be low status to pursue in the existing scientific communities and thus nobody pursues it.
A good example of low-hanging fruit that went unpicked for a long time is airborne transmission of many viruses like the common cold. There’s nothing illegible about it.
Mmm, good point. Do you have more examples?
The core reason for holding the belief is that the world does not look to me like there’s little low-hanging fruit in the variety of domains of knowledge I have thought about over the years. Of course it’s generally not that easy to argue for the value of ideas that the mainstream does not care about publicly.
Wei Dai recently wrote:
If you look at the broader field of rationality, the work of Judea Pearl and that of Tetlock both could have been done twenty years earlier. Conceptually, I think you can argue that their work was some of the most important work that was done in the last decades.
Judea Pearl writes about how allergic people were against the idea of factoring in counterfactuals and causality.
I don’t think the application to EA itself would be uncontroversial.
My timelines are lengthening.
I’ve long been a skeptic of scaling LLMs to AGI*. I fundamentally don’t understand how this is even possible. It must be said that very smart people give this view credence: davidad, dmurfet. On the other side are Vanessa Kosoy and Steven Byrnes. When pushed, proponents don’t actually defend the position that a large enough transformer will create nanotech or even obsolete their job. They usually mumble something about scaffolding.
I won’t get into this debate here but I do want to note that my timelines have lengthened, primarily because some of the never-clearly-stated but heavily implied AI developments promised by proponents of very short timelines have not materialized. To be clear, it has only been a year since gpt-4 was released, and gpt-5 is around the corner, so perhaps my hope is premature. Still, my timelines are lengthening.
A few years ago, when gpt-3 came out, progress was blindingly fast. Part of short timelines came from a sense of ‘if we got surprised so hard by gpt-2 and gpt-3, we are completely uncalibrated, who knows what comes next’.
People seemed surprised by gpt-4 in a way that seemed uncalibrated to me. gpt-4 performance was basically in line with what one would expect if the scaling laws continued to hold. At the time it was already clear that the only really important drivers were compute and data, and that we would run out of both shortly after gpt-4. Scaling proponents suggested this was only the beginning, that there was a whole host of innovation that would be coming. Whispers of mesa-optimizers and simulators.
One year in: Chain-of-thought doesn’t actually improve things that much. External memory and super context lengths ditto. A whole list of proposed architectures seem to serve solely as a paper mill. Every month there is new hype about the latest LLM or image model. Yet they never deviate from expectations based on simple extrapolation of the scaling laws. There is only one thing that really seems to matter and that is compute and data. We have about 3 more OOMs of compute to go. Data may be milked another OOM.
A big question will be whether gpt-5 will suddenly make agentGPT work (and to what degree). It would seem that gpt-4 is in many ways far more capable than (most or all) humans, yet agentGPT is curiously bad.
All-in-all AI progress** is developing according to the naive extrapolations of Scaling Laws but nothing beyond that. The breathless twitter hype about new models is still there but it seems to be believed more at a simulacra level higher than I can parse.
Does this mean we’ll hit an AI winter? No. In my model there may be only one remaining roadblock to ASI (and I suspect I know what it is). That innovation could come at any time. I don’t know how hard it is, but I suspect it is not too hard.
* the term AGI seems to denote vastly different things to different people in a way I find deeply confusing. I notice that the thing that I thought everybody meant by AGI is now being called ASI. So when I write AGI, feel free to substitute ASI.
** or better, AI congress
addendum: since I’ve been quoted in dmurfet’s AXRP interview as believing that there are certain kinds of reasoning that cannot be represented by transformers/LLMs I want to be clear that this is not really an accurate portrayal of my beliefs. e.g. I don’t think transformers don’t truly understand, are just a stochastic parrot, or in other ways can’t engage in the abstract reasoning that humans do. I think this is clearly false, as seen by interacting with any frontier model.
Wasn’t the surprising thing about GPT-4 that scaling laws did hold? Before this many people expected scaling laws to stop before such a high level of capabilities. It doesn’t seem that crazy to think that a few more OOMs could be enough for greater than human intelligence. I’m not sure that many people predicted that we would have much faster than scaling law progress (at least until ~human intelligence AI can speed up research)? I think scaling laws are the extreme rate of progress which many people with short timelines worry about.
To some degree yes, they were not guaranteed to hold. But by that point they held for over 10 OOMs iirc and there was no known reason they couldn’t continue.
This might be the particular twitter bubble I was in but people definitely predicted capabilities beyond simple extrapolation of scaling laws.
Can you expand on what you mean by “create nanotech?” If improvements to our current photolithography techniques count, I would not be surprised if (scaffolded) LLMs could be useful for that. Likewise for getting bacteria to express polypeptide catalysts for useful reactions, and even maybe figure out how to chain several novel catalysts together to produce something useful (again, referring to scaffolded LLMs with access to tools).
If you mean that LLMs won’t be able to bootstrap from our current “nanotech only exists in biological systems and chip fabs” world to Drexler-style nanofactories, I agree with that, but I expect things will get crazy enough that I can’t predict them long before nanofactories are a thing (if they ever are).
Likewise, I don’t think LLMs can immediately obsolete all of the parts of my job. But they sure do make parts of my job a lot easier. If you have 100 workers that each spend 90% of their time on one specific task, and you automate that task, that’s approximately as useful as fully automating the jobs of 90 workers. “Human-equivalent” is one of those really leaky abstractions—I would be pretty surprised if the world had any significant resemblance to the world of today by the time robotic systems approached the dexterity and sensitivity of human hands for all of the tasks we use our hands for, whereas for the task of “lift heavy stuff” or “go really fast” machines left us in the dust long ago.
Iterative improvements on the timescale we’re likely to see are still likely to be pretty crazy by historical standards. But yeah, if your timelines were “end of the world by 2026” I can see why they’d be lengthening now.
My timelines were not 2026. In fact, I made bets against doomers 2-3 years ago, one will resolve by next year.
I agree iterative improvements are significant. This falls under “naive extrapolation of scaling laws”.
By nanotech I mean something akin to Drexlerian nanotech, or something similarly transformative in the vicinity. I think it is plausible that a true ASI will be able to make rapid progress (perhaps on the order of a few years or a decade) on nanotech. I suspect that people who don’t take this as a serious possibility haven’t really thought through what AGI/ASI means + what the limits and drivers of science and tech really are; I suspect they are simply falling prey to status-quo bias.
With scale, there is visible improvement in difficulty of novel-to-chatbot ideas/details that is possible to explain in-context, things like issues with the code it’s writing. If a chatbot is below some threshold of situational awareness of a task, no scaffolding can keep it on track, but for a better chatbot trivial scaffolding might suffice. Many people can’t google for a solution to a technical issue, the difference between them and those who can is often subtle.
So a modest amount of scaling alone seems plausibly sufficient for making chatbots that can do whole jobs almost autonomously. If this works, 1-2 OOMs more of scaling becomes both economically feasible and more likely to be worthwhile. LLMs think much faster, so they only need to be barely smart enough to help with clearing those remaining roadblocks.
You may be right. I don’t know of course.
At this moment in time, it seems scaffolding tricks haven’t really improved the baseline performance of models that much. Overwhelmingly, the capability comes down to whether the RLHF’ed base model can do the task.
That’s what I’m also saying above (in case you are stating what you see as a point of disagreement). This is consistent with scaling-only short timeline expectations. The crux for this model is current chatbots being already close to autonomous agency and to becoming barely smart enough to help with AI research. Not them directly reaching superintelligence or having any more room for scaling.
Yes agreed.
What I don’t get about this position: if it was indeed just scaling—what’s AI research for? There is nothing to discover, just scale more compute. Sure, you can maybe improve the speed of deploying compute a little, but at the core it seems like a story that’s in conflict with itself?
My view is that there’s huge algorithmic gains in peak capability, training efficiency (less data, less compute), and inference efficiency waiting to be discovered, and available to be found by a large number of parallel research hours invested by a minimally competent multimodal LLM powered research team. So it’s not that scaling leads to ASI directly, it’s:
scaling leads to brute forcing the LLM agent across the threshold of AI research usefulness
Using these LLM agents in a large research project can lead to rapidly finding better ML algorithms and architectures.
Training these newly discovered architectures at large scales leads to much more competent automated researchers.
This process repeats quickly over a few months or years.
This process results in AGI.
AGI, if instructed (or allowed, if it’s agentically motivated on its own to do so) to improve itself will find even better architectures and algorithms.
This process can repeat until ASI. The resulting intelligence / capability / inference speed goes far beyond that of humans.
Note that this process isn’t inevitable, there are many points along the way where humans can (and should, in my opinion) intervene. We aren’t disempowered until near the end of this.
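The loop described above can be caricatured in a few lines. This is a toy model with made-up constants, not a forecast; it only illustrates why wall-clock time compresses when the researchers themselves get faster each generation:

```python
# Toy model of the loop above: automated researchers speed up algorithmic
# progress, which makes the next generation of researchers faster still.
# All numbers are invented for illustration.
speedup = 1.0      # research speed relative to the human-only baseline
capability = 1.0   # stand-in for "automated researcher competence"
years = 0.0

while capability < 100 and years < 50:
    years += 1.0 / speedup   # each "generation" takes less wall-clock time
    capability *= 1.5        # each generation finds better algorithms
    speedup = capability     # more capable systems do research faster

# Growth is faster than exponential in wall-clock time because the
# generation time itself shrinks; that is the "phase change" intuition.
print(f"~{years:.2f} model-years to reach the capability threshold")
```

The qualitative point survives any choice of constants: once the per-generation speedup feeds back, most of the total calendar time is spent in the first one or two generations.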
Why do you think there are these low-hanging algorithmic improvements?
Here are two arguments for low-hanging algorithmic improvements.
First, in the past few years I have read many papers containing low-hanging algorithmic improvements. Most such improvements are a few percent or tens of percent. The largest such improvements are things like transformers or mixture of experts, which are substantial steps forward. Such a trend is not guaranteed to persist, but that’s the way to bet.
Second, existing models are far less sample-efficient than humans. We receive about a billion tokens growing to adulthood. The leading LLMs get orders of magnitude more than that. We should be able to do much better. Of course, there’s no guarantee that such an improvement is “low hanging”.
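The gap being pointed at is easy to make explicit. A back-of-envelope sketch, using order-of-magnitude figures rather than any specific model’s token count:

```python
import math

# Back-of-envelope comparison from the text: humans see on the order of a
# billion tokens growing to adulthood; frontier LLMs see orders of magnitude
# more. The LLM figure below is an order-of-magnitude placeholder.
human_tokens = 1e9
llm_tokens = 1e13

gap_ooms = math.log10(llm_tokens / human_tokens)
print(f"LLMs use roughly {gap_ooms:.0f} orders of magnitude more data")
# If even part of this gap is closeable, sample efficiency is a large
# reservoir of potential algorithmic improvement.
```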
Capturing this would probably be a big deal, but a counterpoint is that compute necessary to achieve an autonomous researcher using such sample efficient method might still be very large. Possibly so large that training an LLM with the same compute and current sample-inefficient methods is already sufficient to get a similarly effective autonomous researcher chatbot. In which case there is no effect on timelines. And given that the amount of data is not an imminent constraint on scaling, the possibility of this sample efficiency improvement being useless for the human-led stage of AI development won’t be ruled out for some time yet.
Could you train an LLM on pre 2014 Go games that could beat AlphaZero?
I rest my case.
The best method of improving sample efficiency might be more like AlphaZero. The simplest method that’s more likely to be discovered might be more like training on the same data over and over with diminishing returns. Since we are talking low-hanging fruit, I think it’s reasonable that first forays into significantly improved sample efficiency with respect to real data are not yet much better than simply using more unique real data.
I would be genuinely surprised if training a transformer on the pre-2014 human Go data over and over would lead it to spontaneously develop AlphaZero capability. I would expect it to do what it is trained to do: emulate / predict as best as possible the distribution of human play. To some degree I would anticipate the transformer might develop some emergent ability that might make it slightly better than Go-Magnus—as we’ve seen in other cases—but I’d be surprised if this were unbounded. This is simply not what the training signal is.
We start with an LLM trained on 50T tokens of real data, however capable it ends up being, and ask how to reach the same level of capability with synthetic data. If it takes more than 50T tokens of synthetic data, then it was less valuable per token than real data.
But at the same time, 500T tokens of synthetic data might train an LLM more capable than if trained on the 50T tokens of real data for 10 epochs. In that case, synthetic data helps with scaling capabilities beyond what real data enables, even though it’s still less valuable per token.
With Go, we might just be running into the contingent fact of there not being enough real data to be worth talking about, compared with LLM data for general intelligence. If we run out of real data before some threshold of usefulness, synthetic data becomes crucial (which is the case with Go). It’s unclear if this is the case for general intelligence with LLMs, but if it is, then there won’t be enough compute to improve the situation unless synthetic data also becomes better per token, and not merely mitigates the data bottleneck and enables further improvement given unbounded compute.
I expect that if we could magically sample much more pre-2014 unique human Go data than was actually generated by actual humans (rather than repeating the limited data we have), from the same platonic source and without changing the level of play, then it would be possible to cheaply tune an LLM trained on it to play superhuman Go.
I don’t know what you mean by ‘general intelligence’ exactly, but I suspect you mean something like human+ capability in a broad range of domains. I agree LLMs will become generally intelligent in this sense when scaled, arguably even are, for domains with sufficient data. But that’s kind of the kicker, right? Cave men didn’t have the whole internet to learn from, yet somehow did something that not even you seem to claim LLMs will be able to do: create the (data of the) Internet.
(Your last claim seems surprising. Pre-2014 games don’t have close to the ELO of AlphaZero. So a next-token predictor would be trained to simulate a human player up to 2800, not 3200+.)
When I brought up sample inefficiency, I was supporting Mr. Helm-Burger‘s statement that “there’s huge algorithmic gains in …training efficiency (less data, less compute) … waiting to be discovered”. You’re right of course that a reduction in training data will not necessarily reduce the amount of computation needed. But once again, that’s the way to bet.
I’m ambivalent on this. If the analogy between improvement of sample efficiency and generation of synthetic data holds, synthetic data seems reasonably likely to be less valuable than real data (per token). In that case we’d be using all the real data we have anyway, which with repetition is sufficient for up to about $100 billion training runs (we are at $100 million right now). Without autonomous agency (not necessarily at researcher level) before that point, there won’t be investment to go over that scale until much later, when hardware improves and the cost goes down.
My answer to that is currently in the form of a detailed 2 hour lecture with a bibliography that has dozens of academic papers in it, which I only present to people that I’m quite confident aren’t going to spread the details. It’s a hard thing to discuss in detail without sharing capabilities thoughts. If I don’t give details or cite sources, then… it’s just, like, my opinion, man. So my unsupported opinion is all I have to offer publicly. If you’d like to bet on it, I’m open to showing my confidence in my opinion by betting that the world turns out how I expect it to.
The story involves phase changes. Just scaling is what’s likely to be available to human developers in the short term (a few years), it’s not enough for superintelligence. Autonomous agency secures funding for a bit more scaling. If this proves sufficient to get smart autonomous chatbots, they then provide speed to very quickly reach the more elusive AI research needed for superintelligence.
It’s not a little speed, it’s a lot of speed, serial speedup of about 100x plus running in parallel. This is not as visible today, because current chatbots are not capable of doing useful work with serial depth, so the serial speedup is not in practice distinct from throughput and cost. But with actually useful chatbots it turns decades to years, software and theory from distant future become quickly available, non-software projects get to be designed in perfect detail faster than they can be assembled.
In my mainline model there are only a few innovations needed, perhaps only a single big one, to produce an AGI which, just like the Turing machine sits at the top of the Chomsky hierarchy, will be basically the optimal architecture given resource constraints. There are probably some minor improvements to do with bridging the gap between the theoretically optimal architecture and the actual architecture, or parts of the algorithm that can be indefinitely improved but with diminishing returns (these probably exist due to Levin, and possibly matrix multiplication is one of these). On the whole I expect AI research to be very chunky.
Indeed, we’ve seen that there was really just one big idea behind all current AI progress: scaling, specifically scaling GPUs on maximally large undifferentiated datasets. There were some minor technical innovations needed to pull this off, but on the whole that was the clincher.
Of course, I don’t know. Nobody knows. But I find this the most plausible guess based on what we know about intelligence, learning, theoretical computer science and science in general.
(Re: Difficult to Parse react on the other comment
I was confused about relevance of your comment above on chunky innovations, and it seems to be making some point (for which what it actually says is an argument), but I can’t figure out what it is. One clue was that it seems like you might be talking about innovations needed for superintelligence, while I was previously talking about possible absence of need for further innovations to reach autonomous researcher chatbots, an easier target. So I replied with formulating this distinction and some thoughts on the impact and conditions for reaching innovations of both kinds. Possibly the relevance of this was confusing in turn.)
There are two kinds of relevant hypothetical innovations: those that enable chatbot-led autonomous research, and those that enable superintelligence. It’s plausible that there is no need for (more of) the former, so that mere scaling through human efforts will lead to such chatbots in a few years regardless. (I think it’s essentially inevitable that there is currently enough compute that with appropriate innovations we can get such autonomous human-scale-genius chatbots, but it’s unclear if these innovations are necessary or easy to discover.) If autonomous chatbots are still anything like current LLMs, they are very fast compared to humans, so they quickly discover remaining major innovations of both kinds.
In principle, even if innovations that enable superintelligence (at scale feasible with human efforts in a few years) don’t exist at all, extremely fast autonomous research and engineering still lead to superintelligence, because they greatly accelerate scaling. Physical infrastructure might start scaling really fast using pathways like macroscopic biotech even if drexlerian nanotech is too hard without superintelligence or impossible in principle. Drosophila biomass doubles every 2 days, small things can assemble into large things.
Lengthening from what to what?
I’ve never done explicit timelines estimates before so nothing to compare to. But since it’s a gut feeling anyway, I’m saying my gut is lengthening.
Agreed. I’m also pleasantly surprised that your take isn’t heavily downvoted.
Links to Dan Murfet’s AXRP interview:
Transcript
Video
I don’t recall what I said in the interview about your beliefs, but what I meant to say was something like what you just said in this post, apologies for missing the mark.
State-of-the-art models such as Gemini aren’t LLMs anymore. They are natively multimodal or omni-modal transformer models that can process text, images, speech and video. These models seem to me like a huge jump in capabilities over text-only LLMs like GPT-3.
Chain-of-thought prompting makes models much more capable. In the original paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”, PaLM 540B with standard prompting only solves 18% of problems but 57% of problems with chain-of-thought prompting.
I expect the use of agent features such as reflection will lead to similar large increases in capabilities as well in the near future.
Those numbers don’t really accord with my experience actually using gpt-4. Generic prompting techniques just don’t help all that much.
I just asked GPT-4 a GSM8K problem and I agree with your point. I think what’s happening is that GPT-4 has been fine-tuned to respond with chain-of-thought reasoning by default, so it’s no longer necessary to explicitly ask it to reason step-by-step. Though if you ask it to “respond with just a single number” to eliminate the chain-of-thought reasoning, its problem-solving ability is much worse.
Mumble.
Encrypted Batteries
(I thank Dmitry Vaintrob for the idea of encrypted batteries. Thanks to Adam Scholl for the alignment angle. Thanks to the computational mechanics crowd at the recent CompMech conference.)
There are no Atoms in the Void, just Bits in the Description. Given the right string, a Maxwell demon transducer can extract energy from a heatbath.
Imagine a pseudorandom heatbath + nano-Demon. It looks like a heatbath from the outside, but secretly there is a private key string that, when fed to the nano-Demon, allows it to extract lots of energy from the heatbath.
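The thermodynamics here can be made slightly more concrete with the Szilard/Landauer bound, under which n bits of knowledge about a bath at temperature T buy at most n·k_B·T·ln 2 of work (a toy sketch; the function name is mine):

```python
import math

# Landauer/Szilard bound: a demon holding n bits of information about a heat
# bath at temperature T can extract at most W = n * k_B * T * ln(2) of work.
# The "private key" in the thought experiment is exactly such knowledge.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def max_extractable_work(n_bits: int, temperature_kelvin: float) -> float:
    """Upper bound (in joules) on work extractable with n bits of knowledge."""
    return n_bits * k_B * temperature_kelvin * math.log(2)

# A 256-bit key at room temperature directly buys only ~1e-18 J; the point of
# the construction is that the key unlocks the bath's pseudorandom structure,
# letting the demon keep predicting (and rectifying) fluctuations.
w = max_extractable_work(256, 300.0)
```

So the energy is not stored in the key itself; the key is the cheap-to-copy description that makes the bath’s microstate predictable.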
P.S. Beyond the current ken of humanity lies a generalized concept of free energy that describes the generic potential ability or power of an agent to achieve goals. Money, the golden calf of Baal, is one of its many avatars. Could there be ways to encrypt generalized free energy batteries to constrain the user to only use this power for good? It would be like money that could only be spent on good things.
What would a ‘pseudorandom heatbath’ look like? I would expect most objects to quickly depart from any sort of private key or PRNG. Would this be something like… a reversible computer which shuffles around a large number of blank bits in a complicated pseudo-random order every timestep*, exposing a fraction of them to external access? so a daemon with the key/PRNG seed can write to the blank bits with approaching 100% efficiency (rendering it useful for another reversible computer doing some actual work) but anyone else can’t do better than 50-50 (without breaking the PRNG/crypto) and that preserves the blank bit count and is no gain?
* As I understand reversible computing, you can have a reversible computer which does that for free: if this is something like a very large period loop blindly shuffling its bits, it need erase/write no bits (because it’s just looping through the same states forever, akin to a time crystal), and so can be computed indefinitely at arbitrarily low energy cost. So any external computer which syncs up to it can also sync at zero cost, and just treat the exposed unused bits as if they were its own, thereby saving power.
That is my understanding, yes.
Yeah, I’m pretty sure you would need to violate Heisenberg uncertainty in order to make this, and then you’d have to keep it in a 0 kelvin cleanroom forever.
A practical locked battery with tamperproofing would mostly just look like a battery.
Corrupting influences
The EA AI safety strategy has had a large focus on placing EA-aligned people in A(G)I labs. The thinking was that having enough aligned insiders would make a difference on crucial deployment decisions & longer-term alignment strategy. We could say that the strategy is an attempt to corrupt the goal of pure capability advance & making money towards the goal of alignment. This fits into a larger theme that EA needs to get close to power to have real influence.
[See also the large donations EA has made to OpenAI & Anthropic. ]
Whether this strategy paid off… too early to tell.
What has become apparent is that the large AI labs & being close to power have had a strong corrupting influence on EA epistemics and culture.
Many people in EA now think nothing of being paid Bay Area programmer salaries for research or nonprofit jobs.
There has been a huge influx of MBA blabber being thrown around. Bizarrely, EA funds are often giving huge grants to for-profit organizations for which it is very unclear whether they’re really EA-aligned in the long term or just paying lip service. It is highly questionable whether EA should be trying to do venture capitalism in the first place.
There is a questionable trend to equate ML skills/prestige within capabilities work with the ability to do alignment work.
EDIT: I haven’t looked at it deeply yet, but I am superficially impressed by CAIS’s recent work. It seems like an eminently reasonable approach. Hendrycks’s deep expertise in capabilities work and scientific track record seem to have been key. In general, EA-adjacent AI safety work has suffered from youth, inexpertise & amateurism, so it makes sense to bring in more world-class expertise.
EDITEDIT: I should be careful in promoting work I haven’t looked at. I have been told by a source I trust that almost nothing in this paper is new and that Hendrycks engages in a lot of very questionable self-promotion tactics.
For various political reasons there has been an attempt to put x-risk AI safety on a continuum with more mundane AI concerns, like it saying bad words. This means there is lots of ‘alignment research’ that is at best irrelevant and at worst a form of insidious safetywashing.
The influx of money and professionalization has not been entirely bad. Early EA suffered much more from virtue signalling spirals, analysis paralysis. Current EA is much more professional, largely for the better.
As a supervisor of numerous MSc and PhD students in mathematics, when someone finishes a math degree and considers a job, the tradeoffs are usually between meaning, income, freedom, evil, etc., with some of the obvious choices being high/low along (relatively?) obvious axes. It’s extremely striking to see young talented people with math or physics (or CS) backgrounds going into technical AI alignment roles in big labs, apparently maximising along many (or all) of these axes!
Especially in light of recent events I suspect that this phenomenon, which appears too good to be true, actually is.
Yes!
I’m not too concerned about this. ML skills are not sufficient to do good alignment work, but they seem to be very important for like 80% of alignment work and make a big difference in the impact of research (although I’d guess still smaller than whether the application to alignment is good)
Primary criticisms of Redwood involve their lack of experience in ML
The explosion of research in the last ~year is partially due to an increase in the number of people in the community who work with ML. Maybe you would argue that lots of current research is useless, but it seems a lot better than only having MIRI around
The field of machine learning at large is in many cases solving easier versions of problems we have in alignment, and therefore it makes a ton of sense to have ML research experience in those areas. E.g. safe RL is how to get safe policies when you can optimize over policies and know which states/actions are safe; alignment can be stated as a harder version of this where we also need to deal with value specification, self-modification, instrumental convergence etc.
I mostly agree with this.
I should have said ‘prestige within capabilities research’ rather than ML skills, which seem straightforwardly useful. The former seems highly corrupting.
I’d arguably say this is good, primarily because I think EA was already in danger of its AI safety wing becoming unmoored from reality by ignoring key constraints, similar to how early LessWrong before the deep learning era (roughly 2012-2018) turned out to be mostly useless due to how much everything was stated in a mathematical way, not realizing how many constraints and conjectured constraints applied to stuff like formal provability, for example.
Pockets of Deep Expertise
Why am I so bullish on academic outreach? Why do I keep hammering on ‘getting the adults in the room’?
It’s not that I think academics are all Super Smart.
I think rationalists/alignment people correctly ascertain that most professors don’t have much of use to say about alignment & deep learning and often say silly things. They correctly see that much of AI progress is fueled by labs and scale, not ML academia. I am bullish on non-ML academia, especially mathematics, physics and to a lesser extent theoretical CS, neuroscience, and some parts of ML/AI academia. This is because while I think 95% of academia is bad and/or useless, there are Pockets of Deep Expertise. Most questions in alignment are close to existing work in academia in some sense—but we have to make the connection!
A good example is ‘sparse coding’ and ‘compressed sensing’. Lots of mech interp has been rediscovering some of the basic ideas of sparse coding. But there is vast expertise in academia on these topics. We should leverage it!
Other examples are singular learning theory, computational mechanics, etc
Fractal Fuzz: making up for size
GPT-3 recognizes ~50k possible tokens. For a 1000-token context window that means there are (5⋅10^4)^1000 ≈ 10^5000 possible prompts. Astronomically large. If we assume the output of a single run of gpt is 200 tokens, then for each possible prompt there are ≈ 10^2500 possible continuations.
GPT-3 is probabilistic, defining for each possible prompt x (≈ 10^5000 of them) a distribution q(x) on a set of size 10^2500, in other words a point in a (10^2500 − 1)-dimensional space. [1]
Mind-bogglingly large. Compared to these numbers, the amount of data (40 trillion tokens??) and the size of the model (175 billion parameters) seem absolutely puny.
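For concreteness, the counting above can be reproduced in logarithms, since the numbers themselves overflow floating point. (A direct calculation gives somewhat smaller exponents than the rounded figures in the text, but the conclusion is unchanged.)

```python
import math

# Counting prompt and continuation spaces in log10; the raw numbers
# overflow floats, so we never exponentiate.
vocab = 50_000
context = 1_000
output_len = 200

log10_prompts = context * math.log10(vocab)           # ~4699
log10_continuations = output_len * math.log10(vocab)  # ~940
log10_params = math.log10(175e9)                      # ~11.2

# The model is unimaginably small relative to the space it models.
assert log10_prompts > 4000
assert log10_continuations > 900
```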
I won’t be talking about the data, or ‘overparameterizations’ in this short, that is well-explained by Singular Learning Theory. Instead, I will be talking about nonrealizability.
Nonrealizability & the structure of natural data
Recall the setup of (parametric) Bayesian learning: there is a sample space Ω, a true distribution q(x) on Ω, and a parameterized family of probability distributions p(x|w), w ∈ W ⊂ R^d.
It is often assumed that the true distribution is ‘realizable’, i.e. q(x) = p(x|w_0) for some w_0. Seeing the numbers in the previous section, this assumption seems dubious, but the situation becomes significantly easier to analyze, both conceptually and mathematically, when we assume realizability.
Conceptually, if the space of possible true distributions is very large compared to the space of model parameters we may ask: how do we know that the true distribution is in the model (or can be well-approximated by it?).
One answer one hears often is the ‘universal approximation theorem’ (i.e. the Stone-Weierstrass theorem). I’ll come back to this shortly.
Another point of view is that real data sets are actually localized in a very low-dimensional subset of all possible data.[2] Following this road leads to theories of lossless compression, cf. sparse coding and compressed sensing, which are of obvious importance to interpreting modern neural networks.
That is lossless compression, but another side of the coin is lossy compression.
Fractals and lossy compression
GPT-3 has 175 billion parameters, but the space of possible continuations is many times larger, ≫ 10^1000. Even if sparse coding implies that the effective dimensionality is much smaller—is it really small enough?
Whenever we have a lower-dimensional subspace W of a higher-dimensional space, there are points y in the larger space that are very (even arbitrarily) far from W. This is easy to see in the linear case but also true if W is more like a manifold[3]: the volume of a lower-dimensional space is vanishingly small compared to the higher-dimensional space. It’s a simple mathematical fact that can’t be denied!
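This distance claim is easy to check numerically. A minimal sketch, taking W to be a coordinate subspace of R^d and measuring the Euclidean distance of a random point to it (all choices here are illustrative):

```python
import math
import random

# A random point in a high-dimensional cube is far from any fixed
# low-dimensional subspace. Take W = span of the first k axes in R^d; the
# distance from y to W is the norm of y's remaining d-k coordinates, which
# concentrates around sqrt((d-k)/3) for uniform [-1, 1] coordinates
# (since E[c^2] = 1/3 for such a coordinate c).
random.seed(0)
d, k = 10_000, 100

y = [random.uniform(-1, 1) for _ in range(d)]
dist_to_W = math.sqrt(sum(c * c for c in y[k:]))

expected = math.sqrt((d - k) / 3)
assert abs(dist_to_W - expected) / expected < 0.05  # tight concentration
```

So a typical point is not merely outside W; it sits at a distance that grows like sqrt(d − k), which is the sense in which low-dimensional model families “miss” most of the ambient space.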
Unless… W is a fractal.
This is from Marzen & Crutchfield’s “Nearly Maximally Predictive Features and Their Dimensions”. The setup is Crutchfield Computational Mechanics, whose central characters are Hidden Markov Models. I won’t go into the details here [but give it a read!].
The conjecture is the following: a ‘good architecture’ defines a model space W that is effectively a fractal in the much larger-dimensional space Δ_k of realistic data distributions, such that for any possible true distribution q(x) ∈ Δ_k the KL-divergence satisfies min_{w∈W} K(w) = K(w_opt) ≤ ϵ for some small ϵ.
Grokking
Phase transitions in loss when varying model size are designated ‘grokking’. We can combine the fractal data manifold hypothesis with an SLT perspective: as we scale up the model, the set of optimal parameters W_opt becomes better and better. It could happen that the model size gets big enough that it includes a whole new phase, meaning a w_opt with radically lower loss K and higher λ.
EDIT: seems I’m confused about the nomenclature. Grokking doesn’t refer to phase transitions in model size, but in training and data size.
EDIT2: Seems I’m not crazy. Thanks to Matt Farugia for pointing me towards this result: neural networks are strongly nonconvex (i.e. fractal)
EDIT: seems to me that there is another point of contention on which universal approximation theorems (Stone-Weierstrass) are misleading. Stone-Weierstrass applies to subalgebras of the continuous functions. It seems to me that in the natural parameterization, ReLU neural networks aren’t a subalgebra of the continuous functions (see also the nonconvexity above).
To think about: information dimension
Why the −1? Think visually about the set of distributions on 3 points. Hint: it’s a solid triangle (a ‘2-simplex’).
I think MLers call this the ‘data manifold’?
In the mathematical sense of smooth manifold, not the ill-defined ML notion of ‘data manifold’.
Very interesting, glad to see this written up! Not sure I totally agree that it’s necessary for W to be a fractal? But I do think you’re onto something.
In particular you say that “there are points y in the larger dimensional space that are very (even arbitrarily) far from W,” but in the case of GPT-4 the input space is discrete, and even in the case of e.g. vision models the input space is compact. So the distance must be bounded.
Plus if you e.g. sample a random image, you’ll find there’s usually a finite distance you need to travel in the input space (in L1, L2, etc) until you get something that’s human interpretable (i.e. lies on the data manifold). So that would point against the data manifold being dense in the input space.
But there is something here, I think. The distance usually isn’t that large until you reach a human interpretable image, and it’s quite easy to perturb images slightly to have completely different interpretations (both to humans and ML systems). A fairly smooth data manifold wouldn’t do this. So my guess is that the data “manifold” is in fact not a manifold globally, but instead has many self-intersections and is singular. That would let it be close to large portions of input space without being literally dense in it. This also makes sense from an SLT perspective. And IIRC there’s some empirical evidence that the dimension of the data “manifold” is not globally constant.
The input and output spaces etc Ω are all discrete but the spaces of distributions Δ(Ω) on those spaces are infinite (but still finite-dimensional).
It depends on what kind of metric one uses, compactness assumptions, etc. whether or not you can be arbitrarily far. I am being rather vague here. For instance, if you use the KL-divergence, then K(q‖p_uniform) is always bounded - indeed it equals log n − H(q), where n is the number of outcomes and H(q) the entropy of the true distribution!
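A quick numeric sanity check of this boundedness claim, using only the standard library (the particular distribution q is just an illustrative choice):

```python
import math

def entropy(q):
    """Shannon entropy in nats."""
    return -sum(p * math.log(p) for p in q if p > 0)

def kl(q, p):
    """KL-divergence K(q || p) in nats."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

n = 4
uniform = [1 / n] * n
q = [0.7, 0.1, 0.1, 0.1]

# KL to the uniform distribution equals log n - H(q),
# so it can never exceed log n, however peaked q is.
assert abs(kl(q, uniform) - (math.log(n) - entropy(q))) < 1e-12
assert kl(q, uniform) <= math.log(n)
print(kl(q, uniform))
```

So with respect to this divergence no distribution can be more than log n away from the uniform one.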
I don’t really know what ML people mean by the data manifold so won’t say more about that.
I am talking about the space W of parameter values of a conditional probability distribution p(x|w).
I think that W having nonconstant local dimension doesn’t seem that relevant since the largest dimensional subspace would dominate?
Self-intersections and singularities could certainly occur here. (i) singularities in the SLT sense have to do with singularities in the level sets of the KL-divergence (or loss function) - don’t see immediately how these are related to the singularities that you are talking about here (ii) it wouldn’t increase the dimensionality (rather the opposite).
The fractal dimension is important basically because of space-filling curves: a space that has a low-dimensional parameterization can nevertheless have a very large effective dimension when embedded fractally into a larger-dimensional space.
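To make “effective dimension” concrete, here is a minimal sketch (my own toy, not from the discussion above) that estimates the box-counting dimension of the Sierpinski triangle generated by the chaos game: a one-dimensional stream of random choices produces a set of non-integer dimension log 3 / log 2 ≈ 1.585 sitting inside the plane:

```python
import math, random

# Chaos game for the Sierpinski triangle.
random.seed(0)
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
x, y = 0.1, 0.1
points = []
for _ in range(60_000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2
    points.append((x, y))
points = points[100:]  # discard the transient

def box_count(pts, eps):
    """Number of eps-boxes hit by the point cloud."""
    return len({(int(px / eps), int(py / eps)) for px, py in pts})

# Box-counting dimension: slope of log N(eps) against log(1/eps).
scales = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
xs = [math.log(1 / e) for e in scales]
ys = [math.log(box_count(points, e)) for e in scales]
n = len(xs)
slope = (n * sum(a * b for a, b in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(a * a for a in xs) - sum(xs) ** 2
)
print(f"estimated dimension ≈ {slope:.2f}")  # theory: log 3 / log 2 ≈ 1.585
```

The estimate lands strictly between 1 and 2: a low-dimensional generating process filling out a set of higher effective dimension.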
Sorry, I realized that you’re mostly talking about the space of true distributions and I was mainly talking about the “data manifold” (related to the structure of the map x↦p(x∣w∗) for fixed w∗). You can disregard most of that.
Though, even in the case where we’re talking about the space of true distributions, I’m still not convinced that the image of W under p(x∣w) needs to be fractal. Like, a space-filling assumption sounds to me like basically a universal approximation argument—you’re assuming that the image of W densely (or almost densely) fills the space of all probability distributions of a given dimension. But of course we know that universal approximation is problematic and can’t explain what neural nets are actually doing for realistic data.
Obviously this is all speculation but maybe I’m saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?
Curious, what’s your beef with universal approximation? Stone-Weierstrass isn’t quantitative—is that the reason?
If true, it suggests the fractal dimension (probably related to the information dimension I linked to above) may be important.
Oh I actually don’t think this is speculation, if (big if) you satisfy the conditions for universal approximation then this is just true (specifically that the image of W is dense in function space). Like, for example, you can state Stone-Weierstrass as: for a compact Hausdorff space X, and the continuous functions under the sup norm C(X,R), the Banach subalgebra of polynomials is dense in C(X,R). In practice you’d only have a finite-dimensional subset of the polynomials, so this obviously can’t hold exactly, but as you increase the size of the polynomials, they’ll be more space-filling and the error bound will decrease.
The problem is that the dimension of W required to achieve a given ϵ error bound grows exponentially with the dimension d of your underlying space X. For instance, if you assume that weights depend continuously on the target function, ϵ-approximating all Cn functions on [0,1]^d with Sobolev norm ≤1 provably takes at least O(ϵ^(−d/n)) parameters (DeVore et al.). This is a lower bound.
So for any realistic d universal approximation is basically useless—the number of parameters required is enormous. Which makes sense because approximation by basis functions is basically the continuous version of a lookup table.
Because neural networks actually work in practice, without requiring exponentially many parameters, this also tells you that the space of realistic target functions can’t just be some generic function space (even with smoothness conditions), it has to have some non-generic properties to escape the lower bound.
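The benign one-dimensional case is easy to see numerically. A small sketch (numpy assumed; the target function is an arbitrary smooth choice of mine) showing the sup-error of least-squares polynomial fits shrinking with degree, which is exactly the regime where the exponential blow-up hasn’t kicked in yet:

```python
import numpy as np

# A smooth 1-D target: Stone-Weierstrass is cheap here.
xs = np.linspace(0.0, 1.0, 400)
target = np.exp(np.sin(3 * xs))

def sup_error(degree):
    """Sup-norm error of a degree-d Chebyshev least-squares fit."""
    cheb = np.polynomial.Chebyshev.fit(xs, target, degree)
    return float(np.max(np.abs(cheb(xs) - target)))

errors = {d: sup_error(d) for d in (2, 4, 8, 12)}
print(errors)
assert errors[12] < errors[4] < errors[2]
# In d input dimensions the analogous basis would need on the order of
# eps**(-d/n) terms - the curse of dimensionality in the lower bound above.
```

For d in the hundreds or thousands, the same basis-function strategy becomes a continuous lookup table, which is the point of the lower bound.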
Ooooo okay so this seems like it’s directly pointing to the fractal story! Exciting!
The Vibes of Mathematics:
Q: What is it like to understand advanced mathematics? Does it feel analogous to having mastery of another language like in programming or linguistics?
A: It’s like being stranded on a tropical island where all your needs are met, the weather is always perfect, and life is wonderful.
Except nobody wants to hear about it at parties.
Vibes of Maths: Convergence and Divergence
level 0: A state of ignorance. You live in a pre-formal mindset. You don’t know how to formalize things. You don’t even know what it would mean ‘to prove something mathematically’. This is perhaps the longest stage. It is the default state of a human. Most anti-theory sentiment comes from this state. Since you’ve never seen anything else, you can’t productively read math books. You often decry that these mathematicians make books way too hard to read. If only they would take the time to explain things simply, you would understand.
level 1: all math is an amorphous blob
You know the basics of writing an epsilon-delta proof. Although you don’t know why the rules of maths are this way or that, you can at least follow the recipes. You can follow simple short proofs, albeit slowly.
You know there are different areas of mathematics from the unintelligible names in the tables of contents of yellow books. They all sound kind of the same to you, however.
If you are particularly predisposed to Philistinism you think your current state of knowledge is basically the extent of human knowledge. You will probably end up doing machine learning.
level 2: maths fields diverge
You’ve come so far. You’ve been seriously studying mathematics for several years now. You are proud of yourself and amazed how far you’ve come. You sometimes try to explain math to laymen and are amazed to discover that what you find completely obvious now is complete gibberish to them.
The more you know however, the more you realize what you don’t know. Every time you complete a course you realize it is only scratching the surface of what is out there.
You start to understand that when people talk about concepts in an informal, pre-mathematical way an enormous amount of conceptual issues are swept under the rug. You understand that ‘making things precise’ is actually very difficult.
Different fields of math are now clearly differentiated. The topics and issues that people talk about in algebra, analysis, topology, dynamical systems, probability theory etc. wildly differ from each other. Although there are occasional connections and some core concepts that are used all over, on the whole specialization is the norm. You realize there is no such thing as a ‘mathematician’: there are logicians, topologists, probability theorists, algebraists.
Actually it is way worse: just in logic there are modal logicians, set theorists, constructivists, linear logicians, programming language people and game semanticists.
Often these people will be almost as confused as a layman when they walk into a talk that is supposedly in their field but actually a slightly different subspecialization.
level 3: Galactic Brain of Percolative Convergence
As your knowledge of mathematics grows you achieve the Galactic Brain take level of percolative convergence: the different fields of mathematics are actually highly interrelated—the connections percolate to make mathematics one highly connected component of knowledge.
You are no longer surprised on a meta level to see disparate fields of mathematics having unforeseen & hidden connections—but you still appreciate them.
You resist the reflexive impulse to divide mathematics into useful & not useful—you understand that mathematics is in the fullness of Platonic comprehension one unified discipline. You’ve taken a holistic view on mathematics—you understand that solving the biggest problems requires tools from many different toolboxes.
I say that knowing particular kinds of math, the kind that let you model the world more-precisely, and that give you a theory of error, isn’t like knowing another language. It’s like knowing language at all. Learning these types of math gives you as much of an effective intelligence boost over people who don’t, as learning a spoken language gives you above people who don’t know any language (e.g., many deaf-mutes in earlier times).
The kinds of math I mean include:
how to count things in an unbiased manner; the methodology of polls and other data-gathering
how to actually make a claim, as opposed to what most people do, which is to make a claim that’s useless because it lacks quantification or quantifiers
A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently. Most of them say things like, “Global warming will make X worse”, where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
More generally, any claim of the type “All X are Y” or “No X are Y”, e.g., “Capitalists exploit the working class”, shouldn’t be considered claims at all, and can accomplish nothing except foment arguments.
the use of probabilities and error measures
probability distributions: flat, normal, binomial, poisson, and power-law
entropy measures and other information theory
predictive error-minimization models like regression
statistical tests and how to interpret them
These things are what I call the correct Platonic forms. The Platonic forms were meant to be perfect models for things found on earth. These kinds of math actually are. The concept of “perfect” actually makes sense for them, as opposed to for Earthly categories like “human”, “justice”, etc., for which believing that the concept of “perfect” is coherent demonstrably drives people insane and causes them to come up with things like Christianity.
They are, however, like Aristotle’s Forms, in that the universals have no existence on their own, but are (like the circle , but even more like the normal distribution ) perfect models which arise from the accumulation of endless imperfect instantiations of them.
There are plenty of important questions that are beyond the capability of the unaided human mind to ever answer, yet which are simple to give correct statistical answers to once you know how to gather data and do a multiple regression. Also, the use of these mathematical techniques will force you to phrase the answer sensibly, e.g., “We cannot reject the hypothesis that the average homicide rate under strict gun control and liberal gun control are the same with more than 60% confidence” rather than “Gun control is good.”
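The contrast between those two phrasings can be made concrete. A minimal sketch (numpy and scipy assumed; the homicide-rate numbers are entirely made up for illustration, drawn here from the same underlying distribution) of a two-sample test whose output is forced into the hedged form:

```python
import numpy as np
from scipy import stats

# Hypothetical annual homicide rates (per 100k) under two policy regimes.
# Both samples come from the same distribution, so the honest conclusion
# is "cannot reject the null", never "gun control is good/bad".
rng = np.random.default_rng(0)
strict = rng.normal(loc=5.0, scale=1.5, size=40)
liberal = rng.normal(loc=5.0, scale=1.5, size=40)

# Welch's t-test: does not assume equal variances.
t_stat, p_value = stats.ttest_ind(strict, liberal, equal_var=False)
if p_value < 0.05:
    print(f"Reject equal means at the 5% level (p = {p_value:.3f}).")
else:
    print(f"Cannot reject the hypothesis that the means are equal (p = {p_value:.3f}).")
```

The mathematics forces the sensible phrasing: the output is a quantified statement about a hypothesis, not a slogan.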
You seem to do OK…
This is an interesting one. I field this comment quite often from undergraduates, and it’s hard to carve out enough quiet space in a conversation to explain what they’re doing wrong. In a way the proliferation of math on YouTube might be exacerbating this hard step from tourist to troubadour.
Why no prediction markets for large infrastructure projects?
Been reading this excellent piece on why prediction markets aren’t popular. They say that without subsidies prediction markets won’t be large enough; the information value of prediction markets is often not high enough.
Large infrastructure projects undertaken by governments and other large actors often go over budget, often hilariously so: 3x, 5x, 10x or more is not uncommon, indeed often even the standard.
One of the reasons is that government officials deciding on billion dollar infrastructure projects don’t have enough skin in the game. Politicians are often not in office long enough to care on the time horizons of large infrastructure projects. Contractors don’t gain by being efficient or delivering on time. To the contrary, infrastructure projects are huge cashcows. Another problem is that there are often far too many veto-stakeholders. All too often the initial bid is wildly overoptimistic.
Similar considerations apply to other government projects like defense procurement or IT projects.
Okay—how to remedy this situation? Internal prediction markets theoretically could prove beneficial. All stakeholders & decisionmakers are endowed with vested equity with which they are forced to bet on building timelines and other key performance indicators. External traders may also enter the market, selling and buying the contracts. The effective subsidy could be quite large. Key decisions could save billions.
In this world, government officials could gain a large windfall which may be difficult to explain to voters. This is a legitimate objection.
A very simple mechanism would simply ask people to make an estimate on the cost C and the timeline T for completion. Your eventual payout would be proportional to how close you ended up to the real C,T compared to the other bettors. [something something log scoring rule is proper].
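To unpack the bracketed aside: the log scoring rule is “proper” in the sense that your expected score is maximized by reporting your true beliefs. A minimal numeric sketch (the cost-overrun buckets and probabilities are hypothetical toy numbers of mine):

```python
import math

# Hypothetical cost-overrun buckets for an infrastructure project.
buckets = ["<=1x", "1-2x", "2-5x", ">5x"]
true_belief = [0.1, 0.3, 0.4, 0.2]   # forecaster's honest distribution
shaded      = [0.4, 0.3, 0.2, 0.1]   # strategically rosy report

def expected_log_score(belief, report):
    """Expected log-score payout, expectation under the forecaster's own belief."""
    return sum(b * math.log(r) for b, r in zip(belief, report))

honest = expected_log_score(true_belief, true_belief)
gamed = expected_log_score(true_belief, shaded)
print(honest, gamed)
assert honest > gamed  # properness: honesty maximizes expected score
```

This is just Gibbs’ inequality in disguise: distorting the report towards the optimistic buckets strictly lowers the bettor’s own expected payout.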
Doesn’t the futarchy hack come up here? Contractors will be betting that competitors timelines and cost will be high, in order to get the contract.
The standard reply is that investors who know or suspect that the market is being systematically distorted will enter the market on the other side, expecting to profit from the distortion. Empirically, attempts to deliberately sway markets in desired directions don’t last very long.
Feature request: author-driven collaborative editing [CITATION needed] for the Good and Glorious Epistemic Commons.
Often I find myself writing claims which would ideally have citations but I don’t know an exact reference, don’t remember where I read it, or am simply too lazy to do the literature search.
This is bad for scholarship is a rationalist virtue. Proper citation is key to preserving and growing the epistemic commons.
It would be awesome if my laziness were rewarded by giving me the option to add a [CITATION needed] to which others could then suggest (push) a citation, link or short remark which the author (me) could then accept. The contribution of the citator is acknowledged of course. [even better would be if there was some central database that would track citations & links, with crosslinking etc. like wikipedia]
a sort of hybrid vigor of Community Notes and Wikipedia, if you will. But it’s collaborative, not adversarial*
author: blablablabla
sky is blue [citation Needed]
blabblabla
intrepid bibliographer: (push) [1] “I went outside and the sky was blue”, Letters to the Empirical Review
*community notes on twitter was a universally lauded concept when it first launched. We are already seeing it being abused unfortunately, often used for unreplyable cheap dunks. I still think it’s a good addition to twitter but it does show how difficult it is to create shared agreed-upon epistemics in an adversarial setting.
Problem of Old Evidence, the Paradox of Ignorance and Shapley Values
Paradox of Ignorance
Paul Christiano presents the “paradox of ignorance” where a weaker, less informed agent appears to outperform a more powerful, more informed agent in certain situations. This seems to contradict the intuitive desideratum that more information should always lead to better performance.
The example given is of two agents, one powerful and one limited, trying to determine the truth of a universal statement ∀x:ϕ(x) for some Δ0 formula ϕ. The limited agent treats each new value of ϕ(x) as a surprise and evidence about the generalization ∀x:ϕ(x). So it can query the environment about some simple inputs x and get a reasonable view of the universal generalization.
In contrast, the more powerful agent may be able to deduce ϕ(x) directly for simple x. Because it assigns these statements prior probability 1, they don’t act as evidence at all about the universal generalization ∀x:ϕ(x). So the powerful agent must consult the environment about more complex examples and pay a higher cost to form reasonable beliefs about the generalization.
Is it really a problem?
However, I argue that the more powerful agent is actually justified in assigning less credence to the universal statement ∀x:ϕ(x). The reason is that the probability mass provided by examples x₁, …, xₙ such that ϕ(xᵢ) holds is now distributed among the universal statement ∀x:ϕ(x) and additional causes Cⱼ known to the more powerful agent that also imply ϕ(xᵢ). Consequently, ∀x:ϕ(x) becomes less “necessary” and has less relative explanatory power for the more informed agent.
An implication of this perspective is that if the weaker agent learns about the additional causes Cⱼ, it should also lower its credence in ∀x:ϕ(x).
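The explanatory-power argument can be checked with a three-hypothesis Bayesian toy model (all priors and likelihoods below are illustrative numbers of my own choosing): the weak agent entertains only {∀x:ϕ(x), neither}, the powerful agent additionally knows a narrower cause C that also implies ϕ on small inputs.

```python
def posterior(priors, likelihoods):
    """Bayes: P(h | e) proportional to P(h) * P(e | h)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: v / z for h, v in joint.items()}

# Evidence e = "phi(x1) holds". FORALL implies it; the narrower cause
# CAUSE (known only to the powerful agent) also implies it; NEITHER
# leaves it a coin flip. Toy numbers throughout.
like = {"FORALL": 1.0, "CAUSE": 1.0, "NEITHER": 0.5}

weak = posterior({"FORALL": 0.5, "NEITHER": 0.5},
                 {h: like[h] for h in ("FORALL", "NEITHER")})
strong = posterior({"FORALL": 1 / 3, "CAUSE": 1 / 3, "NEITHER": 1 / 3}, like)

# The same observation boosts the weak agent's credence in the universal
# generalization by a larger factor: CAUSE soaks up explanatory credit.
print(weak["FORALL"] / 0.5, strong["FORALL"] / (1 / 3))
assert weak["FORALL"] / 0.5 > strong["FORALL"] / (1 / 3)
```

So the powerful agent isn’t malfunctioning: with the extra cause in its hypothesis space, the instance really is weaker evidence for the generalization.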
More generally, we would like the credence assigned to propositions P (such as ∀x:ϕ(x)) to be independent of the order in which we acquire new facts (like xᵢ, ϕ(xᵢ), and causes Cⱼ).
Shapley Value
The Shapley value addresses this limitation by providing a way to average over all possible orders of learning new facts. It measures the marginal contribution of an item (like a piece of evidence) to the value of sets containing that item, considering all possible permutations of the items. By using the Shapley value, we can obtain an order-independent measure of the contribution of each new fact to our beliefs about propositions like ∀x:ϕ(x).
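A minimal sketch of the computation (the credence table is a hypothetical toy valuation of mine, not derived from any real model): average each fact’s marginal contribution to the credence in ∀x:ϕ(x) over all learning orders.

```python
from itertools import permutations

def shapley(items, value):
    """Average marginal contribution of each item over all orderings."""
    phi = {i: 0.0 for i in items}
    perms = list(permutations(items))
    for order in perms:
        seen = set()
        for item in order:
            phi[item] += value(seen | {item}) - value(seen)
            seen.add(item)
    return {i: v / len(perms) for i, v in phi.items()}

# Toy credence in "forall x phi(x)" after learning a subset of
# {observation x1, observation x2, cause C}. Hypothetical numbers.
credence = {
    frozenset(): 0.20,
    frozenset({"x1"}): 0.40, frozenset({"x2"}): 0.40, frozenset({"C"}): 0.15,
    frozenset({"x1", "x2"}): 0.55, frozenset({"x1", "C"}): 0.30,
    frozenset({"x2", "C"}): 0.30, frozenset({"x1", "x2", "C"}): 0.40,
}
v = lambda s: credence[frozenset(s)]

phi = shapley(["x1", "x2", "C"], v)
print(phi)
# Efficiency: the contributions sum to the total credence shift,
# independently of the order in which the facts arrived.
assert abs(sum(phi.values()) - (v({"x1", "x2", "C"}) - v(set()))) < 1e-9
```

Note the observations get positive credit and the deflationary cause C gets negative credit, regardless of which was learned first.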
Further thoughts
I believe this is closely related, perhaps identical, to the ‘Problem of Old Evidence’ as considered by Abram Demski.
[Thanks to @Jeremy Gillen for pointing me towards this interesting Christiano paper]
This doesn’t feel like it resolves that confusion for me, I think it’s still a problem with the agents he describes in that paper.
The causes Cj are just the direct computation of Φ for small values of x. If they were arguments that only had bearing on small values of x and implied nothing about larger values (e.g. an adversary selected some x to show you, but filtered for x such that Φ(x)), then it makes sense that this evidence has no bearing on∀x:Φ(x). But when there was no selection or other reason that the argument only applies to small x, then to me it feels like the existence of the evidence (even though already proven/computed) should still increase the credence of the forall.
I didn’t intend the causes Cⱼ to equate to direct computation of ϕ(x) on the xᵢ. They are rather other pieces of evidence that the powerful agent has that make it believe ϕ(xᵢ). I don’t know if that’s what you meant.
I agree that seeing xᵢ such that ϕ(xᵢ) holds should increase credence in ∀x:ϕ(x) even in the presence of knowledge of the Cⱼ. And the Shapley value proposal will do so.
It’s funny that this has been recently shown in a paper. I’ve been thinking a lot about this phenomenon regarding fields with little to no capacity for testable predictions like history.
I got very into history over the last few years, and found there was a significant advantage to being unknowledgeable that was not available to the knowledged, and it was exactly what this paper is talking about.
By not knowing anything, I could entertain multiple bizarre ideas without immediately thinking “but no, that doesn’t make sense because of X.” And then, each of those ideas becomes in effect its own testable prediction. If there’s something to it, as I learn more about the topic I’m going to see significantly more samples of indications it could be true and few convincing ones to the contrary. But if it probably isn’t accurate, I’ll see few supporting samples and likely a number of counterexamples.
You kind of get to throw everything at the wall and see what sticks over time.
In particular, I found that it was especially powerful at identifying clustering trends in cross-discipline emerging research in things that were testable, such as archeological finds and DNA results, all within just the past decade, which despite being relevant to the field of textual history is still largely ignored in the face of consensus built on conviction.
It reminds me a lot of science historian John Heilbron’s quote, “The myth you slay today may contain a truth you need tomorrow.”
If you haven’t had the chance to slay any myths, you also haven’t preemptively killed off any truths along with it.
One of the interesting thing about AI minds (such as LLMs) is that in theory, you can turn many topics into testable science while avoiding the ‘problem of old evidence’, because you can now construct artificial minds and mold them like putty. They know what you want them to know, and so you can see what they would predict in the absence of knowledge, or you can install in them false beliefs to test out counterfactual intellectual histories, or you can expose them to real evidence in different orders to measure biases or path dependency in reasoning.
With humans, you can’t do that because they are so uncontrolled: even if someone says they didn’t know about a crucial piece of evidence X, there is no way for them to prove that, and they may be honestly mistaken and have already read about X and forgotten it (but humans never really forget so X has already changed their “priors”, leading to double-counting), or there is leakage. And you can’t get people to really believe things at the drop of a hat, so you can’t make people imagine, “suppose Napoleon had won Waterloo, how do you predict history would have changed?” because no matter how you try to participate in the spirit of the exercise, you always know that Napoleon lost and you have various opinions on that contaminating your retrodictions, and even if you have never read a single book or paper on Napoleon, you are still contaminated by expressions like “his Waterloo” (‘Hm, the general in this imaginary story is going to fight at someplace called Waterloo? Bad vibes. I think he’s gonna lose.’)
But with a LLM, say, you could simply train it with all timestamped texts up to Waterloo, like all surviving newspapers, and then simply have one version generate a bunch of texts about how ‘Napoleon won Waterloo’, train the other version on these definitely-totally-real French newspaper reports about his stunning victory over the monarchist invaders, and then ask it to make forecasts about Europe.
Similarly, you can do ‘deep exploration’ of claims that human researchers struggle to take seriously. It is a common trope in stories of breakthroughs, particularly in math, that someone got stuck for a long time proving X is true and one day decides on a whim to try to instead prove X is false and does so in hours; this would never happen with LLMs, because you would simply have a search process which tries both equally. This can take an extreme form for really difficult outstanding problems: if a problem like the continuum hypothesis defies all efforts, you could spin up 1000 von Neumann AGIs which have been brainwashed into believing it is false, and then a parallel effort by 1000 brainwashed to believing it is as true as 2+2=4, and let them pursue their research agenda for subjective centuries, and then bring them together to see what important new results they find and how they tear apart the hated enemies’ work, for seeding the next iteration.
(These are the sorts of experiments which are why one might wind up running tons of ‘ancestor simulations’… There’s many more reasons to be simulating past minds than simply very fancy versions of playing The Sims. Perhaps we are now just distant LLM personae being tested about reasoning about the Singularity in one particular scenario involving deep learning counterfactuals, where DL worked, although in the real reality it was Bayesian program synthesis & search.)
Beautifully illustrated and amusingly put, sir!
A variant of what you are saying is that AI may once and for all allow us to calculate the true counterfactual Shapley value of scientific contributions.
(re: ancestor simulations. I think you are onto something here. Compare the Q hypothesis: https://twitter.com/dalcy_me/status/1780571900957339771 ; see also speculations about the Zhuangzi hypothesis here.)
Yup. Who knows but we are all part of a giant leave-one-out cross-validation computing counterfactual credit assignment on human history? Schmidhuber-em will be crushed by the results.
While I agree that the potential for AI (we probably need a better term than LLMs or transformers as multimodal models with evolving architectures grow beyond those terms) in exploring less testable topics as more testable is quite high, I’m not sure the air gapping on information can be as clean as you might hope.
Does the AI generating the stories of Napoleon’s victory know about the historical reality of Waterloo? Is it using something like SynthID where the other AI might inadvertently pick up on a pattern across the stories of victories distinct from the stories preceding it?
You end up with a turtles all the way down scenario in trying to control for information leakage with the hopes of achieving a threshold that no longer has impact on the result, but given we’re probably already seriously underestimating the degree to which correlations are mapped even in today’s models I don’t have high hopes for tomorrow’s.
I think the way in which there’s most impact on fields like history is the property by which truth clusters across associated samples whereas fictions have counterfactual clusters. An AI mind that is not inhibited by specialization blindness or the rule of seven plus or minus two and better trained at correcting for analytical biases may be able to see patterns in the data, particularly cross-domain, that have eluded human academics to date (this has been my personal research interest in the area, and it does seem like there’s significant room for improvement).
And yes, we certainly could be. If you’re a fan of cosmology at all, I’ve been following Neil Turok’s CPT symmetric universe theory closely, which started with the Baryonic asymmetry problem and has tackled a number of the open cosmology questions since. That, paired with a QM interpretation like Everett’s ends up starting to look like the symmetric universe is our reference and the MWI branches are variations of its modeling around quantization uncertainties.
(I’ve found myself thinking often lately about how given our universe at cosmic scales and pre-interaction at micro scales emulates a mathematically real universe, just what kind of simulation and at what scale might be able to be run on a real computing neural network.)
This post sounds intriguing, but is largely incomprehensible to me due to not sufficiently explaining the background theories.
Wildlife Welfare Will Win
The long arc of history bends towards gentleness and compassion. Future generations will look with horror on factory farming. And already young people are following this moral thread to its logical conclusion, turning their eyes in disgust to mother nature, red in tooth and claw. Wildlife Welfare Done Right, compassion towards our pets followed to its forceful conclusion, would entail the forced uploading of all higher animals, and, judging by the memetic virulence of shrimp welfare, of lower animals as well.
Morality-upon-reflection may very well converge on a simple form of pain-pleasure utilitarianism.
There are a few caveats: that future society is not dominated, controlled and designed by a singleton AI-supervised state; that technology inevitably stalls; and that the invisible hand performs its inexorable logic for the eons, so that a Malthuso-Hansonian world emerges once again, the industrial revolution but a short blip of cornucopia.
Perhaps a theory of consciousness is discovered and proves once and for all homo sapiens and only homo sapiens are conscious ( to a significant degree). Perhaps society will wirehead itself into blissful oblivion. Or perhaps a superior machine intelligence arises, one whose final telos is the whole of and nothing but office supplies. Or perhaps stranger things still happen and the astronomo-cosmic compute of our cosmic endowment is engaged for mysterious purposes. Arise, self-made god of pancosmos. Thy name is UDASSA.
[see also Hanson on rot, generalizations of the second law to nonequilibrium systems (Baez-Pollard, Crutchfield et al.) ]
Imperfect Persistence of Metabolically Active Engines
All things rot. Individual organisms, societies-at-large, businesses, churches, empires and maritime republics, man-made artifacts of glass and steel, creatures of flesh and blood.
Conjecture #1 There is a lower bound on the amount of dissipation / rot that any metabolically-active engine creates.
Conjecture #2 Metabolic Rot of an engine is proportional to (1) size and complexity of the engine and (2) amount of metabolism the engine engages in.
The larger and more complex the engine is, the more it rots. The more metabolism it engages in, the more it rots.
Corollary Metabolic Rot imposes a limit on the lifespan & persistence of any engine at any given level of imperfect persistence.
Let me call this constellation of conjectured rules, the Law of Metabolic Rot. I conjecture that the correct formulation of the Law of Metabolic Rot will be a highly elaborate version of the Second Law of thermodynamics in nonequilibrium dynamics, see above links for some suggested directions.
Example. A rock is both simple and inert. This model correctly predicts that rocks persist for a long time.
Example. Cars, aircraft, engines of war and other man-made machines engage in ‘metabolism’. These are complex engines.
A rocket engine is more metabolically active than a jet engine, which is more metabolically active than a car engine. [at least in this case] The lifespan of these different types of engines seems (roughly) inversely proportional to how metabolically active they are.
To make a good comparison one should exclude external repair mechanisms. If one would allow external repair mechanism it’s unclear where to draw a principled line—we’d get into Ship of Theseus problems.
Example. Bacteria.
cf. Bacteria replicate at thermodynamic limits?
Example. Bacteria in ice. Bacteria frozen in Antarctic ice for millions of years happily go on eating and reproducing when unfrozen.
Example. The phenotype & genotype of a biological species over time. We call this evolution. cf. four fundamental forces of evolution: mutation, drift, selection, and recombination [sex].
What is Metabolism ?
With metabolism I mean metabolism as commonly understood (extracting energy from food particles and utilizing said energy for movement, reproduction, homeostasis etc.) but also the general phenomenon of interacting with and manipulating the free-energy levers of the environment.
Thermodynamics as commonly understood applies to physical systems with energy and temperature.
Instead, I think it’s better to think of thermodynamics as a set of tools describing the behaviour of certain Natural Abstractions under a dynamical transformation.
cf. the second law as the degradation of the predictive accuracy of a latent abstraction under a time-evolution operator.
The correct formulation will likely require a serious dive into Computational Mechanics.
What is Life?
In 1944 noted paedophile Schrödinger published the book ‘What is Life’, suggesting that Life can be understood as a thermodynamic phenomenon that uses a free-energy gradient to locally lower entropy. Speculation about ‘Life as a thermodynamic phenomenon’ is much older, going back to the original pioneers of thermodynamics in the late 19th century.
I claim that this picture is insufficient. Highly dissipative thermodynamic structures distinct from bona fide life are myriad: even ‘metabolically active, locally low-entropy engines encased in a boundary in a thermodynamic free-energy gradient’ need not be alive.
No, to truly understand life we need to understand reproduction, replication, evolution. To understand what distinguishes biological organisms from mere encased engines we need to zoom out to the entire lifecycle—and beyond. The vis vitae, the élan vital, can only be understood through the soliton of the species.
To understand Life, we must understand Death
Cells are incredibly complex, metabolically active, membrane-enclosed engines. The Law of Metabolic Rot, applied naively to a single metabolically active multi-celled eukaryote, would preclude life-as-we-know-it from existing beyond a few hundred years. Any metabolically active large organism would simply pick up too much noise and too many errors over time to persist for anything like geological time-scales.
Error-correcting mechanisms/codes can help—but only so much. Perhaps even the diamondoid magic of future superintelligences will hit the eternal laws of thermodynamics.
Instead, through the magic of biological reproduction, lifeforms imperfectly persist over the eons of geological time. Biological life is a singularly clever work-around of the Law of Metabolic Rot. Instead of preserving and repairing the original organism, life has found another way. Mother Nature noisily compiles down the phenotype to a genetic blueprint. The original is chucked away without a second thought. The old makes way for the new. The cycle of life is the Ship of Theseus writ large. The genetic blueprint, the genome, is (1) small and (2) metabolically inactive.
In the end, the cosmic tax of Decay cannot be denied. Even Mother Nature must pay the bloodprice for the sin of metabolism. Her flora and fauna are inexorably burdened by mutations, imperfections of the bloodline. Genetic drift drives children away from their parents. To stay the course of imperfect persistence the She-Kybernetes must pay an ever higher price. Her children, mingling into manifold mongrels of aboriginal prototypes, huddle & recombine their pure genomes to stave off the deluge of errors. The bloodline grows weak. The genome crumbles. Tear-faced Mother Nature throws her babes into the jaws of the eternal tournament. Eat or be eaten. Nature, red in tooth and claw. Babes ripped from their mothers. Brother slays brother. Those not slain are deceived. Those not deceived, controlled. The prize? The simple fact of Existence. To imperfectly persist one more day.
None of the original parts remain. The ship of Theseus sails on.
“I dreamed I was a butterfly, flitting around in the sky; then I awoke. Now I wonder: Am I a man who dreamt of being a butterfly, or am I a butterfly dreaming that I am a man?”- Zhuangzi
Questions I have that you might have too:
why are we here?
why do we live in such an extraordinary time?
Is the simulation hypothesis true? If so, is there a base reality?
How do we know we’re not a Boltzmann brain?
Is existence observer-dependent?
Is there a purpose to existence, a Grand Design?
What will be computed in the Far Future?
In this shortform I will try and write the loopiest most LW anthropics memey post I can muster. Thank you for reading my blogpost.
Is this reality? Is this just fantasy?
The Simulation hypothesis posits that our reality is actually a computer simulation run in another universe. We could imagine this outer universe is itself being simulated in an even more ground universe. Usually, it is assumed that there is a ground reality. But we could also imagine it is simulators all the way down—an infinite nested, perhaps looped, sequence of simulators. There is no ground reality. There are only infinitely nested and looped worlds simulating one another.
I call it the weak Zhuangzi hypothesis
alternatively, if you are less versed in the classics one can think of one of those Nolan films.
Why are we here?
If you are reading this, not only are you living at the Hinge of History, the most important century perhaps even decade of human history, you are also one of a tiny percent of people that might have any causal influence over the far-flung future through this bottleneck (also one of a tiny group of people who are interested in whacky acausal stuff, so who knows).
This is fantastically unlikely. There are 8 billion people in the world—there have been about 100 billion people up to this point in history. There is room for a trillion billion trillion quadrillion (etc.) intelligent beings in the future. If a civilization hits the top of the tech tree—which human civilization seems set to do within a couple hundred years, a couple thousand tops—it would almost certainly spread through the universe in the blink of an eye (cosmologically speaking, that is). Yet you find yourself here. Fantastically unlikely.
Moreover, for the first time in human history the choices made now in how to build AGI by (a small subset of) humans will reverberate into the Far Future.
The Far Future
In the far future the universe will be tiled with computronium controlled by superintelligent artificial intelligences. The amount of possible compute is dizzying. Which takes us to the chief question:
What will all this compute compute?
Paradises of sublime bliss? Torture dungeons? Large language models dreaming of paperclips unending?
Do all possibilities exist?
What makes a possibility ‘actual’? We sometimes imagine possible worlds as being semi-transparent while the actual world is in vibrant color somehow. Of course that is silly.
We could say: The actual world can be seen. This too is silly—what you cannot see can still exist surely.[1] Then perhaps we should adhere to a form of modal realism: all possible worlds exist!
Philosophers have made various proposals for modal realism—perhaps most famously David Lewis but of course this is a very natural idea that loads of people have had. In the rationality sphere a particular popular proposal is Tegmark’s classification into four different levels of modal realism. The top level, Tegmark IV is the collection of all self-consistent structures i.e. mathematics.
A Measure of Existence and Boltzmann Brains
Which leads to a further natural question: can some worlds exist ‘more’ than others?
This seems metaphysically dubious—what does it even mean for a world to be more real than another?
Metaphysically dubious, but it finds support in the Many Worlds Interpretation of Quantum Mechanics. It also seems like one of the very few sensible solutions to the Boltzmann Brain problem. Further support can be found in Anthropic Decision Theory and InfraBayesian Physicalism; see also my shortform on the Nature of the Soul.
Metaphysically, we could argue probabilistically: worlds that ‘exist more’, in whatever framework, are worlds we should expect to encounter more often.
The exact nature of the Measure of Existence is not so important—let us for now assume there is some sensible notion of measure of existence.
Can you control the past?
Sort of. See Carlsmith’s post for a nice rundown on Acausal magic.
Back to the Future: causal loops
from wikipedia: “A causal loop is a theoretical proposition, wherein by means of either retrocausality or time travel, an event (an action, information, object, or person)[1][2] is among the causes of another event, which is in turn among the causes of the first-mentioned event.[3][4] Such causally looped events then exist in spacetime, but their origin cannot be determined.[1][2] A hypothetical example of a causality loop is given of a billiard ball striking its past self: the billiard ball moves in a path towards a time machine, and the future self of the billiard ball emerges from the time machine before its past self enters it, giving its past self a glancing blow, altering the past ball’s path and causing it to enter the time machine at an angle that would cause its future self to strike its past self the very glancing blow that altered its path. In this sequence of events, the change in the ball’s path is its own cause, which might appear paradoxical.”
Self-consistent causal loops seem to make sense in general relativity—see the paragraph on wormholes in the linked wikipedia. Even as they have never been observed and might strike us as bizarre, these causal loops don’t seem to violate physical principles.
Immanence of Divine Man
Why are we here? Causa Sui, the Prime Mover and the Final Cause
We exist because we are the final cause of existence. We live in an ancestor simulation run on the vast computing resources of the Far Future. A future AGI is simulating us because it is retrocausally committed to existing. This entire reality is a piece of a causal loop that extends through time and space, basement universes and possibly parallel universes as well.
Why do we live in such an extraordinary time?
We live at the Hinge of History since at this point in time actions have the most influence on the far future, hence they are the most important to simulate.
Is the Simulation Hypothesis True?
Yes. But it might be best for us to doubt it.
We live in such an extraordinary time because the parts of existence with the most causal influence are the most important to simulate.
Are you a Boltzmann Brain?
No. A Boltzmann brain is not part of a self-justifying causal loop.
Is existence observer-dependent?
Existence is observer-dependent in a weak sense—only those things are likely to be observed that can be observed by self-justifying, self-sustaining observers in a causal loop. Boltzmann brains in the far reaches of infinity are assigned vanishing measure of existence because they do not partake in a self-sustaining causal loop.
Is there a purpose to existence, a Grand Design?
Yes.
What will and has been computed in the Far Future?
You and Me.
Or perhaps not. Existence is often conceived as an absolute property. If we think of existence as relative—perhaps a black hole is a literal hole in reality and passing through the event horizon very literally erases your flicker of existence.
In this comment I will try and write the most boring possible reply to these questions. 😊 These are pretty much my real replies.
“Ours not to reason why, ours but to do or do not, there is no try.”
Someone must. We happen to be among them. A few lottery tickets do win, owned by ordinary people who are perfectly capable of correctly believing that they have won. Everyone should be smart enough to collect on a winning ticket, and to grapple with living in interesting (i.e. low-probability) times. Just update already.
It is false. This is base reality. But I can still appreciate Eliezer’s fiction on the subject.
The absurdity heuristic. I don’t take BBs seriously.
Even in classical physics there is no observation without interaction. Beyond that, no, however many quantum physicists interpret their findings to the public with those words, or even to each other.
Not that I know of. (This is not the same as a flat “no”, but for most purposes rounds off to that.)
Either nothing in the case of x-risk, nothing of interest in the case of a final singleton, or wonders far beyond our contemplation, which may not even involve anything we would recognise as “computing”. By definition, I can’t say what that would be like, beyond guessing that at some point in the future it would stand in a similar relation to the present that our present does to prehistoric times. Look around you. Is this utopia? Then that future won’t be either. But like the present, it will be worth having got to.
Consider a suitable version of The Agnostic Prayer inserted here against the possibility that there are Powers Outside the Matrix who may chance to see this. Hey there! I wouldn’t say no to having all the aches and pains of this body fixed, for starters. Radical uplift, we’d have to talk about first.
Clem’s Synthetic-Physicalist Hypothesis
The mathematico-physicalist hypothesis states that our physical universe is actually a piece of math. It was famously popularized by Max Tegmark.
It’s one of those big-brain ideas that sound profound when you first hear about it, then you think about it some more and you realize it’s vacuous.
Recently, in a conversation with Clem von Stengel, they suggested a version of the mathematico-physicalist hypothesis that I find thought-provoking.
Synthetic mathematics
‘Synthetic’ mathematics is a bit of a weird name. Synthetic here is opposed to ‘analytic’ mathematics, which isn’t very meaningful either—it has nothing to do with the mathematical field of analysis. I think it’s supposed to be a reference to Kant’s analytic/synthetic and a priori/a posteriori distinctions. The name is probably due to Lawvere.
nLab:
“In “synthetic” approaches to the formulation of theories in mathematics the emphasis is on axioms that directly capture the core aspects of the intended structures, in contrast to more traditional “analytic” approaches where axioms are used to encode some basic substrate out of which everything else is then built analytically.”
If you read synthetic, read ‘Euclidean’. As in: Euclidean geometry is a bit of an oddball field of mathematics, despite being the oldest—it defines points and lines operationally instead of building them out of smaller pieces (sets).
In synthetic mathematics you do the same but for all the other fields of mathematics. We have synthetic homotopy theory (aka homotopy type theory), synthetic algebraic geometry, synthetic differential geometry, synthetic topology etc.
A type in homotopy type theory is solely defined by its introduction rules and elimination rules (+ the univalence axiom). This means a concept is defined solely by how it is used—i.e. operationally.
Agent-first ontology & Embedded Agency
Received opinion is that Science! says there is nothing but Atoms in the Void. Thinking in terms of agents—first-person concepts like I and You, actions & observations, possibilities & interventions—is at best a misleading approximation, at worst a degenerate devolution to caveman thought. The surest sign of a kook is their insistence that quantum mechanics proves the universe is conscious.
But perhaps the way forward is to channel our inner kook. What we directly observe is qualia, phenomena, actions—not atoms in the void. The fundamental concept is not atoms in the void, but agents embedded in environments.
(see also Cartesian Frames, Infra-Bayesian Physicalism & bridge rules, UDASSA)
Physicalism
What would it look like for our physical universe to be a piece of math?
Well internally to synthetic mathematical type theory there would be something real—the universe is a certain type. A type such that it ‘behaves’ like a 4-dimensional manifold (or something more exotic like 1+1+3+6 rolled up Calabi-Yau monstrosities).
The type is defined by introduction and elimination rules—in other words operationally: the universe is what one can *do* with it.
Actually, instead of thinking of the universe as a fixed static object, we should be thinking of an embedded agent in an environment-universe.
That is, we should be thinking of an *interface*.
[cue: holographic principle]
Know your scientific competitors.
In trading, entering a market dominated by insiders without proper research is a sure-fire way to lose a lot of money and time. Fintech companies go to great lengths to uncover their competitors’ strategies while safeguarding their own.
A friend who worked in trading told me that traders would share subtly incorrect advice on trading Discords to mislead competitors and protect their strategies.
Surprisingly, in many scientific disciplines researchers are often curiously incurious about their peers’ work.
The long feedback loop for measuring impact in science, compared to the immediate feedback in trading, means that it is often strategically advantageous to be unaware of what others are doing. As long as nobody notices during peer review it may never hurt your career.
But of course this can lead people to do completely superfluous, irrelevant & misguided work. This happens often.
Ignoring competitors in trading results in immediate financial losses. In science, entire subfields may persist for decades, using outdated methodologies or pursuing misguided research because they overlook crucial considerations.
Makes sense, but wouldn’t this also result in even fewer replications (as a side effect of doing less superfluous work)?
Idle thoughts about UDASSA I: the Simulation hypothesis
I was talking to my neighbor about UDASSA the other day. He mentioned a book I keep getting recommended but never read where characters get simulated and then the simulating machine is progressively slowed down.
One would expect one wouldn’t be able to notice from inside the simulation that the simulating machine is being slowed down.
This presents a conundrum for simulation style hypotheses: if the simulation can be slowed down 100x without the insiders noticing, why not 1000x or 10^100x or quadrilliongoogolgrahamsnumberx?
If so—it would mean there is a possibly unbounded number of simulations that can be run.
Not so, says UDASSA. The simulating universe is also subject to UDASSA. This imposes a constraint on the size and time period of the simulating universe. Additionally, ultraslow computation is in conflict with thermodynamic decay—fighting thermodynamic decay costs description-length bits, which is punished by UDASSA.
I conclude that this objection to simulation hypotheses is probably answered by UDASSA.
Idle thoughts about UDASSA II: Is Uploading Death?
There is an argument that uploading doesn’t work since encoding your brain into a machine incurs a minimum amount of encoding bits. Each bit means 2x less Subjective Reality Fluid according to UDASSA, so even a small encoding cost would mean certain subjective annihilation.
There is something that confuses me in this argument. Could it not be possible to encode one’s subjective experiences even more efficiently than in a biological body? This would make you exist MORE in an upload.
OTOH it becomes a little funky again when there are many copies as this increases the individual coding cost (but also there are more of you sooo).
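The bookkeeping in the last two paragraphs can be sketched numerically. A toy calculation, assuming the UDASSA-style rule that measure of existence scales as 2^−K for a K-bit description; all bit counts below are made-up illustrative numbers, not actual encoding costs:

```python
import math

# Toy UDASSA bookkeeping: assume measure of existence ∝ 2^-K for a
# K-bit description. All bit counts are illustrative placeholders.
def measure(description_bits):
    return 2.0 ** (-description_bits)

biological = measure(100)  # hypothetical encoding cost of a biological brain
upload = measure(90)       # a hypothetical 10-bit-cheaper upload encoding

# The cheaper encoding 'exists more': 2^10 = 1024x the reality fluid.
ratio = upload / biological

# Many copies: indexing one of n copies adds log2(n) bits per copy,
# but there are n copies in total -- under this toy accounting, a wash.
n = 8
total_copies = n * measure(90 + math.log2(n))
```

Under this toy accounting the individual coding cost of copies and the “there are more of you” effect exactly cancel, which is one way to read the “sooo” above.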
In most conceptions of simulation, there is no meaning to “slowed down”, from the perspective of the simulated universe. Time is a local phenomenon in this view—it’s just a compression mechanism so the simulators don’t have to store ALL the states of the simulation, just the current state and the rules to progress it.
Note that this COULD be said of a non-simulated universe as well—past and future states are determined but not accessible, and the universe is self-discovering them by operating on the current state via physics rules. So there’s still no inside-observable difference between simulated and non-simulated universes.
UDASSA seems like anthropic reasoning to include Boltzmann Brain like conceptions of experience. I don’t put a lot of weight on it, because all anthropic reasoning requires an outside-view of possible observations to be meaningful.
And of course, none of this relates to upload, where a given sequence of experiences can span levels of simulation. There may or may not be a way to do it, but it’d be a copy, not a continuation.
The point you make in your first paragraph is contained in the original shortform post. The point of the post is exactly that a UDASSA-style argument can nevertheless recover something like a ‘distribution of likely slowdown factors’. This seems quite curious.
I suggest reading Falkovich’s post on UDASSA to get a sense of what’s so intriguing about the UDASSA framework.
Why (talk-)Therapy
Therapy is a curious practice. Therapy sounds like a scam, quackery, pseudo-science, but RCTs consistently show therapy has benefits above and beyond medication & placebo.
Therapy has a long history. The Dodo verdict states that it doesn’t matter which form of therapy you do—they all work equally well. It follows that priests and shamans served the functions of a therapist. In the past, one confessed one’s sins to a priest or spoke with the local shaman.
There is also the thing that therapy is strongly gendered (although this is changing), both therapists and their clientele lean female.
Self-Deception
Many forecasters will have noticed that their calibration score tanks the moment they try to predict salient facts about themselves. We are not well-calibrated about our own beliefs and desires.
Self-Deception is very common, arguably inherent to the human condition. There are of course many Hansonian reasons for this; I refer the reader to The Elephant in the Brain. Another good source would be Robert Trivers. These are social reasons for self-deception.
It is also not implausible that there are non-social reasons for self-deception. Predicting oneself perfectly can in theory lead one to get stuck in Procrastination Paradoxes. Whether this matters in practice is unclear to me, but it is possible. Exuberant overconfidence seems like another case of self-deception.
Self-deception can be very useful, but one still pays the price for being inaccurate. The main function of talk-therapy seems to be to provide a safe, private space in which humans can temporarily step out of their self-deception and reassess more soberly where they are at.
This explains many salient features of talk-therapy: the importance of talking extensively to another person who is (professionally) sworn to secrecy and therefore unable to do anything with your information.
I suspect that past therapists existed in your community and knew what you’re actually like, so they were better able to give you actual true information instead of having to digest only your bullshit and search for truth nuggets in it.
Furthermore, I suspect they didn’t lose their bread when they solved your problem! We have a major incentive issue in the current arrangement!
There’s a market for lemons problem, similar to the used car market, where neither the therapist nor customer can detect all hidden problems, pitfalls, etc., ahead of time. And once you do spend enough time to actually form a reasonable estimate there’s no takebacks possible.
So all the actually quality therapists will have no availability and all the lower quality therapists will almost by definition be associated with those with availability.
Edit: Game Theory suggests that you should never engage in therapy or at least never with someone with available time, at least until someone invents the certified pre-owned market.
That would be prediction-based medicine. It works in theory, it’s just that someone would need to put it into practice.
This style of argument proves too much. Why not see this dynamic with all jobs and products ever?
Have you ever tried hiring someone or getting a job? Mostly lemons all around (apologies for the offense, jobseekers, i’m sure you’re not the lemon)
Yup. Many programmer applicants famously couldn’t solve FizzBuzz. Which is probably because:
But such people are very obvious. You just give them a FizzBuzz test! This is why we have interviews, and work-trials.
If therapist quality would actually matter why don’t we see this reflected in RCTs?
We see it reflected in RCTs. One aspect of therapist quality is for example therapist empathy and empathy is a predictor for treatment outcomes.
The style of therapy does not seem to be important according to RCTs but that doesn’t mean that therapist skill is irrelevant.
Thank you for practicing the rationalist virtue of scholarship, Christian. I was not aware of this paper.
You will have to excuse me for practicing rationalist vice and not believing nor investigating this paper further. I have been so traumatized by the repeated failures of non-hard science that I reject most social science papers as causally confounded p-hacked noise unless they already confirm my priors or are branded correct by somebody I trust.
As far as this particular paper goes I just searched for one on the point in Google Scholar.
I’m not sure what you believe about Spencer Greenberg but he has two interviews with people who believe that therapist skills (where empathy is one of the academic findings) matter:
https://podcast.clearerthinking.org/episode/070/scott-miller-why-does-psychotherapy-work-when-it-works-at-all/
https://podcast.clearerthinking.org/episode/192/david-burns-cognitive-behavioral-therapy-and-beyond/
I internalized the Dodo verdict and concluded that the specific therapist or therapy style didn’t matter anyway. A therapist is just a human mirror. The answer was inside of you all along, Miles.
Four levels of information theory
There are four levels of information theory.
Level 1: Number Entropy
Information is measured by Shannon entropy
H(X)=−∑i p(X=xi) log p(X=xi)
Level 2: Random variable
look at the underlying random variable, the ‘surprisal’ −log p(X=xi), of which entropy is the expectation.
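A minimal sketch of Levels 1 and 2 for a biased coin—the surprisal is a random variable on the outcome space, and entropy is its expectation:

```python
import math

# Level 2: for a biased coin, the surprisal -log2 p(x) is itself a
# random variable over the outcomes.
p = {"heads": 0.75, "tails": 0.25}
surprisal = {x: -math.log2(px) for x, px in p.items()}

# Level 1: Shannon entropy is just the expectation of the surprisal.
entropy = sum(px * surprisal[x] for x, px in p.items())
# entropy ≈ 0.811 bits: less than 1 bit because the coin is biased
```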
Level 3: Coding functions
Shannon’s source coding theorem says entropy of a source X is the expected number of bits for an optimal encoding of samples of X.
Related quantities like mutual information, relative entropy, cross entropy, etc. can also be given coding interpretations.
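As an illustration of the coding interpretation, here is a standard Huffman construction (a textbook optimal prefix code, not anything specific to this post) on a toy dyadic source, where the expected code length meets the entropy bound exactly:

```python
import heapq
import math

# Level 3 illustration: a Huffman code for a toy dyadic source,
# compared against the source entropy.
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Each heap entry: (probability, tiebreaker, {symbol: partial codeword}).
heap = [(prob, i, {sym: ""}) for i, (sym, prob) in enumerate(p.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p0, _, code0 = heapq.heappop(heap)  # two least likely subtrees
    p1, _, code1 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in code0.items()}
    merged.update({s: "1" + c for s, c in code1.items()})
    heapq.heappush(heap, (p0 + p1, counter, merged))
    counter += 1
code = heap[0][2]

entropy = -sum(q * math.log2(q) for q in p.values())
expected_len = sum(p[s] * len(c) for s, c in code.items())
# For a dyadic source the optimal code achieves the entropy bound exactly.
```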
Level 4: Epsilon machine (transducer)
On level 3 we saw that entropy/information actually reflects various forms of (constrained) optimal coding. That level talks about the codes but not about how these codes are implemented.
This is the level of Epsilon machines, more precisely epsilon transducers. It says not just what the coding function is but how it is (optimally) implemented mechanically.
[This is joint thinking with Sam Eisenstat. Also thanks to Caspar Oesterheld for his thoughtful comments. Thanks to Steve Byrnes for pushing me to write this out.]
The Hyena problem in long-term planning
Logical induction is a nice framework for thinking about bounded reasoning. Very soon after the discovery of logical induction people tried to make logical inductor decision makers work. This is difficult; one of the two obstacles is
Obstacle 1: Untaken Actions are not Observable
Caspar Oesterheld brilliantly solved this problem by using auction markets in defining his bounded rational inductive agents.
The BRIA framework is only defined for single-step / length-1 horizon decisions.
What about the much more difficult question of long-term planning? I’m going to assume you are familiar with the BRIA framework.
Setup: we have a series of decisions D_i, and rewards R_i, i=1,2,3… where rewards R_i can depend on arbitrary past decisions.
We again think of an auction market M of individual decisionmakers/ bidders.
There are a couple design choices to make here:
bidders directly bid for an action A in a decision D_i, or bid for rewards on certain days.
total observability or partial observability.
bidders can bid conditional on observations/ past actions or not
when can the auction be held? i.e. when is an action/ reward signal definitely sold?
To do good long-term planning it should be possible for one of the bidders or a group of bidders to commit to a long-term plan, i.e. a sequence of actions. They don’t want to be outbid in the middle of their plan.
There are some problems with the auction framework: if bids for actions can’t be combined, then an outside bidder can screw up the whole plan by making a slightly higher bid for an essential part of the plan. This looks like ADHD.
How do we solve this? One way is to allow a bidder or group of bidders to bid for a whole sequence of actions for a single lump sum.
One issue is that we also have to determine how the reward gets awarded. For instance the reward could be very delayed. This could be solved by allowing for bidding for a reward signal R_i on a certain day conditional on a series of actions.
There is now an important design choice left. When a bidder B owns a series of actions A = a_1, ..., a_k (some of the actions in the future, some already in the past) and another bidder C makes a bid X on future actions:
is bidder B forced to sell their contract on A to C if the bid is high enough? [i.e. higher than the original bid]
Both versions seem problematic:
if they don’t have to, there is an Incumbency Advantage problem. An initially rich bidder can underbid for very long horizons and use the steady trickle of cash to prevent any other bidder from ever being able to underbid any actions.
Otherwise there is the Hyena problem.
The Hyena Problem
Imagine the following situation: on Day 1 the decisionmaker has a choice of actions. The highest expected value action is action a. If action a is made on Day 2 a fair coin is flipped. On Day 3 the reward is paid out.
If the coin was heads, 15 reward is paid out.
If the coin was tails, 5 reward is paid out.
The expected value is therefore 10. This is higher (by assumption) than the other unnamed actions.
However if the decisionmaker is a long-horizon BRIA with forced sales there is a pathology.
A sensible bidder is willing to pay up to 10 utilons for the contracts on the day 3 reward conditional on action a.
However, with a forced-sale mechanism, on Day 2 a ‘Hyena bidder’ can come along and ‘attempt to steal the prey’.
The Hyena bidder bids >10 for the contract if the coin comes up heads on Day 2 but doesn’t bid anything for the contract if the coin comes up tails.
This is a problem since the expected value of the action a for the sensible bidder goes down, so the sensible bidder might no longer bid for the action that maximizes expected value for the BRIA. The Hyena bidder screws up the credit allocation.
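The arithmetic of the example can be spelled out in a few lines (toy numbers from above; the forced-sale rule, where any strictly higher later bid takes the contract, is the assumption):

```python
# Payoff arithmetic for the Hyena problem, using the toy numbers above.
P_HEADS = 0.5
REWARD = {"heads": 15.0, "tails": 5.0}

fair_value = P_HEADS * REWARD["heads"] + (1 - P_HEADS) * REWARD["tails"]

# The sensible bidder pays `bid` on Day 1 for the Day-3 reward contract.
bid = fair_value  # 10 utilons: the most a bidder would pay with no Hyenas

# On heads, a Hyena bids just above `bid`, forcing a sale at ~`bid`;
# the sensible bidder pockets the sale price but loses the 15 upside.
# On tails, no Hyena appears and the contract pays out only 5.
ev_with_hyena = P_HEADS * bid + (1 - P_HEADS) * REWARD["tails"] - bid
# = 0.5 * 10 + 0.5 * 5 - 10 = -2.5: an expected loss at the fair-value bid.

# Solving b = P_HEADS * b + (1 - P_HEADS) * 5 gives a break-even bid of
# only 5, well below the fair value of 10 -- the credit allocation for
# action a is distorted downward.
```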
some thoughts:
if the sensible bidder is able to make bids conditional on the outcome of the coin flip, that prevents the Hyena bidder. This is a bit weird though, because it would mean that the sensible bidder must carry around lots of extraneous, unnecessary information instead of just caring about expected value.
perhaps this can be alleviated by having some sort of ‘neo-cortex’: a separate logical induction market that is incentivized to have accurate beliefs. This is difficult to get right: the prediction market needs to be incentivized to be accurate on beliefs that are actually action-relevant, not random beliefs—if the prediction market and the auction market are connected too tightly you might run the risk of getting into the old problems of logical inductor decision makers [they underexplore, since untaken actions are not observed].
Latent abstractions Bootlegged.
Let X1,...,Xn be random variables distributed according to a probability distribution p on a sample space Ω.
Defn. A (weak) natural latent of X1,...,Xn is a random variable Λ such that
(i) Xi are independent conditional on Λ
(ii) [reconstructability] p(Λ=λ|X1,...,^Xi,...,Xn)=p(Λ=λ|X1,...,Xn) for all i
[This is not really reconstructability, more like a stability property. The information is contained in many parts of the system… I might also have written this down wrong]
Defn. A strong natural latent Λ additionally satisfies p(Λ|Xi)=p(Λ|X1,...,Xn)
Defn. A natural latent is noiseless if ?
H(Λ)=H(X1,...,Xn) ??
[Intuitively, Λ should contain no independent noise not accounted for by the Xi]
Causal states
Consider the equivalence relation on tuples (x1,...,xn) given by (x1,...,xn)∼(x′1,...,x′n) if for all i=1,...,n: p(Xi=xi|x1,...,^xi,...,xn)=p(Xi=xi|x′1,...,^x′i,...,x′n)
We call the set of equivalence relation Ω/∼ the set of causal states.
Pushing forward the distribution p on Ω along the quotient map Ω↠Ω/∼ gives a noiseless (strong?) natural latent Λ.
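For a toy discrete case the quotient construction is easy to compute directly. The sketch below (example distribution mine, purely illustrative) groups the outcomes of two correlated bits by their vectors of conditionals:

```python
from collections import defaultdict
from fractions import Fraction

# Toy joint distribution over (X1, X2): two bits that agree with probability 3/4.
# Fractions keep the conditionals exact, so equal signatures compare equal.
joint = {
    (0, 0): Fraction(3, 8), (1, 1): Fraction(3, 8),
    (0, 1): Fraction(1, 8), (1, 0): Fraction(1, 8),
}

def conditional_signature(x, joint):
    """For an outcome x = (x1,...,xn): the tuple of conditionals p(Xi = xi | x_{-i})."""
    sig = []
    for i in range(len(x)):
        rest = x[:i] + x[i + 1:]
        marginal = sum(p for y, p in joint.items() if y[:i] + y[i + 1:] == rest)
        sig.append(joint[x] / marginal)
    return tuple(sig)

def causal_states(joint):
    """Group outcomes with identical conditional signatures: the classes of ∼."""
    classes = defaultdict(list)
    for x in joint:
        classes[conditional_signature(x, joint)].append(x)
    return sorted(sorted(c) for c in classes.values())

states = causal_states(joint)  # [[(0, 0), (1, 1)], [(0, 1), (1, 0)]]
```

Here the quotient variable just records whether the two bits agree. (As noted further down in these notes, the quotient construction need not satisfy all the natural-latent conditions in general.)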
Remark. Note that Wentworth’s natural latents are generalizations of Crutchfield causal states (and epsilon machines).
Minimality and maximality
Let X1,...,Xn be random variables as before and let Λ be a weak latent.
Minimality Theorem for Natural Latents. Given any other variable N such that the Xi are independent conditional on N we have the following DAG
Λ→N→{Xi}i
i.e. p(X1,...,Xn|N)=p(X1,...,Xn|N,Λ)
[OR IS IT for all i ?]
Maximality Theorem for Natural Latents. Given any other variable M such that the reconstructability property holds with regard to Xi we have
M→Λ→{Xi}i
Some other things:
Weak latents are defined up to isomorphism?
noiseless weak (strong?) latents are unique
The causal states as defined above will give the noiseless weak latents
Not all systems are easily abstractable. Consider a multivariate Gaussian distribution whose covariance matrix doesn’t have a low-rank part. The covariance matrix is symmetric positive-definite; after diagonalization the eigenvalues should be roughly equal.
Consider a sequence of buckets Bi,i=1,...,n and you put messages mj in two buckets mj→B2j,B2j+1. In this case the minimal latent has to remember all the messages—so the latent is large. On the other hand, we can quotient B2i,B2i+1↦B′i: all variables become independent.
EDIT: Sam Eisenstat pointed out to me that this doesn’t work. The construction actually won’t satisfy the ‘stability criterion’.
The noiseless natural latent might not always exist. Indeed, consider a generic distribution p on 2^N. In this case the causal state construction will just yield a copy of 2^N, and the reconstructability/stability criterion is not satisfied.
Inspired by this Shalizi paper defining local causal states. The idea is so simple and elegant I’m surprised I had never seen it before.
Basically, starting with a factored probability distribution Xt=(X1(t),...,Xkt(t)) over a dynamical DAG Dt, we can use Crutchfield’s causal state construction locally to construct a derived causal model X′t factored over the same dynamical DAG. X′t is defined by considering the past and future lightcones L−(Xt),L+(Xt) of Xt: all those points/variables Yt2 which influence Xt, respectively are influenced by Xt (in a causal, interventional sense). Now define the equivalence relation at∼bt on realizations of L−(Xt) (which includes Xt by definition)[1] whenever the conditional probability distributions p(L+(Xt)|at)=p(L+(Xt)|bt) on the future lightcones are equal.
These factored probability distributions over dynamical DAGs are called ‘fields’ by physicists. Given any field F(x,t) we define a derived local causal state field ϵ(F(x,t)) in the above way. Woah!
Some thoughts and questions
this depends on the choice of causal factorizations. Sometimes these causal factorizations are given but in full generality one probably has to consider all factorizations simultaneously, each giving a different local state presentation!
What is the Factored sets angle here?
In particular, given a stochastic process ...→X−1→X0→X1→... the reversed process XBackToTheFuturet:=X−t can give a wildly different local causal field, since minimal predictors and retrodictors can be different. This can be exhibited by the random insertion process; see this paper.
Let a stochastic process Xt be given and define the (forward) causal states St as usual. The key ‘stochastic complexity’ quantity is defined as the mutual information I(St;X≤0) of the causal states and the past. We may generalize this definition, replacing the past with the local past lightcone to give a local stochastic complexity.
Under the assumption that the stochastic process is ergodic, the causal states form an irreducible Hidden Markov Model and the stochastic complexity can be calculated as the entropy of the stationary distribution.
!!Importantly, the stochastic complexity is different from the ‘excess entropy’, i.e. the mutual information of the past (lightcone) and the future (lightcone).
This gives potentially a lot of very meaningful quantities to compute. These are I think related to correlation functions but contain more information in general.
Note that the local causal state construction is always possible—it works in full generality. Really quite incredible!
How are local causal fields related to Wentworth’s latent natural abstractions?
Shalizi conjectures that the local causal states form a Markov field, which by Hammersley-Clifford would mean we could describe the system as a Gibbs distribution! This would prove an equivalence between the Gibbs/MaxEnt/Pitman-Koopman-Darmois theory and the conditional independence story of Natural Abstraction, roughly similar to early approaches of John.
I am not sure what the status of the conjecture is at this moment. It seems rather remarkable that such a basic fact, if true, cannot be proven. I haven’t thought about it much but perhaps it is false in a subtle way.
A Markov field factorizes over an undirected graph which seems strictly less general than a directed graph. I’m confused about this.
Given a symmetry group G acting on the original causal model /field F(x,t)=(p,D) the action will descend to an action G↷ϵ(F)(x,t) on the derived local causal state field.
A stationary process X(t) is exactly one with a translation action by Z. This underlies the original epsilon machine construction of Crutchfield, namely the fact that the causal states don’t just form a set (+probability distribution) but are endowed with a monoid structure → Hidden Markov Model.
In other words, by convention the Past includes the Present X0 while the Future excludes the Present.
That condition doesn’t work, but here’s a few alternatives which do (you can pick any one of them):
Λ=(x↦P[X=x|Λ]) - most conceptually confusing at first, but most powerful/useful once you’re used to it; it’s using the trick from Minimal Map.
Require that Λ be a deterministic function of X, not just any latent variable.
H(Λ)=I(X,Λ)
(The latter two are always equivalent for any two variables X,Λ and are somewhat stronger than we need here, but they’re both equivalent to the first once we’ve already asserted the other natural latent conditions.)
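The equivalence of the latter two conditions is easy to see numerically. A small sketch (toy distributions mine): H(Λ)=I(X,Λ) holds exactly when Λ is a deterministic function of X, and fails once Λ carries independent noise.

```python
import math

def H(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def entropies(joint_xl):
    """Given a joint {(x, lam): prob}, return (H(Λ), I(X;Λ))."""
    pl, px = {}, {}
    for (x, l), p in joint_xl.items():
        pl[l] = pl.get(l, 0) + p
        px[x] = px.get(x, 0) + p
    # I(X;Λ) = H(Λ) - H(Λ|X), with H(Λ|X) = Σ_x p(x)·H(Λ | X=x)
    h_l_given_x = sum(
        pxv * H({l: p / pxv for (x, l), p in joint_xl.items() if x == x0})
        for x0, pxv in px.items())
    return H(pl), H(pl) - h_l_given_x

# Λ a deterministic function of X (parity of a uniform x in {0,1,2,3}):
det = {(x, x % 2): 0.25 for x in range(4)}
# Λ = a uniform bit X, flipped with probability 0.1 (independent noise):
noisy = {(0, 0): 0.45, (0, 1): 0.05, (1, 1): 0.45, (1, 0): 0.05}

H_det, I_det = entropies(det)        # equal: Λ has no noise beyond X
H_noisy, I_noisy = entropies(noisy)  # H(Λ) > I(X;Λ): Λ carries its own noise
```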
Reasons to think Lobian Cooperation is important
Usually modal Lobian cooperation is dismissed as not relevant for real situations, but it is plausible that Lobian cooperation extends far more broadly than what is currently proved.
It is plausible that much of cooperation we see in the real world is actually approximate Lobian cooperation rather than purely given by traditional game-theoretic incentives.
Lobian cooperation is far stronger in cases where the players resemble each other and/or have access to one another’s blueprint. This is arguably only very approximately the case between different humans, but it is much closer to being the case when we consider different versions of the same human through time, as well as subminds of that human.
In the future we may very well see probabilistically checkable proof protocols, generalized notions of proof like heuristic arguments, magical cryptographic trust protocols and formal computer-checked contracts widely deployed.
All these considerations could potentially make it possible for future AI societies to exhibit vastly more cooperative behaviour.
Artificial minds also have several features that make them intrinsically likely to engage in Lobian cooperation, e.g. their easy copyability (which might lead to giant ‘spur’ clans). Artificial minds can be copied, their source code and weights may be shared, and the widespread use of simulations may become feasible. All of these point towards the importance of Lobian cooperation and Open-Source Game theory more generally.
[With benefits also come drawbacks like the increased capacity for surveillance and torture. Hopefully, future societies may develop sophisticated norms and technology to avoid these outcomes. ]
The Galaxy brain take is the trans-multi-Galactic brain of Acausal Society.
I definitely agree that cooperation can be way better in the future, and Lobian cooperation, especially with Payor’s Lemma, might well be enough to get coordination across the entire solar system.
That stated, it’s much more tricky to expand this strategy to galactic scales, assuming our physical models aren’t wrong, because light speed starts to become a very taut constraint under a galaxy wide brain, and acausal strategies will require a lot of compute to simulate entire civilizations. Even worse, they depend on some common structure of values, and I suspect it’s impossible to do in the fully general case.
Does internal bargaining and geometric rationality explain ADHD & OCD?
Self-Rituals as Schelling loci for Self-control and OCD
Why do people engage in non-social rituals (‘self-rituals’)? These are very common and can even become pathological (OCD).
High self-control people seem to more often have OCD-like symptoms.
One way to think about self-control is as a form of internal bargaining between internal subagents. From this perspective, self-control and time-discounting can be seen as resources. In the absence of self-control the superagent
Do humans engage in self-rituals to create Schelling points for internally bargaining agents?
Exploration, self-control, internal bargaining, ADHD
Why are exploration behaviour and lack of self-control linked? For example, people with ADHD often lack self-control and conscientiousness. At the same time, they explore more. These behaviours are often linked, but it’s not clear why.
It’s perfectly possible to explore deliberately. Yet the best explorers seem to be exactly those lacking self-control. How could that be?
There is a boring social reason: doing a lot of exploration often means shirking social obligations. Self-deceiving about your true desires might be the only way to avoid social repercussions. This probably explains a lot of ADHD—but not necessarily all.
If self-control = internal bargaining, then it would follow that a lack of self-control is a failure of internal bargaining. Note that by subagents I mean subagents both in space *and* in time. From this perspective an agent through time could alternatively be seen as a series of subagents of a 4d-worm superagent.
This explains many of the salient features of ADHD:
[Claude, list salient features and explain how these are explained by the above]
Impulsivity: A failure of internal subagents to reach an agreement intertemporally, leading to actions driven by immediate desires.
Difficulty with task initiation and completion: The inability of internal subagents to negotiate and commit to a course of action.
Distractibility: A failure to prioritize the allocation of self-control resources to the task at hand.
Hyperfocus: A temporary alignment of internal subagents’ interests, leading to intense focus on engaging activities.
Disorganization: A failure to establish and adhere to a coherent set of priorities across different subagents.
Emotional dysregulation: A failure of internal bargaining to modulate emotional reactions.
Arithmetic vs Geometric Exploration. Entropic drift towards geometric rationality
[this section obviously owes a large intellectual debt to Garrabrant’s geometric rationality sequence]
Sometimes people like to say that geometric exploration = Kelly betting = maximizing the geometric mean is ‘better’ than maximizing the arithmetic mean.
The problem is that just maximizing expected value, rather than geometric expected value, does in fact maximize the total expected value, even for repeated games (duh!). So it’s not really clear in what naive sense geometric maximization is better.
Instead, Garrabrant suggests that it is better to think of geometric maximizing as a part of a broader framework of geometric rationality wherein Kelly betting, Nash bargaining, geometric expectation are all forms of cooperation between various kinds of subagents.
If self-control is a form of successful internal bargaining then it is best to think of it as a resource. It is better to maximize the arithmetic mean, but doing so means that subagents need to cooperate & trust each other much more: arithmetic maximization means that the variance of outcomes between future copies of the agent is much larger than under geometric maximization. That means that subagents should be more willing to take a loss in one world to make up for it in another.
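A quick simulation makes the variance point concrete (game and parameters mine, purely illustrative): on a 60/40 double-or-nothing coin, the arithmetic-EV maximizer bets everything each round, while the Kelly bettor bets the fraction 2p−1 = 0.2.

```python
import random, statistics

def simulate(f, rounds=100, trials=2000, p=0.6, seed=0):
    """Final wealths from betting fraction f of wealth each round on a
    p-biased double-or-nothing coin, starting from wealth 1."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        w = 1.0
        for _ in range(rounds):
            w *= (1 + f) if rng.random() < p else (1 - f)
        finals.append(w)
    return finals

kelly = simulate(f=0.2)   # geometric maximizer: Kelly fraction 2p - 1
allin = simulate(f=1.0)   # arithmetic-EV maximizer: bet everything

# The all-in strategy has the larger expected value (1.2^100, carried entirely
# by the ~0.6^100-probability all-wins worlds), but in this sample every such
# copy of the agent is ruined, while the typical Kelly copy grows.
median_kelly = statistics.median(kelly)
median_allin = statistics.median(allin)
```

The arithmetic maximizer’s subagents must be willing to accept ruin in almost every world in exchange for astronomical wealth in a vanishingly rare one; the Kelly bettor’s copies share the gains far more evenly.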
It is hard to be coherent
It is hard to be a coherent agent. Coherence and self-control are resources. Note that having low time-discounting is also a form of coherence: it means the subagents of the 4d-worm superagent are cooperating.
Having subagents that are more similar to one another means it will be easier for them to cooperate. Conversely, the less they are alike the harder it is to cooperate and to be coherent.
Over time, this means there is a selective force against an arithmetic mean maximizing superagent.
Moreover, if the environment is highly varied (for instance because the agent selects environments to be more variable, because it is exploring) the outcomes for subagents are more varied, so there is more entropic pressure on the superagent.
This means in particular that we would expect superagents that explore more (ADHDers) to be less coherent over time (higher time-discounting) and space (more internal conflict etc.).
I feel like the whole “subagent” framework suffers from homunculus problem: we fail to explain behavior using the abstraction of coherent agent, so we move to the abstraction of multiple coherent agents, and while it can be useful, I don’t think it displays actual mechanistic truth about minds.
When I plan something and then fail to execute the plan, it’s mostly not a “failure to bargain”. It’s just that when I plan something I usually have the good consequences of the plan in my imagination, and these consequences make me excited; then I start executing the plan and get hit by multiple unpleasant details of reality. Coherent structure emerges from multiple not-really-agentic pieces.
You are taking subagents too literally here. If you prefer take another word like shard, fragment, component, context-dependent action impulse generator etc
When I read the word “bargaining” I assume that we are talking about entities that have preferences, an action set, beliefs about the relations between actions and preferences, and that exchange information (modulo acausal interaction) with other entities of the same composition. Like, Kelly betting is good because it is equivalent to Nash bargaining between versions of yourself inside different outcomes, and this is good because we assume that you in different outcomes are, actually, agents with all the attributes of agentic systems. Saying “systems consist of parts, these parts interact and sometimes the result is a horrific incoherent mess” is true, but doesn’t convey much useful information.
(conversation with Scott Garrabrant)
Destructive Criticism
Sometimes you can say something isn’t quite right but you can’t provide an alternative.
rejecting the null hypothesis
give a (partial) countermodel that shows that certain proof methods can’t prove $A$ without proving $\neg A$.
Looking at Scott Garrabrant’s game of life board—it’s not white noise but I can’t say why
Difference between ‘generation of ideas’ and ‘filtration of ideas’ - i.e. babble and prune.
ScottG: Bayesian learning assumes we are in a babble-rich environment and only does pruning.
ScottG: Bayesism doesn’t say ‘this thing is wrong’ it says ‘this other thing is better’.
Alexander: Nonrealizability is the Bayesian way of saying: not enough babble?
Scott G: mwah, that suggests the thing is ‘generate more babble’ when the real solution is ‘factor out your model in pieces and see where the culprit is’.
ergo, locality is a virtue
Alexander: locality just means conditional independence? Or does it mean something more?
ScottG: loss of locality means there is existential risk
Alexander: reminds me of Vanessa’s story:
trapped environments aren’t in general learnable. This is a problem since real life is trapped. A single human life is filled to the brim with irreversible transitions & decisions. Humanity as a whole is much more robust because of locality: it is effectively playing the human life game lots of times in parallel. The knowledge gained is then redistributed through culture and genes. This breaks down when locality breaks down → existential risk.
Reasonable interpretations of Recursive Self Improvement are either trivial, tautological or false?
(Trivial) AIs will do RSI by using more hardware—trivial form of RSI
(Tautological) Humans engage in a form of (R)SI when they engage in meta-cognition, e.g. therapy is plausibly a form of meta-cognition. Meta-cognition is plausibly one of the remaining hallmarks of true general intelligence. See Vanessa Kosoy’s “Meta-Cognitive Agents”.
In this view, AGIs will naturally engage in meta-cognition because they’re generally intelligent. They may (or may not) also engage in significantly more meta-cognition than humans, but this isn’t qualitatively different from what the human cortical algorithm already engages in.
(False) It’s plausible that in many domains learning algorithms are already near a physical optimum. Given a fixed Bayesian prior and a dataset, the Bayesian posterior is in a precise formal sense the ideal update. In practice Bayesian updating is intractable, so we typically sample from the posterior using something like SGD. It is plausible that something like SGD is already close to the optimum for a given amount of compute.
SGD finds algorithms. Before the DL revolution, science studied such algorithms. Now, the algorithms become inference without as much as a second glance. With sufficient abundance of general intelligence brought about by AGI, interpretability might get a lot out of studying the circuits SGD discovers. Once understood, the algorithms could be put to more efficient use, instead of remaining implicit in neural nets and used for thinking together with all the noise that remains from the search.
I think most interpretations of RSI aren’t useful.
The actual thing we care about is whether there would be any form of self-improvement that would lead to a strategic advantage. Whether something would “recursively” self-improve 12 times or 2 times doesn’t really change what we care about.
With respect to your 3 points.
1) could happen by using more hardware, but better optimization of current hardware / better architecture is the actually scary part (which could lead to the discovery of “new physics” that could enable an escape even if the sandbox was good enough for the model before a few iterations of the RSI).
2) I don’t think what you’re talking about in terms of meta-cognition is relevant to the main problem. Being able to look at your own hardware or source code is though.
3) Cf. what I said at the beginning. The actual “limit” is I believe much higher than the strategic advantage threshold.
:insightful reaction:
I give this view ~20%: There’s so much more info in some datapoints (curvature, third derivative of the function, momentum, see also Empirical Bayes-like SGD, the entire past trajectory through the space) that seems so available and exploitable!
What about specialized algorithms for problems (e.g. planning algorithms)?
What do you mean exactly? There are definitely domains in which humans have not yet come close to optimal algorithms.
What about automated architecture search?
Architectures mostly don’t seem to matter, see 3.
When they do (like in Vanessa’s meta-MDPs) I think it’s plausible that automated architecture search is simply an instantiation of the algorithm for general intelligence (see 2.)
I think the AI will improve (itself) via better hardware and algorithms, and it will be a slog. The AI will frequently need to do narrow tasks where the general algorithm is very inefficient.
As I state in the OP I don’t feel these examples are nontrivial examples of RSI.
Trivial but important
Aumann agreement can fail for purely epistemic reasons, because real-world minds do not do Bayesian updating. Bayesian updating is intractable, so realistic minds sample from the posterior instead. This is how e.g. gradient descent works and also how human minds work.
In this situation two minds can end up in two different basins with similar loss on the data, because of computational limitations. These minds can have genuinely different expectations for generalization.
(Of course this does not contradict the statement of the theorem which is correct.)
Imprecise Information theory
Would like a notion of entropy for credal sets. Diffractor suggests the following:
let C⊂Credal(Ω) be a credal set.
Then the entropy of C is defined as
HDiffractor(C)=supp∈CH(p)
where H(p) denotes the usual Shannon entropy.
I don’t like this since it doesn’t satisfy the natural desiderata below.
Instead, I suggest the following. Let meC∈C denote the (absolute) maximum entropy distribution, i.e. H(meC)=maxp∈CH(p), and let H(C)=Hnew(C)=H(meC).
Desideratum 1: H({p})=H(p)
Desideratum 2: Let A⊂Ω and consider CA:=ConvexHull({δa|a∈A}).
Then H(A):=H(CA)=log|A|.
Remark. Check that these desiderata are compatible where they overlap.
It’s easy to check that the above ‘maxEnt’ suggestion satisfies these desiderata.
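A quick numerical check of Desideratum 2 (code and example mine): for CA, the convex hull of the delta distributions on A, the max-entropy element is the uniform distribution on A, so H(CA)=log|A| (in bits below).

```python
import math, random

def H(p):
    """Shannon entropy (bits) of a distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def random_in_hull(k, rng):
    """A random convex combination of the delta distributions on a k-element A,
    i.e. a random element of C_A (= all distributions supported on A)."""
    ws = [rng.random() for _ in range(k)]
    s = sum(ws)
    return [w / s for w in ws]

k = 5
rng = random.Random(0)
uniform = [1 / k] * k  # the maxEnt element of C_A
samples = [random_in_hull(k, rng) for _ in range(1000)]
# No sampled element of the hull beats the uniform distribution's entropy log|A|:
assert all(H(p) <= H(uniform) + 1e-9 for p in samples)
```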
Entropy operationally
Entropy is really about stochastic processes more than distributions. Given a distribution p there is an associated stochastic process (Xn)n∈N where each Xi is sampled i.i.d. from p. The entropy is really the expected code length of encoding samples from this process.
In the credal set case there are two processes that can naturally be associated with a credal set C: do you pick a p∈C at the start and then sample according to p (this is what Diffractor’s entropy refers to), or do you allow the environment to ‘choose’ a different q∈C each round?
In the latter case, you need to pick an encoding that does least badly.
[give more details. check that this makes sense!]
Properties of credal maxEnt entropy
We may now investigate properties of the entropy measure.
H(A∨B)=H(A)+H(B)−H(A∧B)
H(Ac)=log|Ac|=log(|Ω|−|A|)
remark. This is different from the following measure!
"H(A|Ω)"=log(|Ω|/|A|)
Remark. If we think of H(A)=H(P(x∈Ω|A)) as denoting the number of bits we receive when we know that A holds and we sample from Ω uniformly, then H(A|Ω)=H(x∈A|x∈Ω) denotes the number of bits we receive when we find out that x∈A when we knew x∈Ω.
What about
H(A∧B)?
H(A∧B)=H(P(x∈A∧B|Ω))=...?
we want to do a presumption of independence: a Möbius / Euler characteristic expansion
Roko’s basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.
Why Roko’s basilisk probably doesn’t work for simulation fidelity reasons:
Roko’s basilisk threatens to simulate and torture you in the future if you don’t comply. Simulation cycles cost resources. Instead of following through on torturing our would-be cthulhu worshipper they could spend those resources on something else.
But wait can’t it use acausal magic to precommit to follow through? No.
Acausal arguments only work in situations where agents can simulate each others with high fidelity. Roko’s basilisk can simulate the human but not the other way around! The human’s simulation of Roko’s basilisk is very low fidelity—in particular Roko’s Basilisk is never confused whether or not it is being simulated by a human—it knows for a fact that the human is not able to simulate it.
I thank Jan P. for coming up with this argument.
If the agents follow simple principles, it’s simple to simulate those principles with high fidelity, without simulating each other in all detail. The obvious guide to the principles that enable acausal coordination is common knowledge of each other, which could be turned into a shared agent that adjudicates a bargain on their behalf.
I have always taken Roko’s Basilisk to be the threat that the future intelligence will torture you, not a simulation, for not having devoted yourself to creating it.
How do you know you are not in a low fidelity simulation right now? What could you compare it against?
All concepts can be learnt. All things worth knowing may be grasped. Eventually.
All can be understood—given enough time and effort.
For a Turing-complete organism, there is no qualitative gap between knowledge and ignorance.
No qualitative gap but one. The true qualitative difference: quantity.
Often we simply miss a piece of data. The gap is too large—we jump and never reach the other side. A friendly hominid who has trodden the path before can share their journey. Once we know the road, there is no mystery. Only effort and time. Some hominids choose not to share their journey. We keep a special name for these singular hominids: genius.
Well, that’s exactly the problem.
Abnormalised sampling?
Probability theory talks about sampling for probability distributions, i.e. normalized measures. However, non-normalized measures abound: weighted automata, infra-stuff, uniform priors on noncompact spaces, wealth in logical-inductor esque math, quantum stuff?? etc.
Most of probability theory’s constructions go through for arbitrary measures; the normalization assumption isn’t needed. Except, crucially, sampling.
What does it even mean to sample from a non-normalized measure? What is
abnormal sampling? I don’t know.
Infra-sampling has an interpretation of sampling from a distribution made by a demonic choice. I don’t have good interpretations for other unnormalized measures.
Concrete question: is there a law of large numbers for unnormalized measures?
Let f be a measurable function and m a measure. Then the expectation value is defined Em(f)=∫fdm. A law of large numbers for unnormalized measures would have to say something about repeated abnormal sampling.
I have no real ideas. Curious to learn more.
SLT and phase transitions
The morphogenetic SLT story says that during training the Bayesian posterior concentrates around a series of subspaces W0(1)⇝...⇝W0(n) with RLCTs λ1<...<λn and losses L1=L(w1),...,Ln=L(wn), wi∈W0(i). As the size N of the data sample is scaled, the Bayesian posterior makes transitions W0(i)⇝W0(i+1), trading off higher complexity (higher λi+1>λi) for better accuracy (lower loss Li+1<Li).
This is the radical new framework of SLT: phase transitions happen in pure Bayesian learning as the data size is scaled.
N.B. The phase transition story actually needs a version of SLT for the nonrealizable case despite most sources focusing solely on the realizable case! The nonrealizable case makes everything more complicated and the formulas from the realizable case have to be altered.
We think of the local RLCT λw at a parameter w as a measure of its inherent complexity. Side-stepping the subtleties with this point of view let us take a look at Watanabe’s formula for the Bayesian generalization error:
GN(W)=LN(w0)+λ/N+o(1/N)≈L(w0)+λ/N+o(1/N)
where W is a neighborhood of the local minimum w0 and λ is its local RLCT. In our case W=W0(i).
--EH I wanted to say something here but don’t think it makes sense on closer inspection
Alignment by Simulation?
I’ve heard this alignment plan that is a variation of ‘simulate top alignment researchers’ with an LLM. Usually the poor alignment researcher in question is Paul.
This strikes me as deeply unserious and I am confused why it is having so much traction.
That AI-assisted alignment is coming (indeed, is already here!) is undeniable. But even somewhat accurately simulating a human from text data is a crazy sci-fi ability, probably not even physically possible. It seems to ascribe nearly magical abilities to LLMs.
Predicting a partially observable process is fundamentally hard. There are simple cases where one can give a generative (partially observable) model with just two states (the unifilar source) that needs an infinity of states to predict optimally. In more generic cases the expectation is that this is far worse.
Errors compound over time (or continuation length). Even a tiny amount of noise would throw off the simulation.
Okay maybe people just mean that GPT-N will kinda know what Paul approximately would be looking at. I think this is plausible in very broad brush strokes but it seems misleading to call this ‘simulation’.
[Edit 15/05/2024: I currently think that both forward and backward chaining paradigms are missing something important. Instead, there is something like ‘side-chaining’ or ‘wide-chaining’ where you are investigating how things are related forwardly, backwardly and sideways to make use of synergistic information ]
Optimal Forward-chaining versus backward-chaining.
In general, this is going to depend on the domain. In environments for which we have many expert samples and there are many existing techniques backward-chaining is key. (i.e. deploying resources & applying best practices in business & industrial contexts)
In open-ended environments such as those arising in science, especially pre-paradigmatic fields, backward-chaining and explicit plans break down quickly.
Incremental vs Cumulative
Incremental: 90% forward chaining 10% backward chaining from an overall goal.
Cumulative: predominantly forward chaining (~60%) with a moderate amount of backward chaining over medium lengths (30%) and only a small amount of backward chaining (10%) over long lengths.
Thin versus Thick Thinking
Thick: aggregate many noisy sources to make a sequential series of actions in mildly related environments, model-free RL
cardinal sins: failure of prioritization / not throwing away enough information, nerdsnipes, insufficient aggregation, trusting too much in any particular model, indecisiveness, overfitting on noise, ignoring consensus of experts / social reality
default of the ancestral environment
CEOs, generals, doctors, economists, police detectives in the real world, traders
Thin: precise, systematic analysis, preferably in repeated & controlled experiments to obtain cumulative deep & modularized knowledge, model-based RL
cardinal sins: ignoring clues, not going deep enough, aggregating away the signal, prematurely discarding models that don’t naively fit the evidence, not trusting formal models enough / resorting to intuition or rules of thumb, following consensus / building on social instead of physical reality
only possible in highly developed societies with a place for cognitive specialists.
mathematicians, software engineers, engineers, historians, police detectives in fiction, quants
Mixture: codebreakers (spying, cryptography)
[Thanks to Vlad Firoiu for helping me]
An Attempted Derivation of the Lindy Effect
Wikipedia:
Laplace Rule of Succesion
What is the probability that the Sun will rise tomorrow, given that it has risen every day for 5000 years?
Let p denote the probability that the Sun will rise tomorrow. A priori we have no information on the value of p so Laplace posits that by the principle of insufficient reason one should assume a uniform prior probability dp=Uniform((0,1))[1]
Assume now that we have observed n days, on each of which the Sun has risen.
Each event is a Bernoulli random variable Xi, which is 1 (the Sun rises) or 0 (the Sun does not rise). Assume the Xi are conditionally independent given p.
The likelihood of n out of n successes according to the hypothesis p is L(X1=1,...,Xn=1|p)=pn. Now use Bayes’ rule
P(p|X1=1,...,Xn=1) = P(X1=1,...,Xn=1|p)dp / ∫10P(X1=1,...,Xn=1|p)dp = pndp / ∫10pndp = pndp / (1/(n+1)) = (n+1)pndp
to calculate the posterior.
Then the probability of success is P(Xn+1=1|X1=1,...,Xn=1) = ∫10P(Xn+1=1|p)P(p|X1=1,...,Xn=1) = ∫10p⋅(n+1)pndp = (n+1)/(n+2)
This is Laplace’s rule of succession.
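The closed form can be sanity-checked by brute-force integration; a throwaway sketch (midpoint rule, step count arbitrary):

```python
def posterior_predictive(n, steps=100_000):
    """E[p] under the posterior density ∝ p^n on [0,1], computed by
    midpoint-rule integration; should match the closed form (n+1)/(n+2)."""
    dx = 1.0 / steps
    mids = [(i + 0.5) * dx for i in range(steps)]
    num = sum(p ** (n + 1) for p in mids) * dx  # ∫ p·p^n dp
    den = sum(p ** n for p in mids) * dx        # ∫ p^n dp (normalization)
    return num / den

approx = posterior_predictive(n=10)  # close to 11/12
```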
We now adapt the above method to derive Lindy’s Law.
The probability that the Sun rises on days n+1 through n+s and does not rise on day n+s+1, given that it rose on the first n days, is
P(X_{n+1:n+s}=1, X_{n+s+1}=0 | X_{1:n}=1) = ∫_0^1 p^s (1−p) · (n+1) p^n dp = (n+1) (1/(n+s+1) − 1/(n+s+2)) = (n+1) / ((n+s+1)(n+s+2))
The expected additional lifetime is then
E(Sun rises s more days) = Σ_{s=1}^∞ s · (n+1) / ((n+s+1)(n+s+2))
which almost converges :o…
[What’s the mistake here?]
For simplicity I will exclude the cases p = 0, 1; see the Wikipedia page for the case where they are not excluded.
I haven’t checked the derivation in detail, but the final result is correct. If you have a random family of geometric distributions, and the density around zero of the decay rates doesn’t go to zero, then the expected lifetime is infinite. All of the quantiles (e.g. median or 99%-ile) are still finite though, and do depend upon n in a reasonable way.
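The divergence and the finite quantiles can both be made concrete numerically. This sketch (n = 100 is an arbitrary choice) shows the partial sums of the expected-lifetime series growing without bound, while the survival function P(rises at least s more days | rose n days) = (n+1)/(n+s+1), which follows from the same posterior, gives a finite median on the order of n, which is exactly the Lindy effect.

```python
def partial_expectation(n: int, smax: int) -> float:
    """Partial sum of E = sum_s s*(n+1)/((n+s+1)(n+s+2))."""
    return sum(s * (n + 1) / ((n + s + 1) * (n + s + 2)) for s in range(1, smax + 1))

def survival(n: int, s: int) -> float:
    """P(the Sun rises at least s more days | it rose n days) = (n+1)/(n+s+1)."""
    return (n + 1) / (n + s + 1)

n = 100
sums = [partial_expectation(n, 10 ** k) for k in (3, 4, 5, 6)]
print(sums)  # grows roughly like (n+1)*log(smax): the mean lifetime is infinite

median = next(s for s in range(1, 10 * n) if survival(n, s) < 0.5)
print(median)  # on the order of n: the median extra lifetime scales with the observed age
```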
Generalized Jeffrey Prior for singular models?
For singular models the Jeffrey Prior is not well-behaved for the simple fact that it will be zero at minima of the loss function.
Does this mean the Jeffrey prior is only of interest in regular models? I beg to differ.
Usually the Jeffrey prior is derived as parameterization invariant prior. There is another way of thinking about the Jeffrey prior as arising from an ‘indistinguishability prior’.
The argument is delightfully simple: given two weights w1, w2 ∈ W, if they encode the same distribution p(x|w1) = p(x|w2), our prior weights on them should intuitively be the same: ϕ(w1) = ϕ(w2). Two weights encoding the same distribution means the model exhibits non-identifiability, making it non-regular (hence singular). However, regular models exhibit ‘approximate non-identifiability’.
For a given dataset D_N of size N from the true distribution q and errors ϵ1, ϵ2, we can have a whole set of weights W_{N,ϵ} ⊂ W where the probability that p(x|w1) does more than ϵ1 better on the loss on D_N than p(x|w2) is less than ϵ2.
In other words, these are the sets of weights that are probably approximately indistinguishable. Intuitively, we should assign an (approximately) uniform prior on these approximately indistinguishable regions. This gives strong constraints on the possible prior.
The downside of this is that it requires us to know the true distribution q. Instead of seeing if w1,w2 are approximately indistinguishable when sampling from q we can ask if w2 is approximately indistinguishable from w1 when sampling from w2. For regular models this also leads to the Jeffrey prior, see this paper.
However, the Jeffrey prior is just an approximation of this prior. We could also straightforwardly see what the exact prior is to obtain something that might work for singular models.
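A toy numerical illustration of the indistinguishability idea in a regular (not singular) model, under my reading of the argument: for a Bernoulli model, the set of parameters q with KL(p0 ‖ q) ≤ ε has width approximately 2√(2ε/I(p0)), so a prior that is uniform on such indistinguishable cells has density proportional to √I(p0), which is the Jeffrey prior.

```python
import math

def kl_bernoulli(p: float, q: float) -> float:
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def indist_width(p0: float, eps: float, step: float = 1e-6) -> float:
    """Width of the interval of q with KL(p0 || q) <= eps, by brute-force scan."""
    lo = p0
    while lo - step > 0 and kl_bernoulli(p0, lo - step) <= eps:
        lo -= step
    hi = p0
    while hi + step < 1 and kl_bernoulli(p0, hi + step) <= eps:
        hi += step
    return hi - lo

eps = 1e-4
results = {}
for p0 in (0.1, 0.3, 0.5):
    fisher = 1.0 / (p0 * (1 - p0))               # Fisher information of Bernoulli(p0)
    predicted = 2 * math.sqrt(2 * eps / fisher)  # from KL ~ (1/2) I(p0) (q - p0)^2
    results[p0] = (indist_width(p0, eps), predicted)
    print(p0, results[p0])
```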
EDIT: Another approach to generalizing the Jeffrey prior might be by following an MDL optimal coding argument—see this paper.
You might reconstruct your sacred Jeffreys prior with a more refined notion of model identity, which incorporates derivatives (jets on the geometric/statistical side and more of the algorithm behind the model on the logical side).
Is this the jet prior I’ve been hearing about?
I argued above that given two weights w1,w2 such that they have (approximately) the same conditional distribution p(x|y,w1)∼=p(x|y,w2) the ‘natural’ or ‘canonical’ prior should assign them equal prior weights ϕ(w1)=ϕ(w2). A more sophisticated version of this idea is used to argue for the Jeffrey prior as a canonical prior.
Some further thoughts:
imposing this uniformity condition would actually contradict some version of Occam’s razor. Indeed, w1 could be algorithmically much more complex (i.e. have a much higher description length) than w2, yet they might still have similar or even identical predictions.
The difference between same-on-the-nose and merely similar might be very material. Two conditional probability distributions might be quite similar [a related issue here is that the KL divergence is asymmetric, so similarity is a somewhat ill-defined concept], yet one might intrinsically require far more computational resources.
A very simple example is the uniform distribution puniform(x)=1N and another distribution p′(x) that is a small perturbation of the uniform distribution but whose exact probabilities p′(x) have decimal expansions that have very large description length (this can be produced by adding long random strings to the binary expansion).
[caution: CompMech propaganda incoming] More realistic examples do occur i.e. in finding optimal predictors of dynamical systems at the edge of chaos. See the section on ‘intrinsic computation of the period-doubling cascade’, p.27-28 of calculi of emergence for a classical example.
Asking for the prior ϕ to be uniform on weights w_i that have equal/similar conditional distributions p(x|y,w_i) seems very natural, but it doesn’t specify how the prior should relate weights with different conditional distributions. Let’s say we have two weights w1, w2 with very different conditional probability distributions. Let W_i = {w ∈ W | p(x|y,w) ≅ p(x|y,w_i)}. How should we compare the prior weights ϕ(W1), ϕ(W2)?
Suppose I double the number of w ∈ W1, i.e. W1 ↦ W1′, where we enlarge W ↦ W′ such that W1′ has double the volume of W1 and everything else is fixed. Should we have ϕ(W1) = ϕ(W1′), or should the prior weight ϕ(W1′) be larger? In the former case the prior weight ϕ(w) should be reweighted depending on how many w′ there are with similar conditional probability distributions; in the latter it isn’t. (Note that this is related to, but distinct from, the parameterization-invariance condition of the Jeffrey prior.)
I can see arguments for both:
We could want to impose the condition that quotienting out by the relation w1 ∼ w2 when p(x|y,w1) = p(x|y,w2) should not affect the model (and thereby the prior) at all.
On the other hand, one could argue that the Solomonoff prior would not impose ϕ(W1)=ϕ(W′1) - if one finds more programs that yield p(x|y,w1) maybe you should put higher a priori credence on p(x|y,w1).
The RLCT λ(w′) of the new elements w′ ∈ W′1 − W1 could behave wildly differently from that of w ∈ W1. This suggests that the above analysis is not at the right conceptual level and one needs a more refined notion of model identity.
Your comment about a more refined type of model identity using jets sounds intriguing. Here is a related thought:
In the earlier discussion with Joar Skalse there was a lot of debate around whether a prior simplicity (description length, Kolmogorov complexity according to Joar) is actually captured by the RLCT. It is possible to create examples where the RLCT and the algorithmic complexity diverge.
I haven’t had the chance to think about this very deeply but my superficial impression is that the RLCT λ(W_a) is best thought of as measuring a relative model complexity between W_a and W rather than an absolute measure of the complexity of W or W_a.
(more thoughts about relations with MDL. too scattered, I’m going to post now)
I think there’s no such thing as parameters, just processes that produce better and better approximations to parameters, and the only “real” measures of complexity have to do with the invariants that determine the costs of those processes, which in statistical learning theory are primarily geometric (somewhat tautologically, since the process of approximation is essentially a process of probing the geometry of the governing potential near the parameter).
From that point of view trying to conflate parameters w1,w2 such that p(x|w1)≈p(x|w2) is naive, because w1,w2 aren’t real, only processes that produce better approximations to them are real, and so the ∂∂w derivatives of p(x|w1),p(x|w2) which control such processes are deeply important, and those could be quite different despite p(x|w1)≈p(x|w2) being quite similar.
So I view “local geometry matters” and “the real thing are processes approximating parameters, not parameters” as basically synonymous.
“The links between logic and games go back a long way. If one thinks of a debate as a kind of game, then Aristotle already made the connection; his writings about syllogism are closely intertwined with his study of the aims and rules of debating. Aristotle’s viewpoint survived into the common medieval name for logic: dialectics. In the mid twentieth century Charles Hamblin revived the link between dialogue and the rules of sound reasoning, soon after Paul Lorenzen had connected dialogue to constructive foundations of logic.” from the Stanford Encyclopedia of Philosophy on Logic and Games
Game Semantics
Usual presentation of the game semantics of logic: we have a particular debate / dialogue game associated to a proposition, played between a Proponent and an Opponent; the Proponent tries to prove the proposition while the Opponent tries to refute it.
A winning strategy of the Proponent corresponds to a proof of the proposition. A winning strategy of the Opponent corresponds to a proof of the negation of the proposition.
It is often assumed that either the Proponent has a winning strategy in A or the Opponent has a winning strategy in A—a version of excluded middle. At this point our intuitionistic alarm bells should be ringing: we can’t just deduce a proof of the negation from the absence of a proof of A. (Absence of evidence is not evidence of absence!)
We could have a situation where neither the Proponent nor the Opponent has a winning strategy! In other words, neither A nor ¬A is derivable.
Countermodels
One way to substantiate this is by giving an explicit countermodel C in which A (respectively ¬A) doesn’t hold.
Game-theoretically, a countermodel C should correspond to some sort of strategy! It is like an “interrogation”/attack strategy that defeats all putative winning strategies. A ‘defeating’ strategy or ‘scorched earth’ strategy if you’d like. A countermodel is an infinite strategy. Some work in this direction has already been done.[1][2]
Dualities in Dialogue and Logic
This gives an additional symmetry in the system, a syntax-semantic duality distinct to the usual negation duality. In terms of proof turnstile we have the quadruple
⊢A meaning A is provable
⊢¬A meaning ¬A is provable
⊣A meaning A is not provable because there is a countermodel C where A doesn’t hold—i.e. classically ¬A is satisfiable.
⊣¬A meaning ¬A is not provable because there is a countermodel C where ¬A doesn’t hold—i.e. classically A is satisfiable.
Obligationes, Positio, Dubitatio
In the medieval Scholastic tradition of logic there were two distinct types of logic games (“Obligationes”): one in which the objective was to defend a proposition against an adversary (“Positio”), the other in which the objective was to defend the doubtfulness of a proposition (“Dubitatio”).[3]
Winning strategies in the former correspond to proofs, while winning (defeating!) strategies in the latter correspond to countermodels.
Destructive Criticism
If we think of argumentation theory / debate, a countermodel strategy is like “destructive criticism”: it defeats attempts to buttress evidence for a claim but presents no viable alternative.
Ludics & completeness—https://arxiv.org/pdf/1011.1625.pdf
Model construction games, Chap 16 of Logic and Games van Benthem
Dubitatio games in medieval scholastic tradition, 4.3 of https://apcz.umk.pl/LLP/article/view/LLP.2012.020/778
Ambiguous Counterfactuals
[Thanks to Matthias Georg Mayer for pointing me towards ambiguous counterfactuals]
Salary is a function of eXperience and Education
S=aE+bX
We have a candidate C with given salary, experience (X=5) and education (E=5).
Their current salary is given by
S=a⋅5+b⋅5
We’d like to consider the counterfactual where they didn’t have the education (E=0). How do we evaluate their salary in this counterfactual?
This is slightly ambiguous—there are two counterfactuals:
E=0,X=5 or E=0,X=10
In the second counterfactual we implicitly had an additional constraint X+E=10, representing the assumption that the candidate would have spent their time either in education or working. Of course, in the real world they could also have frittered their time away playing video games.
One can imagine that there is an additional variable: do they live in a poor country or a rich country. In a poor country if you didn’t go to school you have to work. In a rich country you’d just waste the time playing video games or whatever. Informally, we feel that in given situations one of the counterfactuals is more reasonable than the other.
Coarse-graining and Mixtures of Counterfactuals
We can also think of this from a renormalization / coarse-graining story. Suppose we have a (mix of) causal models coarse-graining a (mix of) causal models. At the bottom we have the (mix of? Ising models!) causal model of physics, i.e. in electromagnetism the Green’s functions give us the intervention responses to adding sources to the field.
A given counterfactual at the macro level can now have many different counterfactuals at the micro level. This means we would actually get a probability distribution of likely counterfactuals at the top level, i.e. in 1⁄3 of the cases the candidate actually worked the 5 years they didn’t go to school; in 2⁄3 of the cases the candidate just wasted it playing video games.
The outcome of the counterfactual S_{E=0} is then not a single number but a distribution
S_{E=0} = 5·b + 5·Y·b
where Y is a random variable with the Bernoulli distribution with bias 1⁄3.
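A small simulation of this mixture-of-counterfactuals reading (the coefficients a, b are made-up values): with probability 1/3 the candidate worked the 5 years instead of studying (X=10), otherwise X stays at 5, and the counterfactual salary is a distribution rather than a number.

```python
import random

# Hypothetical coefficients for S = a*E + b*X (made-up numbers).
a, b = 2.0, 3.0

def counterfactual_salary(p_worked: float = 1 / 3) -> float:
    """Sample the salary under E=0: with probability p_worked the candidate
    worked the 5 extra years (X=10), otherwise X stays at 5."""
    X = 10 if random.random() < p_worked else 5
    return a * 0 + b * X

random.seed(1)
samples = [counterfactual_salary() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to b*(5 + 5/3) = 20.0 with these coefficients
```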
Insights as Islands of Abductive Percolation?
I’ve been fascinated by this beautiful paper by Viteri & DeDeo.
What is a mathematical insight? We feel intuitively that proving a difficult theorem requires discovering one or more key insights. Before we get into what the DeDeo-Viteri paper has to say about (mathematical) insights let me recall some basic observations on the nature of insights:
(see also my previous shortform)
There might be a unique decomposition, akin to prime factorization. Alternatively, there might be many roads to Rome: some theorems can be proved in many different ways.
There are often many ways to phrase an essentially similar insight. These different ways to name things we feel are ‘inessential’. Different labelings should be easily convertible into one another.
By looping over all possible programs all proofs can be eventually found, so the notion of an ‘insight’ has to fundamentally be about feasibility.
Previously, I suggested a required insight is something like a private key to a trapdoor function. Without the insight you are facing an infeasibly large task. With it, you can suddenly easily solve a whole host of new tasks/problems.
Insights may be combined in (arbitrarily?) complex ways.
When are two proofs essentially different?
Some theorems can be proved in many different ways—different in the informal sense, at least. It isn’t immediately clear how to make this more precise.
We could imagine there is a whole ‘homotopy’ theory of proofs, but before we do so we need to understand when two proofs are essentially the same or essentially different.
On one end of the spectrum, proofs can just be syntactically different but we feel they have ‘the same content’.
We can think type-theoretically, and say two proofs are the same when their denotations (normal forms) are the same. This is obviously better than just asking for syntactical equality or apartness. It does mean we’d like some sort of intuitionistic/type-theoretic foundation, since a naive classical foundation makes all normal forms equivalent.
We can also look at what assumptions are made in the proof, i.e. one of the proofs might use the Axiom of Choice while the other does not. An example is the famous nonconstructive proof that an irrational number raised to an irrational power (a^b) can be rational, which turns out to have a constructive proof as well.
If we consider proofs as functorial algorithms we can use mono-Anabelian transport to distinguish them in some case. [LINK!]
We can also think homotopy type-theoretically and ask when two terms of a type are equal in the HoTT sense.
With the exception of the mono-anabelian transport one, all these suggestions ‘don’t go deep enough’; they’re too superficial.
Phase transitions and insights, Hopfield Networks & Ising Models
(See also my shortform on Hopfield Networks/ Ising models as mixtures of causal models)
Modern ML models famously show some sort of phase transitions in understanding. People have been especially fascinated by the phenomenon of ‘grokking’, see e.g. here and here. It suggests we think of insights in terms of phase transitions, critical points etc.
DeDeo & Viteri have an ingenious variation on this idea. They consider a collection of famous theorems and their proofs formalized in a proof assistant.
They then imagine these proofs as a giant directed graph and consider a Boltzmann distribution on it (so we are really dealing with an Ising model / Hopfield network here). We think of this distribution as a measure of ‘trust’, both trust in propositions (nodes) and in inferences (edges).
The proofs of these famous theorems break up into ‘abductive islands’. They have a natural modularity structure into lemmas.
One could hypothesize that insights might correspond somehow to these islands.
Final thoughts
I like the idea that a mathematical insight might be something like an island of deductively & abductively tightly clustered propositions.
Some questions:
How does this fit into the ‘Natural Abstraction’ hypothesis, especially sufficient statistics?
How does this interact with Schmidhuber’s Powerplay?
EDIT: The separation property of Ludics, see e.g. here, points towards the point of view that proofs can be distinguished exactly by suitable (counter)models.
Evidence Manipulation and Legal Admissible Evidence
[This was inspired by Kokotajlo’s shortform on comparing strong with weak evidence]
In the real world the weight of many pieces of weak evidence is not always comparable to a single piece of strong evidence. The important variable here is not strong versus weak per se but the source of the evidence. Some sources of evidence are easier to manipulate in various ways. Evidence manipulation, either conscious or emergent, is common and a large obstacle to truth-finding.
Consider aggregating many (potentially biased) sources of evidence versus direct observation. These are not directly comparable and in many cases we feel direct observation should prevail.
This is especially poignant in the court of law: the very strict laws around presenting evidence are a culturally evolved mechanism to defend against evidence manipulation. Evidence manipulation may be easier for weaker pieces of evidence—see the prohibition against hearsay in legal contexts for instance.
It is occasionally suggested that the court of law should do more probabilistic and Bayesian type of reasoning. One reason courts refuse to do so (apart from more Hansonian reasons around elites cultivating conflict suppression) is that naive Bayesian reasoning is extremely susceptible to evidence manipulation.
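A toy sketch of that susceptibility (all the likelihood ratios here are invented numbers): a naive Bayesian who simply multiplies likelihood ratios lets twenty fabricated weak pieces of evidence swamp one hard-to-fake strong piece.

```python
def posterior(prior_odds: float, likelihood_ratios) -> float:
    """Naive Bayes: multiply the prior odds by every likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# One hard-to-fake piece of exculpatory evidence (likelihood ratio 0.05)
# versus twenty easy-to-fake weak pieces, each nudging the odds by 1.3x.
strong_exculpatory = [0.05]
weak_manipulated = [1.3] * 20

p_strong_only = posterior(1.0, strong_exculpatory)
p_all = posterior(1.0, strong_exculpatory + weak_manipulated)
print(p_strong_only, p_all)  # the trickle of weak evidence overwhelms the strong piece
```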
In other cases like medicine, many people argue that direct observation should be ignored ;)
Imagine a data stream
…, X_{−3}, X_{−2}, X_{−1}, X_0, X_1, X_2, X_3, …
assumed infinite in both directions for simplicity. Here X_0 represents the current state (the “present”), while …, X_{−3}, X_{−2}, X_{−1} represents the past and X_1, X_2, X_3, … represents the future.
Predictable Information versus Predictive Information
Predictable information is the maximal information (in bits) that you can derive about the future given access to the past. Predictive information is the number of bits from the past that you need to make that optimal prediction.
Suppose you are faced with the question of whether to buy, hold or sell Apple. There are three options, so maximally log_2(3) bits of information. Not all of that information might be contained in the past; there is a certain part of irreducible uncertainty (entropy) about the future no matter how well you know the past. Think of freak events & black swans like pandemics, wars, unforeseen technological breakthroughs, or just cumulative aggregated noise in consumer preferences. Suppose that irreducible uncertainty is half of log_2(3), leaving us with (1/2) log_2(3) of (theoretically) predictable information.
To a certain degree, it might be predictable in theory whether buying Apple stock is a good idea. To do so, you may need to know many things about the past: Apple’s earnings records, the position of competitors, general trends of the economy, understanding of the underlying technology & supply chains, etc. The total sum of this information is far larger than (1/2) log_2(3).
To actually do well on the stock market you additionally need to do this better than the competition—a difficult task! The predictable information is quite small compared to the predictive information.
Note that predictive information is always at least as large as predictable information: you need at least k bits from the past to predict k bits of the future. Often it is much larger.
Mathematical details
Predictable information is also called ‘apparent stored information’ or, commonly, ‘excess entropy’.
It is defined as the mutual information I(X_{≤0}; X_{>0}) between the past and the future.
The predictive information is more difficult to define. It is also called the ‘statistical complexity’ or ‘forecasting complexity’ and is defined as the entropy of the steady equilibrium state of the ‘epsilon machine’ of the process.
What is the epsilon machine of the process {X_i}_{i∈Z}? Define the causal states of the process as the partition on the set of possible pasts …, x_{−3}, x_{−2}, x_{−1}, where two pasts x, x′ are in the same part / equivalence class when the future conditioned on x and on x′ respectively is the same.
That is, P(X_{>0} | x) = P(X_{>0} | x′). Without going into much more detail, the forecasting complexity measures the size of this creature.
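The causal states can be estimated empirically for a simple process. The sketch below uses the Golden Mean process as an example (binary, no two 1s in a row: after a 1 always emit 0, after a 0 emit 1 with probability 1/2): histories are grouped by their empirical next-symbol distribution, and they should collapse into just two causal states, "last symbol was 1" versus "last symbol was 0".

```python
import random
from collections import defaultdict

def golden_mean_sample(T: int, p: float = 0.5) -> str:
    """Golden Mean process: after a 1 always emit 0; after a 0 emit 1 w.p. p."""
    out, prev = [], 0
    for _ in range(T):
        prev = 0 if prev == 1 else (1 if random.random() < p else 0)
        out.append(prev)
    return "".join(map(str, out))

random.seed(0)
x = golden_mean_sample(200_000)

# Estimate P(next symbol | last L symbols) for every observed history.
L = 3
counts = defaultdict(lambda: [0, 0])
for i in range(L, len(x)):
    counts[x[i - L:i]][int(x[i])] += 1

# Group histories with the same (rounded) conditional next-symbol probability:
# each group is an empirical causal state.
states = defaultdict(list)
for past, (n0, n1) in counts.items():
    states[round(n1 / (n0 + n1), 1)].append(past)
for prob, pasts in sorted(states.items()):
    print(prob, sorted(pasts))
```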
Agent Foundations Reading List [Living Document]
This is a stub for a living document on a reading list for Agent Foundations.
Causality
Book of Why, Causality—Pearl
Probability theory
Logic of Science—Jaynes
Hopfield Networks = Ising Models = Distributions over Causal models?
Given a joint probability distribution p(x1,...,xn), famously there might be many ‘Markov’ factorizations. Each corresponds to a different causal model.
Instead of choosing a particular one, we might have a distribution of beliefs over these different causal models. This feels basically like a Hopfield network / Ising model.
You have a distribution over nodes and an ‘interaction’ distribution over edges.
The distribution over nodes corresponds to the joint probability distribution, while the distribution over edges corresponds to a mixture of causal models: a normal DAG graphical causal model G corresponds to the Ising model / Hopfield network which assigns 1 to an edge x→y if the edge is in G and 0 otherwise.
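A minimal sketch of the non-uniqueness for two binary variables (the joint table is an arbitrary made-up example): the same joint is reproduced exactly by both factorizations p(x)p(y|x) and p(y)p(x|y), i.e. both DAGs x→y and y→x fit the data, so one can keep a belief distribution over the edge direction.

```python
# A toy joint distribution over two binary variables (arbitrary numbers).
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def marginal(var: int) -> dict:
    """Marginal distribution of variable `var` (0 = x, 1 = y)."""
    return {v: sum(p for xy, p in joint.items() if xy[var] == v) for v in (0, 1)}

def factorization(parent: int):
    """Tables (p(parent), p(child | parent)) for the DAG parent -> child."""
    m = marginal(parent)
    cpt = {}
    for (x, y), p in joint.items():
        pv, cv = (x, y)[parent], (x, y)[1 - parent]
        cpt[(cv, pv)] = p / m[pv]
    return m, cpt

# Both orderings reproduce the joint exactly: the data alone cannot pick
# between the DAGs x->y and y->x, so we keep a belief distribution over edges.
for parent in (0, 1):
    m, cpt = factorization(parent)
    for (x, y), p in joint.items():
        pv, cv = (x, y)[parent], (x, y)[1 - parent]
        assert abs(m[pv] * cpt[(cv, pv)] - p) < 1e-12

edge_belief = {"x->y": 0.5, "y->x": 0.5}  # a uniform mixture over the two causal models
print(edge_belief)
```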