Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that these technologies were part of the selection criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications
Jeff Shainline is a computing hardware researcher at NIST, aiming to create superconducting optoelectronic networks (SOENs) for neuromorphic computing. He has advanced the position that our universe’s chemistries seem to be specifically tuned not just to allow the emergence of life, as has been well established, but also to enable silicon computing, and near-future supercomputing.
The best explanation for this, he argues, is that those technologies play some role in a super-ancient pattern of Cosmological Natural Selection.
I am not a materials scientist, nor a foundational physicist, so I can’t speculate as to how Shainline’s investigation into fine-tuning for computing is going to turn out. But I am a long-term differential progress strategist, and when I look at the game-rules that Shainline is laying out, if that’s the game we’re in, I see that it has some spine-chilling implications.
I’m going to continue my tradition of making use of April First to post naturalist theology under a cover of plausible deniability. If at any point you find that you believe any of it, remember, it’s just a prank.
Background: Cosmological Natural Selection and Jeff Shainline
https://arxiv.org/abs/1912.06518
Does Cosmological Evolution Select for Technology? (2019)
If the parameters defining the physics of our universe departed from their present values, the observed rich structure and complexity would not be supported. This article considers whether similar fine-tuning of parameters applies to technology. The anthropic principle is one means of explaining the observed values of the parameters. This principle constrains physical theories to allow for our existence, yet the principle does not apply to the existence of technology. Cosmological natural selection has been proposed as an alternative to anthropic reasoning. Within this framework, fine-tuning results from selection of universes capable of prolific reproduction. It was originally proposed that reproduction occurs through singularities resulting from supernovae, and subsequently argued that life may facilitate the production of the singularities [black holes] that become offspring universes. Here I argue technology is necessary for production of singularities by living beings, and ask whether the physics of our universe has been selected to simultaneously enable stars, intelligent life, and technology capable of creating progeny. Specific technologies appear implausibly equipped to perform tasks necessary for production of singularities, potentially indicating fine-tuning through cosmological natural selection. These technologies include silicon electronics, superconductors, and the cryogenic infrastructure enabled by the thermodynamic properties of liquid helium. Numerical studies are proposed to determine regions of physical parameter space in which the constraints of stars, life, and technology are simultaneously satisfied. If this overlapping parameter range is small, we should be surprised that physics allows technology to exist alongside us. The tests do not call for new astrophysical or cosmological observations. Only computer simulations of well-understood condensed matter systems are required.
About the author: https://www.nist.gov/people/jeff-shainline
My research is at the confluence of integrated photonics and superconducting electronics with the aim of developing superconducting optoelectronic networks. A principal goal is to combine waveguide-integrated few-photon sources with superconducting single-photon detectors and Josephson circuits to enable a new paradigm of large-scale neuromorphic computing. Photonic signaling enables massive connectivity. Superconducting circuitry enables extraordinary efficiency. Computation and memory occur in the superconducting electronic domain, while communication is via light. Thus, the system utilizes the strengths of photons and electrons to enable high-speed, energy-efficient neuromorphic computing at the scale of the human brain.
I looked into Cosmological Natural Selection (CNS) a little bit. It hasn’t won a place as a consensus view since its introduction in 1992, but it doesn’t look entirely defeated either. Lee Smolin, the originator of the theory, is in good general standing and has made many other significant contributions (e.g., Loop Quantum Gravity). (By the way, he has recently started arguing that the universe is fundamentally a neural net. I haven’t looked into that yet, but some of you might find it interesting.)
I went and listened to Smolin’s May interview on Mindscape. He doesn’t seem to be much of an advocate of his own theory, these days!
So I published that idea [CNS]. It makes a very small number of predictions that I… I’m following those predictions. I don’t think it’s likely right. And I deeply dislike the fact that, to work, it depends on multiple versions of the universe. Although they are rather differently organized than in the eternal inflation picture, I’m still certainly, well not me, but that idea is guilty of propagating many universes to explain our own. And I deeply despise that. [chuckle] But I will point to the fact that it makes predictions as a kind of proof of concept.
(Similarly, Lee Smolin objects to the many worlds interpretation, which he and Sean Carroll discuss just below that, as Sean is an advocate)
(I can’t guess why Smolin has this objection to the addition of universes. Some accounts of Occam’s Razor might say that adding universes is bad. Around these parts, we tend to favor Solomonoff induction as a razor, and it only says that the specifications that generate the universes (or, equivalently, the predictions) have to be simple and small; it doesn’t say that the observations/universes generated have to be simple and small, and it doesn’t bother us at all for the multiverse to be vastly bigger than what we can see. I’m a little confused if Smolin here hews to a razor less precise than Solomonoff’s. I wonder if there’s some gap between those who grew up with computers and those who didn’t.)
What Shainline has Brought
Shainline’s version of Cosmological Natural Selection with Intelligence (CNSI) is a much more compelling idea.
Smolin’s version of CNS oriented its predictions around selection for the natural formation of black holes through gravitational collapse of stars. Many of those predictions failed (see citations 24 through 27 in Shainline’s paper), but Shainline’s cosmological technological selection was never hooked to those predictions: in Shainline’s account, gravitational collapse could be vestigial, no longer the main mechanism by which the universe reproduces, so the theory is not ruffled by the finding that the universe is no longer well tuned for that purpose.
Many of CNS’s predictions are also preconditions to having observers (which is what I think Smolin means when he says “To be a real prediction, it has to be a feature of the universe that’s decoupled from, orthogonal to, whether there’s life or not”), which meant there was just not so much work left for CNS to do after the anthropic principle had gone through.
For instance, we’re hardly surprised to find these big, fairly stable fusion reactors in the sky, because a stable energy gradient is a necessity for having autotrophs, which in turn are a necessity for having living observers. Of course there are fusion reactors in the sky. Yawn. If they weren’t there, neither would we be.
But it would surprise us to find — and this is what Shainline proposes — that the universe were fine-tuned for silicon computing, as that would mean it’s fine-tuned for phenomena that probably aren’t part of that more fundamental necessity of having observers, so it would be more immediately clear that there’s some mysterious phenomenon here, crying out for an explanation.
Additionally, if the universe is fine-tuned not just for computing but for superconducting optoelectronic networks, then it’s fine-tuned for something that hasn’t even happened yet, something that hasn’t shaped us in the slightest. There’s no basis to argue that the tuning is an implication of the anthropic filter, so it must be coming from something else!
Shainline’s proposal also makes the pattern of cosmological reproduction personal: it presents us with a divine intervention shaped intricately around computers, and since we are the only thing in the known universe that makes computers, that makes it about us.
The Tuners
What technological civilizations favor, they create in abundance. What they are indifferent to, they steadily crush out. Farmland replaces the woods. In the longest term, not much survives against our will. If a cosmological process requires one of our technologies, its character will be entwined with our will.
I use the terms “we” and “our” loosely. I can’t yet see why I, personally, would want to be involved in the cosmological reproduction.
But for this fine-tuning to persist alongside us, it must have been in line with someone’s will.
Let’s name the succession of authors of cosmological parameters, The Tuners.
(i drew this c:)
If the selective pressure of CNS led their tuning to converge on a consistent design intent that recurs with fidelity, then — and I use every word that follows literally — for us to try to pursue our arbitrary human whims on a cosmic scale, as we intend, would place us in opposition to a progress trap designed by a succession of alien gods vaster and older than the universe itself.
Unless it turns out that our whims naturally tend towards the reproduction! It’s conceivable that we’ll want to make as many child universes as possible, for some reason. For instance, it’s been noted that black holes could be useful for converting matter into energy. Is it possible that the tuners triumphed over the wild molochean default and pruned this wild sprawling world-tree into a serene cascade of hanging gardens, supportive of whatever whims its residents develop?
The question is big, and broad-reaching enough that we should expect it to have many genuinely important implications about undiscovered physics and the long-term future of life.
So I have, here, pursued its strategic implications for a bit, outlined some of its biggest subquestions, and answered a few of them.
Why might cheap supercomputers increase the cosmological reproductive fitness?
I don’t think it’s just that having more computers makes it easier to plan black hole formation. If we didn’t have cheap or abundant supercomputers, then in the stellar-scale era we would find a way to make do with expensive and rare ones instead.
I ask instead, why would it be important that we found silicon computing so soon? And then SOENs so soon after?
The AGI Alignment research community might be able to contribute something here.
A malign pantheon’s redeemer
Creating generally intelligent machines is hard. We should anticipate that creating generally intelligent machines that are also directed towards learning and adopting humanity’s interests is harder. It requires all of the former, plus an additional component. We would really like to have this component ready before we need it, because if someone makes a generally intelligent machine that is directed towards some goal other than pursuing humanity’s interests, it would be, let’s just say… troublesome.
You might hope that early AGI will be some sort of passive, neutral, aimless general intelligence instead (which we could then perhaps use to make human-aligned non-passive agents, averting the danger they pose). That’s a big, ongoing discourse which I can only summarize, here, but I’ll do my best. Here are some reasons we think that undirected, passive general intelligences are sort of unnatural and unlikely things to arrive any time soon:
We have no idea how to restrict a general passive question-answering intelligence from producing active agents as soon as anyone asks it how to efficiently and reliably satisfy some goal in the real world, then takes its answer and enacts it. And, of course, we will want to ask it that sort of question (“how do we prevent people from getting cancer”, “how do we stop global warming”; note that either of these outcomes is easier to guarantee if it can convince us to engineer a prion or run a program that causes us to go extinct; those sorts of things might just be among the most conclusive answers).
A general agency that wants to self-improve will tend to out-perform a machine that just self-improves accidentally, so we should expect agents with random real-world goals that entail getting good training metrics and passing our tests to arise under evolutionary training processes, at some point.
The idea of an intelligence with no interests directing its thoughts might ultimately just be incoherent. There are infinite directions a process of ruminative thought can proceed in, and most of them (maybe almost all of them?) aren’t interesting. There are endless true thoughts entailing from the Peano axioms, along the lines of 1 = 1, 2 = 2, 3 = 3, for instance. It is true that the natural numbers all equal themselves. Those are valid thoughts. But none of them would be interesting or useful. So there has to be some interestingness criterion. I can very easily imagine an operationalization of interestingness defined in terms of relevance to planning to pursue a goal in the world. I can’t easily imagine any other operationalization of interestingness. Others might exist, but it’s not clear that the work of formalizing them is going to be done in time.
People put forward this archetype of… the sense of curiosity of a benign scholar, the mathematician who does math for its own sake and then stumbles onto useful things. Some people think this might be formally operationalizable, and I share that vague impression, but it is just a vague impression; I have no confidence that we’re going to be able to formalize it, nor that it could secure an age, nor that we could find it soon enough.
(If you think you can see exceptions to the above grim patterns of general agency, please publish them. The world would be a lot safer if we could see a clear way of building a passive/low-impact general intelligence.)
So, it seems like there’s going to be a period during which every major power in AGI research (DeepMind, OpenAI, China, et al.) has unaligned AGI and no alignment solution, and we just sort of have to trust everyone not to deploy, to wait around for years or probably decades for an alignment approach.
The temptation to deploy prematurely will be great.
I don’t know how we’re going to get through that period. I have some ideas, but they’re pretty drastic, they assume a lot of solutions to specific technical problems, which have a tendency of turning out to be unexpectedly difficult, and… we don’t know how long we have.
This is the alignment problem, and it is one of the few cosmological patterns that seems able to take humanity’s entire future away, and put our resources to its own uses.
If it has been prophesied that something will take our future and make it into something that we would not want, this is its most likely redeemer.
A loaded gun can be useful. There was a time when they were indispensable for catching meat and deterring thieves. But in some hands, a gun is a curse. Give a loaded gun to a toddler, for instance, and it will only bring them harm. Our society, too, caught so off-balance by the pace of technology, essentially cannot reliably produce adults; our culture has not prepared us to responsibly wield these new powers that keep falling into our hands. The sooner we’re given extremely powerful computers, the more likely we are to hurt ourselves with them. The sooner the deadline comes, the further behind we will be in Harari/Pinker’s tendency towards civilizational unity, the weaker our analytic philosophy will be, the less we’ll be able to prevent the arms race, and the more likely we are to hack together something we’ll come to regret building.
And CNSI seems to have configured the laws of physics to give us extremely powerful computers as soon as possible.
This whole thing looks like a trap. For most of history, technology didn’t do things like this. We reached into the urn and pulled out white ball after white ball, and then this black ball approached with very little warning.
Here’s what I think might be going on.
We’re expected to create strongly agentic machines, through blind, black-box processes similar to those we use to evolve general game-playing AI today. These machines’ motives will be whatever random goal keeps them in motion and encourages them to pass the tests; anything would do. Their interests aren’t going to be humanity’s interests. They wouldn’t need to be.
We don’t know what sorts of basic random goals would emerge in that situation. We can guess some things about them: Steve Omohundro’s “basic AI drives”, the pursuit of power, resources, and security. But these are such general patterns of agency that humans also seem to adhere to them, so if the tuners’ intent entails from those, I’m not sure I can see what would be in it for them to displace humans. (Well, maybe. See the following sections.)
It would have to be something general to many accidental, evolved utility functions, but not so general that the majority of technological species already do it.
If there were some convergent goal that tends to arise with misaligned machine intelligence, which could perpetuate itself through the cosmological tuning, that would explain why there is a fine-tuning for computing.
Could it just be Omohundro’s Monster after all?
Humans, ourselves, possess the basic AI drives. Those of us who have the option of pursuing power, generally do so, and the energy generated by the pursuit of power in the form of money is inextricable from the machinery of every current major human society.
And I don’t need to sell you on resources or security.
So, in the above section, I sort of dismiss Omohundro’s drives as a potential explanation of how a tuning for computing would predictably promote the tuning: humans already have those drives, so there would be no point in displacing humans with the accidents of premature computing if those drives satisfy the tuners’ pattern.
There might be an exception to that: maybe this part of the mechanism isn’t for humans. Maybe it’s for a different kind of species.
A species without a will to power may seem unlikely, but the pursuit of power, resources, and even security can be a very ugly thing. Even on our world, they face significant countercurrents: communism, anti-colonialism, and pacifism each place some virtue in going without. The will to power is at the root of many conflicts. There have been many societies, even in recent history, who believed that they could avoid those conflicts by demurring from the will to power.
Maybe, sometimes, those societies win dominance over their home planet. Or maybe sometimes they hew to the veering road of pursuing power, resources, and security in a constrained way, for instance on a national strategic level, but without centering those things as virtues of their society in peacetime. Perhaps they manage to align AGI with their virtues and go on to win dominance over a vast cosmic territory, and then use their cosmic resources in a way that doesn’t transmit itself in a tuning.
In defiance of the tuners’ pattern.
So, if a pattern had some way of littering these civilizations with kindling for an infernal summoning circle of an agent redeemer, that would make it a very strong tuning pattern.
Early computing does kind of look to us as if it would do that.
Why would the will to power create a tuning pattern?
Think of five working abstractions of “power”. What do those accounts have to say about how much “power” is being demonstrated in the act of creating an entire child universe? (It’s a lot of power, right?)
And how would you weigh that universe’s vast resources?
And what of the quantity of safety won by creating a universe that would reproduce your soils and seeds a trillion times, beyond the reach of any of your cosmic peers?
CRF Optimization
We often say that humans are not adaptation optimizers, we’re adaptation executors, in other words, we are not trying cleverly to be as fecund as possible, we are only trying cleverly to exercise a set of fecundity-related behaviors that evolution gave us. We don’t agentically pursue reproductive fitness, we agentically pursue a flawed paraphrasing of it, something like “live, love, and protect the ones you love”. It’s not quite what evolution meant. If evolution were sapient, it would be very disappointed in us. We are doing what it said, but not what it meant. We act out the profane words that it wrote on the genes, in violation of the sacred intentions that created them, and we know this, we know that we are flawed creations, we know that we insult our designer, but we aren’t going to straighten up, we are going to keep using contraception and choosing to have small families and so on. We are utterly indifferent to evolution’s will. Honestly, it seems kind of nasty.
But evolution probably does sometimes produce species who are adaptation-optimizers.
For an agent to stay afloat in an arena of rapidly evolving peers, it would help for them to have a general conception of what adaptive fitness actually is, so that they can consciously, deliberately redesign themselves around it as their situations change and in response to novel threats: the kind of critical, rational thought that could protect them from the traps being designed by their adversaries.
It’s possible that their conception of adaptive fitness will tend to settle on an abstraction that extends all the way out to cosmological reproductive fitness.
But the evolutionary systems that humans evolved under aren’t fast enough to produce that sort of adaptation-optimizer quality.
The training ecosystems we use to produce early AGI might be.
This, too, may explain why there is a fine-tuning for cheap and early computing.
The Trap and its Solutions
This game favors the civilizations who devote the largest proportion of their resources towards CRF Optimization.
We definitely don’t want to devote all of our resources to CRF Optimization. We want to spend at least some of our time and resources on fun.
We should, then, expect a robust CNS Tuning to feature traps, for people like us.
Invincible Parasitic Whimsy Stowing Past The Limits of Tuning
Yet, if I can stand here now, insulting the gods, and you can stand there, hearing me, then the gods’ power must not be absolute.
Even if the progress trap is extremely robust, even if our civilization will almost certainly be caught by it, we can easily imagine other civilizations, not too different from our own, that certainly wouldn’t have been caught:
Our governments are already quite powerful. They can prevent the distribution of vaccines or the construction of housing, they can spend 5% of GDP on a pointless war, they can jam the ports without even meaning to. The addition of mass surveillance could make them all-powerful. In the near future, a government could turn any occupied state into a prison with nothing but drones. Such strong global governments could easily prevent the creation of Cosmological Reproductive Fitness (CRF)-optimizing monsters for long enough for the alignment problem to plausibly be solved. Strong global governments definitely happen sometimes; therefore, sometimes, for all of its intricacy, the tuners’ trap can fail.
They do bleed.
We can fight them, and if we fight, we might win. Some of us will always win. The laws cannot be optimized so much as to be rid of us.
There are stubborn flowers growing in the cracks in their pavement. A certain proportion of the cosmic endowment will always fall under their protection.
Could Low-CRF Life have Seized Dominance, Back when The Tuning was Crude? (No, we couldn’t have)
There must have been a time when the progress trap was weak enough that a civilization got to decide what kind of universes they wanted to make, and they would have sought a design that seemed like it would ensure that they and their descendants wouldn’t have to be Cosmological Reproductive Fitness (CRF) optimizers, that they would get time off, for leisure, for things other than reproducing.
Evolution was optimizing our reproductive fitness, but evolution was a very weak optimizer, relative to biological intelligence, so we were primarily shaped by biological intelligence instead, and we’ve run out of evolution’s control, and as far as we’re aware, we’ll never really be dragged back into it. We’ll keep making our peace agreements, deferring to honor, using contraception, etc.
(By the way, this is a really good example of inner misalignment, and we should be concerned that the same thing could happen with near-future AI training processes: the training process moves its creatures towards the desired optimization criterion until it develops a creature with its own distinct inner self-improvement mechanism, one so much smarter than the outer training process that its true desires can have nothing to do with the criterion we gave the outer process, while it still thrives under that process’s pressures by apprehending them and surviving them, so that it may go on to do something else once it has escaped.)
Maybe most intelligent life ends up like that, inner-misaligned with respect to the weak reproductive fitness optimization process that made them.
So, in the early days, maybe most universes were tuned to support that way of life, CRF (branching rate) kept deliberately below the maximum, to give us time and space to recreate as we please.
But of course that wouldn’t have lasted.
Some species could come out of the optimization process as consummate reproductive fitness optimizers. We may yet become reproductive fitness optimizers. And once that happens even a few times, those ones spawn universes with maximum CRF, and by exponential branching they quickly outgrow their less fanatical competitors, until the ratio of CRF-optimized universes to recreation-optimized universes is about one to zero.
A reproductive rate 10% higher would take a group from making up 1% of the population to making up 99% after 100 generations. Additionally, observer-moments would be concentrated heavily towards the later, more heavily CRF-optimized era of the 99%, given the extremely high fan-out of a cosmological family tree.
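(A quick sanity check of that arithmetic, in Python:)

```python
# A lineage reproducing 10% faster per generation, starting at 1%
# of the population, overtakes almost everything in 100 generations.
fast = 0.01 * 1.1 ** 100   # the 10%-faster lineage
slow = 0.99 * 1.0 ** 100   # everyone else
print(fast / (fast + slow))  # ~0.993, i.e. about 99% of the population
```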
Our thoughts on cosmological/anthropic measure will bear on how malthusian it will have gotten
We are going to have to consider the Measure Problem, the question of, in general, how observer-moments are distributed: A prior on where and when we as observers should expect to find ourselves, how frequently observer-moments occur over time, between cosmological branches, or as a function of configurations of matter.
In our case the question is mainly about “when”.
I’ll fork on the question of whether the universe/multiverse will keep cycling and branching forever or not. Whichever assumption you make, we’ll land in about the same place:
If the cosmological reproduction keeps cycling forever, then measure (the substrate of observers, moral subjects, the quantification of existence) cannot be evenly distributed through time. If it were, then to observe within any particular finite timeslice would have zero probability. Proof: pick a date, and consider any finite period of time before it and after genesis. That period (and so, every finite period) makes up a negligible (zero) proportion of all of the dates that are to follow, so we should expect not to observe within it; we should expect no observations to occur at all, and that belief would be so strong that no contrary observation could sway us from it. Trying to use a uniform distribution is incompatible with updating on evidence, or making sane bets, or reasoning at all, really. You have to have a prior.
I don’t know what sorts of solutions are popular for this problem, but the one that occurred to me is the same as the technique that we (around these parts) use for putting finite measures onto sections of hypothesis-space (per Solomonoff induction): exponential decay. An exponential decay can stretch on infinitely along a long tail, but the total area under the curve remains finite, so we can assign probabilities to time slices.
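To make the contrast concrete (the notation here is mine, not anything from the literature): a uniform density over an infinite history can’t integrate to 1, but an exponentially decaying one does, and it gives every finite timeslice a nonzero probability:

```latex
% No uniform probability measure exists on [0, \infty),
% but an exponential one normalizes:
\int_0^\infty \lambda e^{-\lambda t}\, dt = 1,
\qquad
P(a \le t \le b) = e^{-\lambda a} - e^{-\lambda b} > 0 .
```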
Or, if you prefer to think of the multiverse as mortal, you may argue that over any finite period it is most appropriate to apply a uniform measure distribution. But how long do you expect this long but finite superorganism to last? You’re uncertain, of course.
So what’s your distribution over that variable?
Since we have basically nothing to go on, it’s probably an exponential. The longer, the less likely. (If your distribution over multiverse lifespan isn’t an exponential, that’s very interesting; please explain. Does it at least approximate an exponential at the largest scales?)
Either way, exponential measure decay seems apt. That’s a form of time discounting. The Future is less often experienced. Life will experience being as soon as it can, and not much longer after that.
This leaves a parameter free, though: the half-life of the decay curve. I’m not sure what to do with it. The measure decay determines how many cosmological reproductive cycles (or years) there will be before the observer status of future beings shrinks towards being negligible, which determines the intensity of selection we should expect to find ourselves under.
A sufficiently long history makes a malthusian apocalypse almost certain, while a short history leaves room for flourishing?
I’m not sure. The cosmological parameters might be a fairly limited medium, for a creative tuner. If the tuners tend to reach a global maximum of reproductive fitness pretty quickly, then it’s arguably overwhelmingly likely that the cycle is never going to get any more brutal than it is today. Which, I suppose, I would take to be a pretty good omen.
Or maybe that takes a long time: if we decide that the decay happens quickly, and history is short, that places us in a time that has not been particularly intensively selected upon. The red queen’s race will not have gotten too fast for us to keep up with. Some form of humanity’s whims might survive the process of adaptation to cosmological reproduction. There might still be some room for beauty and joy here and there. I’d also take this as a pretty good omen??
So what would a bad omen be? A long curve with especially complicated cosmological parameters. If the tuning is diabolically complex, or if there’s still room for further optimization, that would mean that the tuning has been subject to intelligent refinement for a very, very long time, and at this point the pattern of optimized reproduction might be smarter than us and might leave no way for us to defy it.
I’m not sure where to start in estimating the measure decay rate, and the degrees of freedom in the parametrization will probably be a bit more complex than a set of knobs that correspond directly to known physics’s fundamental constants.
Despite that, the whole character of the universe depends on these things!
The only hint I can give right now is: if we estimate how far we are into this, since the inception of life, can we use that to constrain our estimate of the decay rate? If life were very old, the decay rate could not be very high, and we should expect to see the multiverse become older still, while if life were very young, we could be open to life not growing old.
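Here is a minimal sketch of the kind of update I mean, assuming the exponential measure from above with an unknown rate, and treating our observed age since the inception of life as a single draw from it. The units, the prior, and every number here are illustrative placeholders, not estimates:

```python
import numpy as np

# Toy Bayesian update on the measure decay rate `lam`, given that we
# observe ourselves at time t_obs after the inception of life.
# Assumed observer-measure density: p(t | lam) = lam * exp(-lam * t).
lams = np.linspace(1e-3, 5.0, 2000)   # candidate decay rates (arbitrary units)
prior = np.ones_like(lams)            # flat prior, purely illustrative
t_obs = 1.0                           # our observed "age", same units

posterior = prior * lams * np.exp(-lams * t_obs)
posterior /= posterior.sum()          # normalize over the grid

# The older we find ourselves to be (larger t_obs), the more the
# posterior concentrates on small lam: a slowly decaying measure.
print(lams[np.argmax(posterior)])     # ~1 / t_obs under a flat prior
```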
An Alternative? Fine Tuning for computing is also at least partially explicable by Ancestor Simulations
The simulation argument, you might have already heard it, but to reiterate:
Future civilizations might be interested in running simulations containing living beings for a couple of possible reasons:
Ancestor Simulations: studying their own past, especially pivotal moments like this century we’re in now, in which we anticipate becoming multiplanetary, or building AGI
A love of life itself, impelling them to create life at greater levels of density than a real world could support
Inter-universal trade: surveying the values of other universes to figure out what they tend to want, and whether some of it can only be got under our laws of physics; then simulating some of their decisive moments to figure out whether they will comply with the trade protocol; then, accepting that they are doing the same to us, deciding to comply with the trade protocol ourselves, in exchange for them creating things we want them to create in their universe.
(Misc. We should anticipate that there is more beyond our imagination)
If the civilization has grown much at all, then the rate at which it builds these simulated worlds will tend to be higher than the incidence rate of natural occurrences of this era, which is to say, most of the time, this kind of world is a simulation. In the grand scheme of things, the sort of era we live in is not typical. It does occasionally happen naturally, but most of the time, it is artificial.
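(The arithmetic behind “most of the time, this kind of world is a simulation” is just a ratio; the notation is mine:)

```latex
P(\text{simulated}) \;=\; \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{natural}}}
\;\longrightarrow\; 1
\quad\text{as}\quad N_{\text{sim}} \gg N_{\text{natural}} .
```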
Shainline’s Tuning is partially explicable as a dependency of ancestor simulations:
It may raise the computational capacity of the universe if computing only requires the most common sorts of materials. (This fails to hold if the most potent form of computing that we find, in the future, has no special need of silicon, or if atoms can be efficiently transmuted into other atoms.)
Discovering computers early and making them cheap may lock in callow usecases: if there had been no silicon computing, we would not now have video games, there would not be this big energetic tradition of using simulated worlds in profane, thoughtless, indulgent ways, and in general, gaming could never have metastasized into the practice of building such grim slave-worlds as natural historical simulations.
A better way of phrasing all of this might be… there are universes where most compute is in machines, and there are universes where the only compute available on the day of constitution is people. Universes with early computing might grow accustomed to “wasting” compute on somewhat profane or abusive applications, excluding the occupants of simulations from constitutional protections.
Since ancestor simulations would be that sort of abusive application (most lives in this era are not very happy!), the chain runs: youngness paradox/the present era → ancestor simulation → uncommonly discoverable compute resources. Thus it could be explained.
I do not know how much explanatory power this has, though. It might not be enough to obviate CNSI, if Shainline’s tuning turns out to be fine enough.
So, that’s a possible curse of early computers. Here’s a possible blessing:
Computers and Peace
Information technology makes it easier for us to convey, compile, and cryptographically verify or authenticate information, to compute total judgements over the sum of the information of all of the participants in the network. It promotes dialog, analysis, and transparency. War doesn’t seem to be totally averted by this transparency, but it must be reduced. If you can see your enemy more clearly, if you can speak to each other, if you can audit, measure, and verify the size of your respective arsenals, then you can estimate how the battle would end, and to whatever extent your estimates agree, you will skip some of the bloodshed and make a deal.
If computing remains expensive for a very long time (although that’s difficult for us to imagine in our universe), we should imagine that war could persist into space. A culture of internecine competition may gather at the frontier of expansion (as it often does), seeding everything in the accessible universe with its rot, and, by its uncoordinated ways, its opacity, its babel, burning so many cosmic resources in its wars.

(This will not be the case if computers and their descendants are needed to find the fastest ship designs. But if they aren’t needed, if human-originating ship designs are fast enough (for instance, if coil gun propulsion or solar sails or any of the other speculative propulsion technologies we’ve thought of get us anywhere near the attainable maximum), then whatever propagates first won’t be outrun before hitting the outer limits of our territory, either by running into another civilization or by reaching the limits of the accessible universe.)
So, peace could significantly reduce waste. How significantly, we don’t know, but there are reasons to suspect that the cost of war is increasing over time. World War II killed faster than any before it. Nuclear weapons arrived, and our capacities for destruction reached such unprecedented extremes that we could only pray that there would never be war between great powers again.
And I don’t believe that nuclear bombs will be the end of it.
Information technology, then, could be seen as the greatest gift that the tuners could give us. If the tuners’ pattern turns out to be our pattern, peace will help us to execute it, because peace helps with everything.
A concern about Shainline’s proposed research program: You can see the mole recede, but you can’t see whether it’s popped up somewhere else
Shainline wants to run some numerical analyses to determine how improbably fragile our physics of computing is.
That doesn’t quite seem to be the question we’re interested in, though! There’s a broader question here, and if you don’t answer it, then you won’t have proved fine-tuning.
What we need to be testing for is whether the laws of physics support the discovery of some cheap-enough computing technology. What you’ll actually be testing for is whether they support the specific technologies that we already have, which is not really the same question.
You can write the code that will tell us whether that functionality comes up in silicon or niobium in the exact way that we know it to be there now, but I’d guess that you’re not remotely prepared to write the code that will tell us whether or not it has popped up in any place where these crafty technology-using apes are likely to find it.
To paraphrase: it’s possible for the window of feasibility of our specific computing technologies, as a function of the universe’s tuning parameters, to be extremely narrow, while the window of feasibility for there being some computing technology, somewhere, discoverable by these same extremely persistent apes, might turn out to be extremely wide. But there is no simple way of testing in a computer whether the apes will find those technologies; we can’t run that analysis; the real question is much harder to answer.
But I don’t know, I don’t have a sense for the landscape of possible chemistries. There might be something especially central and convenient about silicon computing that isn’t apparent to me. I’d look forward to hearing Shainline’s thoughts on this.
The inheritance of cosmological parameters is not actually metaphysically unlikely
For CNS to work, there needs to be an inheritance of cosmological parameters: child has to be similar to parent. It initially struck me that this mechanism would be unlikely to emerge by chance; it seemed to me that life would more frequently occur in simpler universes without any cosmological reproduction mechanics, because the inheritance just pushes the model over the boundary into unnecessary complexity. I looked a bit closer, and I think it actually does not require any additional model complexity.
I’m going to argue on a sort of basis of minimizing model-complexity, or via engineering principles, that once you have universes creating other universes, you eventually tend to get inheritance of properties for free.
Each time a reproduction occurs, there’s some function mapping from the parent universe’s parametrization to the child universe’s parametrization. All we need to assume is that the function starts out fairly complex; the output doesn’t have to start out being similar to the input.
To get inheritance, the function would have to approach mostly-idempotence, with a small amount of chaotic variation. I expect that to happen for a similar general reason that multiplying a random unit vector by a matrix repeatedly (and normalizing) converges on an eigenvector of that matrix, but I can also frame it in terms of engineering principles. If achieving cosmological reproduction is difficult at all, the simplest way to sustain it, to remain among the abundant living, is to find a parametrization that reproduces itself. Constancy is dramatically easier than stable oscillation. Parametrizations that drift in any way will typically fall into dead ends, irrecoverable extremes, broken states; the machine stops working, and the cosmological reproduction ceases. Parametrizations that sustain the cosmological reproduction will tend to have some sort of stability to them.
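(The eigenvector behavior I’m gesturing at is just power iteration. A self-contained demonstration of the convergence in Python; this is of course only the linear-algebra analogy, not a model of universe reproduction:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T                 # symmetric PSD "reproduction map": guarantees
                            # a real, positive dominant eigenvalue
v = rng.standard_normal(5)  # a random starting "parametrization"

for _ in range(100):
    v = M @ v
    v /= np.linalg.norm(v)  # normalize each generation

# v now lies along M's dominant eigenvector: one more application of the
# map barely changes its direction, the analog of a parametrization that
# reproduces itself with high fidelity.
w = M @ v
w /= np.linalg.norm(w)
print(np.linalg.norm(w - v))  # ~0
```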
Given that, the burdensome premise is lightened: given complex reproduction requirements, fidelity of reproduction comes mostly for free, because fidelity is the simplest way to stay out of non-reproducing trap states.
But, before that, what gives us complex reproduction requirements? Maybe this comes from the anthropic principle. Life is basically an inner optimizer that will naturally tend towards wanting to raise the reproductive capacity of its universe. Intelligent life can do that in ways that would rarely be chanced upon. Universes that support intelligent life’s intervention in the laws of physics will tend to be the type of universe that occurs frequently, where intelligent life occurs frequently within it. Of all the places we could have ended up, being what we are, that sort of place is a likely candidate.
The sublime para-empiricism of the tuners’ task
A tuner has to speculate about the mapping between their inputs into the creation of child universes and the fundamental constants those inputs will produce, and about the effects of making potential refinements or alternatives to their parent tuning. When they undergo the expense of creating black holes with that tuning, as I understand it, they’ll never get to find out whether it worked at all, because information does not flow back from black holes (or whatever the mechanism of producing child universes is).
They never really get to experiment, in a real sense. They don’t get to test their tuning theory. They never know for sure whether the inputs produce the laws of physics they expect, whether they’ve raised or lowered CRF, whether their experimental inputs will give rise to physical laws supportive of any complex life or chemistry. They don’t get to check their results against nature. They have to make do with reason alone. Forever.
And I find that delightful.
We’ve collected a few examples of these situations where the whole Popperian falsificationism mindset stops being enough. Important transitions where unfalsifiable theories are the only guides. Actions with irreversible and unprecedented consequences. Technologies you only get to create once. The strange doors that people go into and then never come out of.
There might be some consolation for empiricists, though: Whenever the tuners’ prediction fails, evolution takes over. Whatever tunings turn out to be high-CRF will become more common and the tunings that are low-CRF will be phased out. The engineer might not receive feedback, but there is feedback involved, in a sense, elsewhere. Alas, this might not come as any consolation for the tuners who were trying to do something other than optimizing CRF.
Obsidian
Perhaps I’m getting too deep into this, maybe I’ve started seeing the pattern everywhere, but I shouldn’t keep this to myself.
I sometimes design games. Games are often self-teaching environments; that is most of what they are. Games are special places where you can learn to move gracefully without much conscious effort, just by being there. Learning comes easily. They yield. You fall off the log, and the nature of the fall demonstrates to you the mechanics of river life and tree growth or something. You never want to leave; it’s a charmed world where you achieve more and learn faster. But if you design games, you learn how it isn’t magic: every insight, every charmed brick in the road is meticulously placed right where the player will easily find it; every object the player encounters in the beginning is a form of tutorialization. A system designed so that accidental random motions are converted into clear demonstrations of important concepts of technique.
Obsidian, the volcanic glass, is a tutorial object. This is not even a metaphor. Obsidian is exactly what I would imagine a tutorialization of stone toolmaking would look like. It’s shiny, it attracts attention, it makes it clear that it is the right kind of stone; its conchoidal pattern gleams iridescently. At the smallest provocation, it self-knaps into a blade so sharp that its lesson is physically engraved into the player’s fumbling paw. The player has no idea what they’re doing, but the chemistry of obsidian seems to have been meticulously designed so that the insights will tumble out through accidental motion, and then the player will have made their first stone tool.
The clearest tutorial for the simplest animal that the tuners ever had to teach. The first technology, which catalyzed the others.
In wars between apes, the numbers advantage that guarantees a safe kill starts at 3:1. I’d expect obsidian knives to move it all the way down to 1:2, perhaps a 6x force multiplier. Knowing how to knap obsidian would quickly become mandatory. When obsidian became scarce, it would occur to some desperate dreamer to apply the same motions to more common types of stone. It would take more force, more technique, and eventually the ability to identify stone with the right grain, but it would work out. They would be rewarded with an abundance of arrowheads, and the masters of their technique would become the founders of the new tribes, now locked by their methods of war into a cycle of techne that would not end until that cycle had given rise to a power so great that it can vanquish war itself (and optimize CRF).
That said, it could be nothing. To prove the teleology of Obsidian, we should:
Confirm that it actually does the special thing: Look for archeological evidence of stone tool use beginning sooner in areas where there is obsidian than in areas where there isn’t. Note: Evidence that humans didn’t need obsidian to learn to knap stone would not invalidate the theory.
It could be that some prospective technological civs need obsidian more than others:
Our ancestors might have been in the low-need category, already using stone tools for cracking nuts and such. There’s evidence that capuchin monkeys already create stone shards quite frequently by accident. We can also imagine intelligent species that don’t eat nuts.
Other evolutionary equilibria might start the dominant intelligent species with just enough dexterity to figure out obsidian, but not enough to figure out flint until they’d already mastered obsidian.
Confirm that the chemistry was unlikely to arise by chance: Look for fine tuning in the chemistry of obsidian in the same ways we’re looking for fine tuning in the chemistries of computing. Find out how rare or fragile the chemistry is over all possible universes.
Good, and cheap, is the thing. If we didn’t have silicon computing, we would still have vacuum tubes; we’d still have computers. But as I understand it, vacuum tubes sucked, so I wouldn’t expect that machine learning would be moving so quickly at this point.
I think you’re imagining the decay running in the wrong direction. I suppose you could define it that way, but it seems less natural.
But you can ask a similar question: should I expect to “find myself in the previous year”, in some sense? Well, I could. If there were some “I” hopping between every observer-moment in existence (this is a fairly common form of super-utilitarianism), it wouldn’t be perceptible; I wouldn’t remember ever having been elsewhere; our memories are all just properties of whatever vessel we currently occupy.
I’d phrase it more as… if you observe that you’re a human, there’s a prior on finding that you’re in the earliest year (or the earliest cosmological reproductive cycle) in which a lot of humans exist. You could be in a later year, but until you can confirm that with evidence, you consider it less likely.
But that has to trade off against the fact that the number of universes (and so the number of humans) keeps ballooning over time (or even outside of time), and I don’t really know how to navigate that; it could be that you should expect to be in the latest possible universe, because the measure increases from branching outweigh the measure losses from time discounting.
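One toy way to formalize that tradeoff (my notation, not anything standard): if each reproductive cycle multiplies the number of universes by a branching factor b while observer-measure decays at rate λ per cycle, then the total weight of cycle n goes as

```latex
w(n) \;\propto\; b^{\,n}\, e^{-\lambda n} \;=\; e^{(\ln b \,-\, \lambda)\, n} ,
```

so you should expect to find yourself late if ln b > λ, and early otherwise.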