Because ordinary matter is stable, and the Earth (and, for more anthropically stable evidence, the other planets) hadn’t gone up in a nuclear chain reaction already?
Without using hindsight, one might presume that a universe in which nuclear chain reactions were possible would be one in which they happened to ordinary matter under normal conditions, or else only to totally unstable elements, not one in which they barely worked in highly concentrated forms of particular not-very-radioactive isotopes. This also explains his presumption that even if it worked, it would be highly impractical: given the orders of magnitude of uncertainty, it seemed like “chain reactions don’t naturally occur, but they’re possible to engineer on practical scales” would be represented by only a narrow band of the possible parameters.
I admit that I don’t know what evidence Fermi did and didn’t have at the time, but I’d be surprised if Szilard’s conclusions were as straightforward an implication of current knowledge as nanotech seems to be of today’s current knowledge.
Strictly speaking, chain reactions do occur naturally; they’re just so rare that we never found one until decades after we knew exactly what we were looking for, so Fermi certainly didn’t have that evidence available.
Also, although I like your argument… wouldn’t it apply as well to fire as it does to fission? In fact we do have a world filled with material that doesn’t burn, material that oxidizes so rapidly that we never see the unoxidized chemical in nature, and material that burns only when concentrated enough to make an ignition self-sustaining. If forests and grasslands were as rare as uranium, would we have been justified in asserting that wildfires are likely impossible?
One reason why neither your argument nor my analogy turned out to be correct: even if one material is out of a narrow band of possible parameters, there are many other materials that could be in it. If our atmosphere were low-oxygen enough to make wood noncombustible, we might see more plants safely accumulating more volatile tissues instead. If other laws of physics made uranium too stable to use in technology, perhaps in that universe fermium would no longer be too unstable to survive in nature.
Consider also the nature of the first pile: purified uranium and a graphite moderator in such large quantities that the neutron multiplication factor was driven just over one. Elements less stable than uranium decayed earlier in Earth’s history; elements much more stable than this would not be suitable for fission. But the pile produced plutonium by its internal reactions, which could be purified chemically and then fizzed. All this was a difficult condition to obtain, but it was predictable that human intelligence would seek out such points in possibility-space selectively and create them: that humans would create exotic intermediate conditions not existing in nature, by which the remaining sorts of materials would fizz for the first time. And such conditions might indeed be expected to exist, because among the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years; these would be just-barely-non-fizzing, and could be pushed along a little further by human intervention, with a wide space of possibilities for which elements you could try. Or to then simplify this conclusion: “Of course it wouldn’t exist in nature! Those bombs went off a long time ago; we’ll have to build a slightly different sort! We’re not restricted to bombs that grow on trees.” By such reasoning, if you had attended to it, you might have correctly agreed with Szilard, and been correctly skeptical of Fermi’s hypothetical counterargument.
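The knife-edge around a multiplication factor of one can be made concrete with a toy calculation. In a sketch like the following (illustrative numbers only; the real pile’s excess reactivity was far smaller and controlled moment to moment), each generation of neutrons produces k times as many as the last, so the population goes as k to the power of the generation count:

```python
# Toy model of a neutron chain reaction: each generation produces
# k times as many neutrons as the previous one, so after n generations
# the population is n0 * k**n.  Below k = 1 the chain dies out;
# just barely above k = 1 it grows without bound.
def neutron_population(k, generations, n0=1.0):
    """Neutron count after `generations` generations, starting from n0."""
    return n0 * k ** generations

# Illustrative values only -- not CP-1's actual reactivity.
for k in (0.99, 1.0, 1.006):
    print(k, neutron_population(k, 1000))
```

The point of the sketch is how sharp the threshold is: a fraction of a percent on either side of one is the difference between a chain that fizzles and one that grows by orders of magnitude.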
Not taking into account that engineering intelligence will be applied to overcome the first hypothetical difficulty is, indeed, a source of systematic directional pessimistic bias in long-term technological forecasts. Though in this case it was only a decade. I think if Fermi had said that things were 30 years off and Szilard had said 10, I would’ve been a tad more sympathetic toward Fermi because of the obvious larger reference class—though I would still be trying not to update my brain in the opposite direction from the training example.
Except there aren’t any that are not eliminated by, say, 10 billion years. And even 40 million years eliminates everything you can make a nuke out of except U235. This is because, besides fizzling, unstable nuclei undergo a highly asymmetric form of spontaneous fission known as alpha decay.
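The elimination argument is just exponential decay. A quick sketch with approximate published half-lives (the values below are rounded; treat them as order-of-magnitude inputs) shows why the fissile candidates other than U-235 are simply gone after the age of the Earth:

```python
# Fraction of an isotope surviving after time t is (1/2) ** (t / half_life).
# Half-lives in years, rounded from commonly published values.
HALF_LIVES = {
    "U-235": 7.0e8,
    "U-238": 4.5e9,
    "Pu-239": 2.4e4,
    "Th-232": 1.4e10,
}

def surviving_fraction(isotope, years):
    return 0.5 ** (years / HALF_LIVES[isotope])

AGE_OF_EARTH = 4.5e9  # years, approximate
for iso in HALF_LIVES:
    # Pu-239 underflows to zero; U-235 barely survives at ~1%;
    # U-238 and Th-232 are still mostly here.
    print(iso, surviving_fraction(iso, AGE_OF_EARTH))
```

So “just-barely-non-fizzing” materials with convenient half-lives between those extremes mostly don’t exist in nature; they had to be bred, as plutonium was.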
Good counter-analogy, and awesome Wikipedia article. Thanks!
A clever argument! Why didn’t it work on Reality?
I spot two holes.
First the elephant in the living room: The sun.
Matter usually ends up as a fusion-powered, flaming hell. (If you look really closely, it is not all like that; there are scattered little lumps in orbit, such as the Earth and Mars.)
Second, a world view with a free parameter, adjusted to explain away vulcanism.
Before the discovery of radioactivity, the source of the Earth’s internal heat was a puzzle. Kelvin had calculated that the heat from Earth’s gravitational collapse, from dispersed matter to planet, was nowhere near enough to keep the Earth’s internal fires going for the timescales which geologists were arguing for.
Enter radioactivity. But nobody actually knows the internal composition of the Earth. The amount of radioactive material is a free parameter. You know how much heat you need, and you infer the amount of thorium and uranium that “must” be there. If there is extra heat due to chain reactions, you just revise the estimate downwards to suit.
Sticking to the theme of being less wrong, how does one see the elephant in the room? How does one avoid missing the existence of spontaneous nuclear fusion on a sunny day? Pass.
The vulcanism point is more promising. The structure of the error is to say that vulcanism does not count against the premise “ordinary matter is stable” because we’ve got vulcanism fully explained: we’ve worked out how much uranium and thorium there needs to be to explain it, and we’ve bored holes 1,000 km deep and checked and found the correct amount. But wait! We haven’t done the bore-hole thing, and it is hard to remember this, because it is so hopelessly impractical that we are not looking forward to doing it. In this case we assume that we have dotted the i’s and crossed the t’s on the existing theory when we haven’t.
One technique for avoiding “clever arguments” is to keep track of which things have been cross-checked and which things rest on only a single chain of inference and could probably be adjusted to fit a new phenomenon. For example, there was a long time in astronomy when estimates of the distances to galaxies used Cepheid variables as a standard candle, and that was the only way of putting an absolute number on the distance. So there was room for a radical new theory that changed the size of the universe a lot, provided it mucked about with nuclear physics, putting the period/luminosity relationship into doubt (hmm, maybe not; I think it is an empirical relationship, based on using parallax to calibrate galactic Cepheid variables). Anyway, along came type Ia supernovae as a second standard candle, and intergalactic distances are now calculated two ways and are on a much firmer footing.
So there are things you know only via one route, with an implicit assumption that there is nothing extra that you don’t know about. Things that you know only via a single route can be useless for ruling out surprising new things.
And there are things you know via two routes that pretty much agree. (If they disagree, then you already know that there is something you don’t know.) Things you know via two routes do have some power to rule out surprising new things. The new thing has to sneak in between the error bars on the existing agreement, or somehow produce a coordinated change to preserve the agreement, or correctly fill the gap opened up by changing one thing and not the other.
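The “sneak in between the error bars” rule can be sketched as a tiny compatibility check (the numbers below are purely illustrative, not real measurements):

```python
import math

# Two independent measurements agree if their difference is small
# compared to their combined error bar.  A surprising new theory has
# to fit inside this window, or explain the disagreement it creates.
def routes_agree(x1, s1, x2, s2, n_sigma=2.0):
    """True if (x1 +/- s1) and (x2 +/- s2) are compatible within n_sigma."""
    return abs(x1 - x2) <= n_sigma * math.hypot(s1, s2)

# Illustrative values: the same central values can be compatible or
# incompatible depending entirely on how tight the error bars are.
print(routes_agree(70.0, 2.0, 73.0, 1.5))   # loose errors: compatible
print(routes_agree(70.0, 0.5, 73.0, 0.5))   # tight errors: something is missing
```

The design point is that the second route buys you nothing until the error bars shrink below the size of the effects you want to rule out.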
I thought they did know that if the sun was solely dependent on chemical reactions, then it would have burned itself out more quickly than the age of the earth suggested.
I was glibly assuming that Fermi would know that the sun was nuclear powered, so he would already have one example of a large-scale nuclear reaction to hand. Hans Bethe won his Nobel Prize for discovering this. Checking dates: this obituary dates the discovery to 1938, so the timing is a little tight.
As you say, they knew that the sun wasn’t powered by chemical fires (they wouldn’t burn for long enough), but perhaps I’m expecting Fermi to have assimilated new physics more quickly than is humanly possible.
Major nitpick: stars are examples of sustained nuclear fusion, not fission. The two are sustained by completely different mechanisms, so observation of nuclear fusion in stars doesn’t really tell us anything about the possibility of sustained nuclear fission.
Minor nitpick: it’s spelled volcanism, not vulcanism.
I’m looking at the outside view argument: matter is stable so we don’t expect to get anything nuclear.
But we look at the sun and see a power source with light atoms fusing to make medium-weight ones. We already know about the radioactive decay of heavy atoms, and the interesting new twist is the fission of heavy atoms resulting in medium-weight atoms and lots of energy. We know that it is medium-weight atoms that are most stable; there is surplus energy to be had both from light atoms and from heavy atoms. Can we actually do it with heavy atoms? It works elsewhere with light atoms, but that’s different. We basically know that it is up for grabs, and it is time to go to the laboratory and find out.
I fear that I have outed myself with my tragic spelling error. People will be able to guess that I’m a fan of Mr Spock from the planet Vulcan ;-(
Quoted for irony.
I’m not sure if pointing out my typo was your intent there, but you caused me to notice it, so I fixed it.
At least nine times out of ten in the history of physics, that heuristic probably did work. I agree that Fermi was wrong not to track down a perceived moderately small chance of a consequential breakthrough, but I can’t believe with any confidence that his initial estimate was too low without the power of hindsight.
Is there a good example of a conspiracy including physicists of the same prior fame as Rabi and Fermi (Szilard was then mostly an unknown) which was pursuing a ‘remote possibility’, of similar impact to nuclear weapons, that didn’t pan out? Obviously we would have a much lower chance of hearing about it especially on a cursory reading of history books, but the chance is not zero, there are allegedly many such occasions, and the absence of any such known cases is not insignificant evidence. Bolded to help broadcast the question to random readers, in case somebody who knows of an example runs across this comment a year later. The only thing I can think of offhand in possibly arguably the same reference class would be polywell fusion today, assuming it doesn’t pan out. There’s no known conspiracy there, but there’s a high-impact argument and Bussard previously working on the polywell.
Do you have a set of examples where it did pan out, or are we just talking about a description crafted to describe a particular event?
Restricting to physicists cuts us off from talking about other areas like bioweapons research, where indeed most of the “remote possibilities” of apocalyptic destruction don’t pan out. Computer scientists did not produce AI in the 20th century, and it was thought of as at least a remote possibility.
For physicists, effective nuclear missile defense using beam weapons and interceptors did not pan out.
Radioactivity was discovered via the “fluorescence is responsible for X-rays” idea, which did not pan out...
There’s a large number of fusion-related attempts that did not pan out at all. There’s the fission of lithium, which can’t be used for a chain reaction and is only used for making tritium. There’s hafnium triggering, which might or might not pan out (and all the other isomers), and so on.
For the most part, chasing or not chasing “wouldn’t it be neat if” scenarios doesn’t have much of an effect on science, it seems: Fermi would still inevitably have discovered secondary neutrons even if he wasn’t pursuing a chain reaction (provided someone else didn’t do it before him).
They were not hell-bent on obtaining grant money for a fission bomb no matter what. The first thing they had to do was to measure fission cross sections over the neutron spectrum. In the counterfactual world where U235 does not exist but they detected fission anyway (because high-energy neutrons do fission U238), they did the founding work for accelerator-driven fission, whose fission products treat cancer around the world (the radiation sources used in medicine would still be produced somehow). In that world, maybe you go on using it in some other sequence about how Szilard was wrong and Fermi dramatically overestimated, and how obviously the chance was far lower because they were talking of one isotope and not a single isotope works, and how stupid it is to think that fissioning and producing neutrons is enough for a chain reaction (the bar on that is a tad higher), etc. In that alternate world, today, maybe there’s even an enormous project trying to produce, in an accelerator or something more clever, enough plutonium to kick-start a breeder-reactor economy. Or maybe we got fusion power plants there, because a lot of effort was put into that (plus the Manhattan Project never happened, and some scientists perhaps didn’t get cancer). Edit: Or actually, a combination of the two could have happened at some point much later than 1945: a sub-unity tokamak which produces neutrons via fusion, to irradiate uranium-238 and breed enough plutonium to kick-start breeder reactors. Or maybe not, because it could have taken a long while there until someone measured the properties of plutonium. Either way, Fermi and Szilard end up looking awesome.
How about the original Pascal’s wager? It was made by a famed mathematician rather than a famed physicist, and it wasn’t a conspiracy, but it’s definitely in the same reference class.