If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilization, is ‘Have they discovered evolution yet?’
-- Richard Dawkins, The Selfish Gene
(I know it’s old and famous and classic, but this doesn’t make it any less precious, does it?)
Sometimes I suspect that wouldn’t even occur to them as a question. That evolution might turn out to be one of those things that it’s just assumed any race that had mastered agriculture MUST understand.
Because, well, how could a race use selective breeding, and NOT realise that evolution by natural selection occurs?
Realizing the far-reaching consequences of an idea is only easy in hindsight; otherwise I think it’s a matter of exceptional intelligence and/or luck. There’s an enormous difference between, on the one hand, noticing some limited selection and utilising it for practical benefits—despite having only a limited, if any, understanding of what you’re doing—and on the other hand realizing how life evolved into complexity from its simple beginnings, over a period of time that is difficult to grasp. Especially if the idea has to go up against well-entrenched, hostile memes.
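The gap between using selection and understanding it can be made concrete with a toy simulation (the breeding rule and all numbers here are illustrative assumptions, not a model of real husbandry): a “breeder” who only ever keeps the best individuals raises the trait generation after generation without holding any theory of inheritance at all.

```python
import random

def breed(population, keep_fraction=0.2, noise=0.5):
    """One generation: keep the top fraction, refill with noisy copies.

    The 'breeder' never models WHY offspring resemble parents; it only
    picks winners. Selection does the rest.
    """
    cutoff = max(1, int(len(population) * keep_fraction))
    survivors = sorted(population, reverse=True)[:cutoff]
    return [random.choice(survivors) + random.gauss(0, noise)
            for _ in range(len(population))]

random.seed(0)
pop = [random.gauss(0, 1) for _ in range(200)]  # trait values, arbitrary units
start = sum(pop) / len(pop)
for _ in range(30):
    pop = breed(pop)
end = sum(pop) / len(pop)
# The trait mean climbs steadily over generations, even though nothing in
# the procedure encodes an understanding of heredity or of deep time.
```

The point of the sketch is that the loop works whether or not its operator has ever heard of natural selection, which is exactly the position of early domesticators.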
I don’t know if this has a name, but there seems to exist a trope where (speaking broadly) superior beings are unable to understand the thinking and errors of less advanced beings. I first noticed it when reading H. Fast’s The First Men, where this exchange between a “Man Plus” child and a normal human occurs:
“Can you do something you disapprove of?”
“I am afraid I can. And do.”
“I don’t understand. Then why do you do it?”
It’s supposed to be about how the child is so advanced and undivided in her thinking, but to me it just means “well then you don’t understand how the human mind works”.
In short, I find this trope to be a fallacy. I’d expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
But what reason do we have to expect them to pick evolution, as opposed to the concept of money, or of extensive governments (governments governing more than 10,000 people at once), or of written language, or of the internet, or of radio communication, or of fillangerisation, as their obvious sign of advancement?
Just because humans picked up on evolution far later than we should have, doesn’t mean that evolution is what they’ll expect to be the late discovery. They might equally expect that the internet wouldn’t be invented until the equivalent tech level of 2150. Or they might consider moveable type to be the symbol of a masterful race.
Just because they’ll likely be able to understand why we were late to it, doesn’t mean it would occur to them before looking at us. It’s easy to explain why we came to it when we did, once you know that that’s what happened, but if you were from a society that realised evolution [not necessarily common descent] existed as they were domesticating animals, would you really think of understanding evolution as a sign of advancement?
EDIT: IOW: I’ve upvoted your disagreement with the “advanced people can’t understand the simpler ways” trope; but I stand by my original point: they wouldn’t EXPECT evolution to be undiscovered.
I suspect that the intent of the original quote is that they’ll assess us by our curiosity towards, and effectiveness in discovering, our origins. As Dawkins is a biologist, he is implying that evolution by natural selection is an important part of it, which of course is true. An astronomer or cosmologist might consider a theory on the origins of the universe itself to be more important, a biochemist might consider abiogenesis to be the key, and so on.
Personally, I can see where he’s coming from, though I can’t say I feel like I know enough about the evolution of intelligence to come up with a valid argument as to whether an alien species would consider this to be a good metric to evaluate us with. One could argue that interest in oneself is an important aspect of intelligence, and scientific enquiry important to the development of space travel, and so a species capable of travelling to us would have those qualities and look for them in the creatures they found.
This is my first time posting here, so I’m probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
I wouldn’t consider anything you’ve said here stupid, in fact I would agree with it.
I, personally, see it as a failure of imagination on the part of Dawkins that he considers the issue he personally finds most important to be the one that alien intelligences will find most important, but you are right to point out what his likely reasoning is.
I think you’re interpreting the quote too literally; it’s not a statement about some alien intelligences but an allegory to communicate just how important the science of evolution is.
Another chain of reasoning I have seen people use to reach similar conclusions is that the aliens are looking for species that have outgrown their sense of their own special importance to the universe. Aliens checking for that would be likely to ask about evolution, or possibly about cosmologies that don’t have the home planet at the center of the universe. However, I don’t think a sense of specialness is one of the main things aliens will care about.
In short, I find this trope to be a fallacy. I’d expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
Have you never looked at something someone does and asked yourself, “How can they be so stupid?”
It’s not as though you literally cannot conceive of such limitations; just that you cannot empathize with them.
It’s anthropomorphism to assume that it would occur to advanced aliens to try to understand us empathetically rather than causally/technically in the first place, though.
Anthropomorphism? I think not. All known organisms that think have emotions. Advanced animals demonstrate empathy.
Now, certainly it is possible that an advanced civilization might arise that is non-sentient, and thus incapable of modeling others’ psyches empathetically. I will admit to the possibility of anthropocentrism in my statements here; that is, in my inability to conceive of a mechanism whereby technological intelligence could arise without passing through a route that produces intelligences sufficiently like our own as to possess the characteristic of ‘empathy’.
It’s one thing to postulate counter-factuals; it’s another altogether to actually attempt to legitimize them with sound reasoning.
Do you have any good evidence that this assertion applies to Cephalopods? I.e., either that they don’t think or that they have emotions. (Not a rhetorical question; I know about them only enough to realize that I don’t know.)
Do you have any good evidence that this assertion applies to Cephalopods?
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship, show predator/prey responses, and respond to simple irritants with aggression, there’s no good reason to assume they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model “animal intelligence” in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Some animals are intelligent beyond ‘simple’ animal intelligence; but those are the exception rather than the norm.)
I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).
But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about the word than I am about “intelligence”. But it also just occurred to me that between people emotions seem much easier to fake than intelligence, which stated the other way around means we’re much worse at detecting them.)
Also, the reason I specifically asked about Cephalopods is that they’re pretty close to as far away from humans as they can be and still be animals; they’re so far away we can’t even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it’s pretty safe to assume that any high level neuronal structures have evolved completely separately between us and cephalopods.
Which is why I’m reluctant to just assume things like emotions, which in my opinion are harder to prove.
On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neuronal networks, and uplifting.)
While I think you’re right to point out that the uncomprehending-superior-beings trope is unrealistic, I don’t think Dawkins was generalizing from fictional evidence; his quote reads more to me like plain old anthropomorphism, along with a good slice of self-serving bias relating to the importance of his own work.
A point similar to your first one shows up occasionally in fiction too, incidentally; there’s a semi-common sci-fi trope that has alien species achieving interstellar travel or some other advanced technology by way of a very simple and obvious-in-retrospect process that just happened never to occur to any human scientist. So culture’s not completely blind to the idea. Both tropes basically exist to serve narrative purposes, though, and usually obviously polemic ones; Dawkins isn’t any kind of extra-rational superhuman, but I wouldn’t expect him to unwittingly parrot a device that transparent out of its original context.
The British agricultural revolution involved animal breeding starting in about 1750.
Darwin didn’t publish Origin of Species until 1859, so in reality it took about 100 years for the other shoe to drop.
100 years is nothing in the evolution of a civilization though. The time between agricultural revolution and the discovery of evolution is not a typical period in the history of humanity.
Selective breeding isn’t necessarily the same as artificial selection, however. The taming of dogs and cats was largely considered accidental; the neotenous animals were more human-friendly and thus able to access greater amounts of food supplies from humans until eventually they could directly interact, whereupon (at least in dogs) “usefulness” became a valued trait.
There wasn’t purposefulness in this; people just fed the better dogs more and disliked the ‘worse’ dogs. It wasn’t until the mid-1700′s that dog ‘breeds’ became a concept.
There wasn’t purposefulness in this; people just fed the better dogs more and disliked the ‘worse’ dogs. It wasn’t until the mid-1700′s that dog ‘breeds’ became a concept.
There were certainly attempts to breed specific traits earlier than that. But they were hindered by a poor understanding of inheritance. For example, in the Bible, Jacob tried to breed speckled cattle by putting speckled rods in front of the cattle when they were trying to mate. Problems with understanding how genetics works at a basic level were an issue even much later, and some of them still impact what are officially considered purebreds now.
I think that deliberate breeding of stronger horses dates back prior to the 1700s, at least to the early Middle Ages, but I don’t have a source for that.
Absolutely. Even the dog-breeding practitioners were unaware of how inheritance operates; that didn’t come about until Gregor Mendel. We really do take for granted the vast amount of understanding about the topic we are inculcated with simply through cultural osmosis.
If I were an intelligent creature from space visiting Earth, I’d probably start by asking, “Do they have anything that can shoot us out of orbit?” That’s just me though.
but this doesn’t make it any less precious, does it?
I wouldn’t say it has much preciousness to begin with. It is nearly nonsensical cheering. The sort of thing I don’t like to associate myself with at all.
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph. Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can’t discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I’ll still bet evolution has a much larger variance in its discovery time than a lot of other things.
Seems dependent on substitute energy availability and military technology.
the expansion of the universe
There seems to be significant variance in how much humans care about such things, and achievement depends significantly on interest. Would aliens care at all about this?
If you want to pick one question to ask
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
I mean we’d do more than twice as well with two questions than with one, and more than twice as well with three than with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
when you have only two data points.
I have zero data points, I’m comparing hypothetical situations in which I ask aliens one or more questions about their technology. (It seems Dawkins’ scenario got inverted somewhere along the way, but I don’t think that makes any difference.)
I mean we’d do more than twice as well with two questions than with one, and more than twice as well with three than with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
That’s actually a claim of superexponential growth, but how you said it sounds ok. I’m actually not sure that you can get superexponential growth in a meaningful sense. If you have n bits of data you can’t do better than having all n bits be completely independent. So if one is measuring information content in a Shannon sense one can’t do better than exponential.
Edit: If this is what you want to say I’d say something like “As the number of questions asked goes up the information level increases exponentially” or use “superexponentially” if you mean that.
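The Shannon point can be sketched concretely: n independent yes/no answers distinguish at most 2**n civilization-profiles, and correlations between answers shrink that number, which is the usual source of diminishing returns. A minimal illustration (the three “questions” and the particular correlation are invented for the example):

```python
from itertools import product

def distinguishable(profiles):
    """Count the distinct answer-patterns a set of civilizations shows."""
    return len(set(profiles))

# Each profile is (has_writing, has_computers, has_flight) as 0/1 answers.
# Independent answers: all 2**3 = 8 patterns are possible.
independent = list(product([0, 1], repeat=3))
print(distinguishable(independent))  # 8

# Correlated answers: suppose "has computers" implies "has writing".
# Two patterns become impossible, so three questions now only separate
# six profiles, i.e. each extra question buys less than a full bit.
correlated = [p for p in independent if not (p[1] and not p[0])]
print(distinguishable(correlated))  # 6
```

With fully independent answers the count grows exactly exponentially in the number of questions; no questioning scheme can beat that, which is JoshuaZ’s Shannon bound.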
My best guess for each individual achievement gets better with each other achievement I learn about, as they are not independent.
So if one is measuring information content in a Shannon sense one can’t do better than exponential.
I was trying to get at the legitimacy of summarizing the aggregate of somewhat correlated achievements as a “level of civilization”. Describing a civilization as having a “low/medium/high/etc. level of civilization” in relation to others depends on either its technological advances being correlated similarly or establishing some subset of them as especially important. I don’t think the latter can be done much, which leaves inquiring about the former.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don’t have a level of technology.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don’t have a level of technology.
Well, what does no medicine mean? A lot of medicine would work fine without understanding genetics in detail. Blood donors and antibiotics are both examples. Also do normal computers not count as technology? Why not? Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them? I think not. For one, they might have math that we don’t have. They might have other technologies that we lack (for example, better superconductors). You may be buying into a narrative of technological levels that isn’t necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense. For example, one-time pads arose in the late 19th century, but would have made sense as a useful system on telegraphs 20 or 30 years before. Another example: high-temperature superconductors (that is, substances that are superconductors at liquid nitrogen temperatures) were discovered in the mid-1980s, but the basic constructions could have been made twenty years before.
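As an aside on how little machinery a one-time pad actually needs: telegraph-era pads used modular addition on letters, and the byte-level equivalent is a single XOR, so the technique really was within reach decades before it appeared. A minimal sketch (`secrets` stands in for a truly random, never-reused pad):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with a pad byte. The pad must be random,
    secret, at least message-length, and never reused."""
    assert len(key) >= len(data), "pad must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # the one-time pad

ciphertext = otp(message, key)
# XOR is its own inverse, so applying the same pad again decrypts:
assert otp(ciphertext, key) == message
```

Nothing here depends on any post-telegraph mathematics, which is the point being made above about technologies arriving later than they had to.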
No blood donors (if they have blood), no antibiotics (if they have bacteria), etc.
Also do normal computers not count as technology?
Of course they do.
Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them?
We could learn a lot from them, but it would be wrong to say “The aliens have a technological level less than ours”, “The aliens have a technological level roughly equal to ours”, “The aliens have a technological level greater than ours”, or “The aliens have a technological level, for by technological levels we can most helpfully and meaningfully divide possible-civilizationspace”.
You may be buying into a narrative of technological levels that isn’t necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense.
My point is that there are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense, so asking about what technologies have arisen isn’t as informative as one might intuitively suspect. It’s so uninformative that the idea of levels of technology is in danger of losing coherence as a concept absent confirmation from the alien society that we can analogize from our society to theirs, confirmation that requires multiple data points.
I didn’t downvote it, but if you notice, JoshuaZ concluded my use of “exponential” was “ok”, as what I actually meant was not “a lot” but rather what is technically known as “superexponential growth”.
“Even less justification” has some harsh connotations.
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
Very much agreed.
I also agree with:
I, personally, see it as a failure of imagination on the part of Dawkins that he considers the issue he personally finds most important to be the one that alien intelligences will find most important,
I agree with the general idea of:
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph.
though I think it is hard to correctly choose according to this criterion. I’m skeptical that digital computers would really pass this test. Considering the medium that we are all using to discuss this, we might be a bit biased in our views of their significance. (As a former chemist, I’m biased towards picking the periodic table—but I know I’m not making a neutral assessment here.)
Nuclear energy seems like a decent choice, from the dependency graph point of view.
A civilization which is able to use either fission or fusion has to pass a couple of fairly stringent tests. To detect the relevant nuclear reactions in the first place, they need to detect MeV particles, which aren’t things that everyday chemical or biological processes produce. To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Is it possible the right isotopes might be lying around? Like here, but more concentrated and dispersed?
Is it possible the right isotopes might be lying around?
Yes, good point, if intelligent life evolved faster on their planet. The relevant timing is how long it took, after the supernova that generated the uranium, for the alien civilization to arise (since that sets the 238U/235U ratio).
I’m confused. I thought a reaction needed a quantity of 235U in an area, and that smaller areas needed more 235U to sustain a chain reaction. Wouldn’t very small pieces of relatively 235U rich uranium be fairly stable? One could then put them together with no technological requirements at all.
You are quite correct, small pieces of 235U are stable. The difference is that low concentrations of 235U in natural uranium (because of its faster decay than 238U) make it harder to get to critical mass, even with chemically pure (but not isotopically pure) uranium. IIRC, reactor grade is around 5% 235U, while natural uranium is 0.7%. IIRC, pure natural uranium metal, at least by itself, doesn’t have enough 235U to sustain a chain reaction, even in a large mass. (But I vaguely recall that the original reactor experiment with just the right spacing of uranium metal lumps and graphite moderator may have been natural uranium—I need to check this… short of time right now.)
(I’m still not quite sure—Chicago Pile-1 is documented here, but the web page described the fuel as “uranium pellets”. I think they mean natural uranium, in which case I withdraw my statement that isotope separation is a prerequisite for nuclear power.)
I vaguely recall that the original reactor experiment with just the right spacing of uranium metal lumps and graphite moderator may have been natural uranium
I think this is correct but finding a source which says that seems to be tough. However, Wikipedia does explicitly confirm that the successor to CP1 did initially use unenriched uranium.
Edit: This article (pdf) seems to confirm it. They couldn’t even use pure uranium but had to use uranium oxide. No mention of any sort of enrichment is made.
Yes, CP-1 used natural uranium (~0.7% 235U) and ultra-high-purity graphite. Within just a few hundred million years more, that would become impossible to attain without isotope separation, on top of the billions of years since the formation of the uranium in the star. Conversely, 1.7 billion years ago, it occurred naturally, with regular water to slow down neutrons.
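The timing argument is easy to check on the back of an envelope. Using the published half-lives (235U about 704 Myr, 238U about 4468 Myr) and today’s ~0.72% natural abundance, one can run the decay backwards; this is a rough sketch, not a precise geochemical model:

```python
# Because 235U decays faster than 238U, the natural enrichment of
# uranium falls over time. Half-lives and present-day abundance are
# published values; the rest is plain exponential decay.
HALF_LIFE_U235 = 704.0    # Myr
HALF_LIFE_U238 = 4468.0   # Myr
U235_FRACTION_NOW = 0.0072

def u235_fraction(myr_ago: float) -> float:
    """Fraction of uranium that was 235U, `myr_ago` million years ago."""
    n235 = U235_FRACTION_NOW * 2 ** (myr_ago / HALF_LIFE_U235)
    n238 = (1 - U235_FRACTION_NOW) * 2 ** (myr_ago / HALF_LIFE_U238)
    return n235 / (n235 + n238)

# Around the time of the natural reactor, ~1700 Myr ago, natural uranium
# was close to modern reactor grade, so no isotope separation was needed:
print(round(u235_fraction(1700) * 100, 1))  # 2.9 (percent)
```

The same function run forward in time shows the fraction continuing to drop, which is why a civilization arising a few hundred million years later than us would find natural-uranium reactors out of reach.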
IIRC, pure natural uranium metal, at least by itself, doesn’t have enough 235U to sustain a chain reaction, even in a large mass.
What is natural is something that I, without background other than a history of nuclear weapons class for my history degree, was/am not confident wouldn’t vary from solar system to solar system.
The natural reactor ended up with less 235U than normal decayed uranium, because some of the fuel had been spent. I assume that it began with either an unusual concentration of regular uranium (or some other configuration of elements that slowed neutrons or otherwise facilitated a reaction) or that the uranium there was unusually rich in 235U. If it was the latter, I don’t know the limits for how rich in 235U uranium could be at the time of seeding into a planet, but no matter the richness, having small enough pieces would preserve it for future beings. Richness alone wouldn’t cause a natural reaction, so to the extent richness can vary, it can make nuclear technology easy.
If the natural reactor had average uranium, and uranium on planets wouldn’t be particularly more 235U rich than ours, then nuclear technology’s ease would be dependent on life arising quickly relative to ours, but not fantastically so, as you say.
Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can’t discover those without discovering a lot of other things first.
Genetic code might likely vary. While it isn’t implausible that other life would use DNA for its genetic storage it doesn’t seem to be that likely. It seems extremely unlikely that DNA would be organized in the same triplet codon system that life on Earth uses.
Heavier than air flight is also a function of what sort of planet you are on. If Earth had slightly weaker or stronger gravity, the difficulty of this achievement would change a lot. Also, if intelligent life had arisen from winged species, one could see this as impacting how much they study aerodynamics and the like. One could conceive of that going either way (say, having a very intuitive understanding of how to fly but considering it to be incredibly difficult to make an Artificial Flyer, or the opposite, using that intuition to easily understand what would need to be done in some form).
Other than that, your argument seems to be a good one.
I wonder if there’s any way to estimate how hard it is for an intelligent species to think of evolution. It’s a very abstract theory, and I think it’s plausible that intelligent species could be significantly better or worse than we are at abstract thought. I have no idea where the middle of the bell curve (if it’s a bell curve at all) would be.
Because, well, how could a race use selective breeding, and NOT realise that evolution by natural selection occurs?
Easily.
In short, I find this trope to be a fallacy. I’d expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
Yeah. This was put very well by Fyodor Urnov, in an MCB140 lecture:
“What is blindingly obvious to us was not obvious to geniuses of ages past.”
I think the lecture series is available on iTunes.
This is my first time posting here, so I’m probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
Welcome to lesswrong.
I wouldn’t consider anything you’ve said here stupid, in fact I would agree with it.
I, personally, see it as a failure of imagination on the part of Dawkin’s, that he considers the issue he personally finds most important to be that which alien intelligences will find most important, but you are right to point out what his likely reasoning is.
I think you’re interpreting the quote too literally, it’s not a statement about some alien intelligences but an allegory to communicate just how important the science of evolution is.
Another chain of reasoning I have seen people use to reach similar conclusions is that the aliens are looking for species that have outgrown their sense of their own special importance to the universe. Aliens checking for that would be likely to ask about evolution, or possibly about cosmologies that don’t have the home planet at the center of the universe. However, I don’t think a sense of specialness is one of the main things aliens will care about.
Have you never looked at something someone does and asked yourself, “How can they be so stupid?”
It’s not as though you literally cannot conceive of such limitations; just that you cannot empathize with them.
It’s anthropomorphism to assume that it would occur to advanced aliens to try to understand us empathetically rather than causally/technically in the first place, though.
Anthropomorphism? I think not. All known organisms that think have emotions. Advanced animals demonstrate empathy.
Now, certainly it might be possible that an advanced civilization could arise that is non-sentient, and thus incapable of modeling others’ psyches empathetically. I will admit to the possibility of anthropocentrism in my statements here; that is, in my inability to conceive of a mechanism whereby technological intelligence could arise without passing through a route that produces intelligences sufficiently like our own as to possess the characteristic of ‘empathy’.
It’s one thing to postulate counter-factuals; it’s another altogether to actually attempt to legitimize them with sound reasoning.
Do you have any good evidence that this assertion applies to Cephalopods? I.e., either that they don’t think or that they have emotions. (Not a rhetorical question; I know about them only enough to realize that I don’t know.)
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship, predator/prey responses, and aggressive reactions to simple irritants, there’s no good reason to assume they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model “animal intelligence” in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Many animals are intelligent beyond this ‘simple’ animal intelligence, but those are the exception rather than the norm.)
I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).
But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about the word than I am about “intelligence”. But it also just occurred to me that between people emotions seem much easier to fake than intelligence, which stated the other way around means we’re much worse at detecting them.)
Also, the reason I specifically asked about Cephalopods is that they’re pretty close to as far away from humans as they can be and still be animals; they’re so far away we can’t even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it’s pretty safe to assume that any high level neuronal structures have evolved completely separately between us and cephalopods.
Which is why I’m reluctant to just assume things like emotions, which in my opinion are harder to prove.
On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neuronal networks, and uplifting.)
While I think you’re right to point out that the uncomprehending-superior-beings trope is unrealistic, I don’t think Dawkins was generalizing from fictional evidence; his quote reads more to me like plain old anthropomorphism, along with a good slice of self-serving bias relating to the importance of his own work.
A point similar to your first one shows up occasionally in fiction too, incidentally; there’s a semi-common sci-fi trope that has alien species achieving interstellar travel or some other advanced technology by way of a very simple and obvious-in-retrospect process that just happened never to occur to any human scientist. So culture’s not completely blind to the idea. Both tropes basically exist to serve narrative purposes, though, and usually obviously polemic ones; Dawkins isn’t any kind of extra-rational superhuman, but I wouldn’t expect him to unwittingly parrot a device that transparent out of its original context.
The British agricultural revolution involved animal breeding starting in about 1750. Darwin didn’t publish Origin of Species until 1859, so in reality it took about 100 years for the other shoe to drop.
100 years is nothing in the evolution of a civilization though. The time between agricultural revolution and the discovery of evolution is not a typical period in the history of humanity.
Selective breeding had been around much longer than that.
Selective breeding isn’t necessarily the same as artificial selection, however. The taming of dogs and cats is largely considered to have been accidental; the more neotenous animals were more human-friendly and thus able to access more of the human food supply, until eventually they could interact with humans directly, whereupon (at least in dogs) “usefulness” became a valued trait.
There wasn’t purposefulness in this; people just fed the better dogs more and disliked the ‘worse’ dogs. It wasn’t until the mid-1700s that dog ‘breeds’ became a concept.
There were certainly attempts to breed for specific traits earlier than that, but they were hindered by a poor understanding of inheritance. For example, in the Bible, Jacob tries to breed speckled cattle by putting speckled rods in front of the cattle while they mate. Failure to understand how genetics works at even a basic level was an issue much later as well, and some of those misunderstandings still affect what are officially considered purebreds now.
I think that deliberate breeding of stronger horses dates back prior to the 1700s, at least to the early Middle Ages, but I don’t have a source for that.
Absolutely. Even the dog-breeding practitioners were unaware of how inheritance operates; that understanding didn’t come about until Gregor Mendel. We really do take for granted the vast amount of understanding of the topic we are inculcated with simply through cultural osmosis.
If I were an intelligent creature from space visiting Earth, I’d probably start by asking, “Do they have anything that can shoot us out of orbit?” That’s just me, though.
I wouldn’t say it has much preciousness to begin with. It is nearly nonsensical cheering. The sort of thing I don’t like to associate myself with at all.
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph. Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can’t discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I’ll still bet evolution has a much larger variance in its discovery time than a lot of other things.
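The dependency-graph criterion can be made concrete with a toy sketch. The graph below is an illustrative assumption of mine, not a real history of science: rank each candidate question by how many transitive prerequisites the discovery has.

```python
# Toy prerequisite graph (illustrative assumptions, not real history of science):
PREREQS = {
    "fire": [],
    "metallurgy": ["fire"],
    "selective breeding": [],
    "evolution": ["selective breeding"],
    "chemistry": ["fire", "metallurgy"],
    "electricity": ["metallurgy"],
    "radioactivity": ["chemistry", "electricity"],
    "isotope separation": ["radioactivity", "chemistry"],
    "nuclear energy": ["isotope separation", "radioactivity"],
}

def ancestors(node, graph=PREREQS):
    """All transitive prerequisites of a discovery."""
    seen, stack = set(), list(graph[node])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(graph[p])
    return seen

# A well-connected discovery is a better single probe: confirming it
# implicitly confirms everything beneath it in the graph.
ranked = sorted(PREREQS, key=lambda n: len(ancestors(n)), reverse=True)
print(ranked[0])  # "nuclear energy"; "evolution" sits near the bottom
```

In this made-up graph, “nuclear energy” dominates with six transitive prerequisites while “evolution” has only one, which is exactly the sense in which evolution has high variance in its discovery time.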
This is a good one. I like it.
Seems dependent on substitute energy availability and military technology.
There seems to be significant variance in how much humans care about such things, and achievement depends significantly on interest. Would aliens care at all about this?
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
Cringe. Please don’t use “exponentially” to mean “a lot” when you have only two data points.
I mean we’d do more than twice as well with two questions as with one, and more than twice as well with three as with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
I have zero data points, I’m comparing hypothetical situations in which I ask aliens one or more questions about their technology. (It seems Dawkins’ scenario got inverted somewhere along the way, but I don’t think that makes any difference.)
That’s actually a claim of superexponential growth, but how you said it sounds ok. I’m actually not sure that you can get superexponential growth in a meaningful sense. If you have n bits of data you can’t do better than having all n bits be completely independent. So if one is measuring information content in a Shannon sense one can’t do better than exponential.
Edit: If this is what you want to say I’d say something like “As the number of questions asked goes up the information level increases exponentially” or use “superexponentially” if you mean that.
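The Shannon point can be illustrated with a toy calculation (the probabilities below are made up for the example): n independent yes/no answers carry at most n bits, and correlated answers carry strictly less, so growth in information can never beat that ceiling.

```python
from math import log2

def entropy(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Two yes/no questions. If the answers are independent and uniform, the
# joint distribution has four equally likely outcomes: 2 bits, the maximum.
independent = [0.25, 0.25, 0.25, 0.25]

# If the second answer is strongly correlated with the first (as related
# technologies are), the joint entropy drops below 2 bits.
correlated = [0.45, 0.05, 0.05, 0.45]

print(entropy(independent))  # 2.0
print(entropy(correlated))   # strictly less than 2
```

So the best case is one full bit per question, i.e. the number of distinguishable civilizations grows at most exponentially in the number of questions, never superexponentially.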
My best guess for each individual achievement gets better each other achievement I learn about, as they are not independent.
I was trying to get at the legitimacy of summarizing the aggregate of somewhat correlated achievements as a “level of civilization”. Describing a civilization as having a “low/medium/high/etc. level of civilization” in relation to others depends on either its technological advances being correlated similarly or establishing some subset of them as especially important. I don’t think the latter can be done much, which leaves inquiring about the former.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don’t have a level of technology.
Well, what does no medicine mean? A lot of medicine would work fine without understanding genetics in detail; blood donation and antibiotics are both examples. Also, do normal computers not count as technology? Why not? Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them? I think not. For one, they might have math that we don’t have. They might have other technologies that we lack (for example, better superconductors). You may be buying into a narrative of technological levels that isn’t necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they would have made sense. For example, one-time pads arose in the late 19th century, but would have been a useful system on telegraphs 20 or 30 years before. Similarly, high-temperature superconductors (that is, substances that superconduct at liquid-nitrogen temperatures) were discovered in the mid-1980s, but the basic constructions could have been made twenty years earlier.
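The one-time pad really is simple enough for the telegraph era; here’s a minimal sketch, with modern byte strings standing in for telegraph characters:

```python
import secrets

# One-time pad: XOR the message with a truly random key at least as long
# as the message. Used once, this is information-theoretically secure,
# and nothing in it is beyond 19th-century telegraphy plus a key courier.
def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))
ciphertext = otp(message, key)

# XOR is its own inverse, so the same function decrypts.
assert otp(ciphertext, key) == message
```

All the hard parts (generating and distributing enough key material) were logistical, not technological, which is the point about late-arriving technologies.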
No blood donors (if they have blood), no antibiotics (if they have bacteria), etc.
Of course they do.
We could learn a lot from them, but it would be wrong to say “The aliens have a technological level less than ours”, “The aliens have a technological level roughly equal to ours”, “The aliens have a technological level greater than ours”, or “The aliens have a technological level, for by technological levels we can most helpfully and meaningfully divide possible-civilizationspace”.
My point is that there are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense, so asking about what technologies have arisen isn’t as informative as one might intuitively suspect. It’s so uninformative that the idea of levels of technology is in danger of losing coherence as a concept absent confirmation from the alien society that we can analogize from our society to theirs, confirmation that requires multiple data points.
Ah, I see. Yes that makes sense. No substantial disagreement then.
I heard a calculus teacher do this with even less justification a few days ago.
EDIT: was this downvoted for irrelevancy, or some other reason?
I didn’t downvote it, but if you notice, JoshuaZ concluded my use of “exponential” was “ok”, as what I actually meant was not “a lot” but rather what is technically known as “superexponential growth”.
“Even less justification” has some harsh connotations.
Very much agreed.
I also agree with:
I agree with the general idea of:
though I think it is hard to correctly choose according to this criterion. I’m skeptical that digital computers would really pass this test. Considering the medium that we are all using to discuss this, we might be a bit biased in our views of their significance. (as a former chemist, I’m biased towards picking the periodic table—but I know I’m not making a neutral assessment here.)
Nuclear energy seems like a decent choice, from the dependency graph point of view. A civilization which is able to use either fission or fusion has to pass a couple of fairly stringent tests. To detect the relevant nuclear reactions in the first place, they need to detect Mev particles, which aren’t things that everyday chemical or biological processes produce. To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Is it possible the right isotopes might be lying around? Like here, but more concentrated and dispersed?
Yes, good point, if intelligent life evolved faster on their planet. The relevant timing is how long it took after the supernova that generated the uranium for the alien civilization to arise. (since that sets the 238U/235U ratio).
I’m confused. I thought a reaction needed a quantity of 235U in an area, and that smaller areas needed more 235U to sustain a chain reaction. Wouldn’t very small pieces of relatively 235U rich uranium be fairly stable? One could then put them together with no technological requirements at all.
You are quite correct, small pieces of 235U are stable. The difference is that the low concentration of 235U in natural uranium (because of its faster decay relative to 238U) makes it harder to get to critical mass, even with chemically pure (but not isotopically pure) uranium. IIRC, reactor grade is around 5% 235U, while natural uranium is 0.7%. IIRC, pure natural uranium metal, at least by itself, doesn’t have enough 235U to sustain a chain reaction, even in a large mass. (But I vaguely recall that the original reactor experiment, with just the right spacing of uranium metal lumps and graphite moderator, may have used natural uranium; I need to check this… I’m still not quite sure: Chicago Pile-1 is documented here, but the web page describes the fuel as “uranium pellets”. I think they mean natural uranium, in which case I withdraw my statement that isotope separation is a prerequisite for nuclear power.)
I think this is correct but finding a source which says that seems to be tough. However, Wikipedia does explicitly confirm that the successor to CP1 did initially use unenriched uranium.
Edit: This article (pdf) seems to confirm it. They couldn’t even use pure uranium but had to use uranium oxide. No mention of any sort of enrichment is made.
Yes, CP-1 used natural uranium (~0.7% 235U) and ultra-high-purity graphite. That would become impossible without isotope separation in just a few hundred million more years, on top of the billions since the uranium formed in a supernova. Conversely, 1.7 billion years ago a chain reaction occurred naturally, with ordinary water to slow down the neutrons.
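The timing can be checked on the back of an envelope from the standard half-lives (0.704 Gyr for 235U, 4.468 Gyr for 238U) and today’s ~0.72% natural abundance:

```python
HALF_235 = 0.704e9   # U-235 half-life, years
HALF_238 = 4.468e9   # U-238 half-life, years

def u235_fraction(years_ago, today=0.0072):
    """Atom fraction of U-235 in natural uranium `years_ago` years back.

    Both isotopes are scaled up by their decay since then; the faster-decaying
    235U was relatively more abundant the further back you go.
    """
    n235 = today * 2 ** (years_ago / HALF_235)
    n238 = (1 - today) * 2 ** (years_ago / HALF_238)
    return n235 / (n235 + n238)

# Around the time of the Oklo natural reactor, ~1.7 Gyr ago:
print(round(100 * u235_fraction(1.7e9), 1))  # ~2.9 percent, i.e. reactor-grade
```

So a civilization that arises a billion or two years earlier in its planet’s history, relative to when its uranium formed, finds reactor-grade fuel lying in the ground with no enrichment technology needed.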
Fusion is more interesting.
What counts as natural is something that I (with no background other than a history-of-nuclear-weapons class for my history degree) am not confident wouldn’t vary from solar system to solar system.
The natural reactor ended up with less 235U than ordinary decayed uranium, because some of the fuel had been spent. I assume that it began with either an unusual concentration of ordinary uranium (or some other configuration of elements that slowed neutrons or otherwise facilitated a reaction), or that the uranium there was unusually rich in 235U. If the latter, I don’t know the limits on how rich in 235U uranium could be at the time of seeding into a planet, but no matter the richness, having small enough pieces would preserve it for future beings. Richness alone wouldn’t cause a natural reaction, so to the extent richness can vary, it can make nuclear technology easy.
If the natural reactor had average uranium, and uranium on planets wouldn’t be particularly more 235U rich than ours, then nuclear technology’s ease would be dependent on life arising quickly relative to ours, but not fantastically so, as you say.
The genetic code might well vary. While it isn’t implausible that other life would use DNA for its genetic storage, that doesn’t seem especially likely, and it seems extremely unlikely that alien DNA would be organized in the same triplet-codon system that life on Earth uses.
Heavier-than-air flight is also a function of what sort of planet you are on. If Earth had slightly weaker or stronger gravity, the difficulty of this achievement would change a lot. Also, if intelligent life had arisen from a winged species, one could see that affecting how much they study aerodynamics and the like. One could conceive of it going either way (say, having a very intuitive understanding of how to fly but considering it incredibly difficult to build an artificial flyer, or the opposite: using that intuition to easily understand what would need to be done).
Other than that, your argument seems to be a good one.
I wonder if there’s any way to estimate how hard it is for an intelligent species to think of evolution. It’s a very abstract theory, and I think it’s plausible that intelligent species could be significantly better or worse than we are at abstract thought. I have no idea where the middle of the bell curve (if it’s a bell curve at all) would be.