The (short) case for predicting what Aliens value
The case
Most of what we care about in the future – i.e., (dis)value – comes, in expectation, from futures where humanity develops artificial general intelligence (AGI) and colonizes many other stars (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016).
Hanson (2021) and Cook (2022) estimate that, if humanity expands, we should expect to eventually “meet” (grabby) alien AGIs/civilizations – just “AGIs” from here on – and that, if humanity doesn’t expand, our corner of the universe will eventually be colonized by aliens anyway.
This raises the following three crucial questions:
What would happen once/if our respective AGIs meet? Values handshakes (i.e., cooperation) or conflict? Of what forms?
Do we have good reasons to think the scenario where our corner of the universe is colonized by humanity is better than that where it is colonized by aliens? Should we update on the importance of reducing existential risks?[1]
Considering the fact that aliens might fill our corner of the universe with things we (dis)value, does humanity have an (inter-civilizational) comparative advantage in focusing on something the grabby aliens will neglect?
The answers to these three questions depend heavily on the values we expect the grabby aliens our AGI will meet to have. For instance, if we expect grabby alien AGIs to care about suffering more than our AGI does, then AGI conflict that generates significant suffering is relatively unlikely, and the importance of reducing X-risks depends on whether you prefer the aliens’ degree of concern for suffering or our AGI’s.
Therefore, figuring out what aliens value (or Alien Values[2] Research) appears quite important,[3] although absolutely no one is working on it[4] as far as I know.
Is it because it isn’t tractable? Although I see how it might seem so, I don’t think it is. First, thinking about the values of grabby aliens doesn’t strike me as harder than modeling their spread (see, e.g., Hanson 2021 and Cook 2022 for work on the latter). My EA Forum sequence What values will control the Future? is an instance of how simple observations/reasoning can make us significantly narrow down the range of values we should expect grabby aliens to have. Second, there seems to be – outside of the Effective Altruism sphere – a whole field of research focused on thinking about the evolution of aliens (most of which I’m not familiar with, yet), and there are already quite interesting takeaways (see, e.g., Kershenbaum 2020; Todd and Miller 2017). Although the moral preferences of aliens are by no means the focus so far, this is evidence that figuring stuff out about aliens is feasible, and there might even be potential for making Alien Values Research part of people’s alien-related research agenda.
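To make the “modeling their spread” point concrete: even a toy model gives you quantitative handles on questions like “when would expanding civilizations meet?”. The sketch below is a made-up 1-D Monte Carlo, not Hanson’s or Cook’s actual model; the ring geometry, birth rate, expansion speed, and helper names (`first_contact_time`, `estimate_mean_contact`) are all my own invented illustration.

```python
import random

# Toy 1-D "grabby aliens" Monte Carlo. This is NOT Hanson's or Cook's
# actual model; it only illustrates that a handful of simple assumptions
# already lets you reason quantitatively about expanding civilizations.
# All parameter values below are invented for illustration.

RING = 1_000.0      # circumference of a 1-D ring universe (light-years)
RATE = 0.01         # expected civilization births per unit time
V = 0.5             # expansion speed as a fraction of light speed
HORIZON = 10_000.0  # how far into the future we simulate

def first_contact_time(rng: random.Random) -> float:
    """When does the frontier of a civilization born at time 0,
    position 0 first touch another civilization's frontier?"""
    t = 0.0
    best = float("inf")
    while True:
        # Births form a Poisson process in time, uniform on the ring.
        t += rng.expovariate(RATE)
        if t >= HORIZON:
            return best
        pos = rng.uniform(0.0, RING)
        dist = min(pos, RING - pos)  # shortest arc between birthplaces
        if dist <= V * t:
            continue  # birthplace already inside our volume: suppressed
        # Our frontier covers V*s at time s; theirs covers V*(s - t).
        # The frontiers meet when V*s + V*(s - t) = dist.
        best = min(best, (dist + V * t) / (2 * V))

def estimate_mean_contact(n: int = 2000, seed: int = 0) -> float:
    """Average first-contact time over n simulated histories."""
    rng = random.Random(seed)
    samples = [first_contact_time(rng) for _ in range(n)]
    finite = [s for s in samples if s != float("inf")]
    return sum(finite) / len(finite) if finite else float("inf")

if __name__ == "__main__":
    print(f"mean first-contact time: {estimate_mean_contact():.0f}")
```

The real models add 3-D geometry, cosmology, and “hard steps” in the origin of life, but the basic move – posit a birth rate and an expansion speed, then compute who meets whom and when – is the same, and nothing about it is harder for values than for spread.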
Acknowledgment
Thanks to Elias Schmied for their helpful comments on a draft. All assumptions/claims/omissions are my own.
Appendix: Relevant work
(This list is not exhaustive.[5] More or less ranked by decreasing order of relevance.)
Robin Hanson (1998) Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization
on selection effects during space colonization
Charlie Guttman (2022) Alien Counterfactuals (and comments)
on the importance/tractability of this topic for assessing the case for reducing X-risks.
The first paragraph of What if human colonization is more humane than ET colonization? in Tomasik (2013)
Whether (post-)humans colonizing space is good or bad, space colonization by other agents seems worse in Brauner and Grosse-Haltz (2018)
Cosmic rescues and comments in DiGiovanni (2021)
argues against Brauner and Grosse-Haltz’s (2018) claim.
Anders Sandberg (2022) Game Theory with Aliens on the Largest Scales
some successful cooperation stories between civilizations with orthogonal values
Rational animations (2022) Could a single alien message destroy us?
a bargaining game scenario between different civilizations that turns straight into conflict before any form of actual bargaining takes place
Non-causal motivations for thinking about the values of aliens and a few thoughts on how to do it.
Andrew Critch (2023) Acausal normalcy
Caspar Oesterheld (2017) Multiverse-wide Cooperation via Correlated Decision Making (section 3.3 and 3.4)
A few relevant questions in Michael Aird’s (2020) Crucial questions for longtermists
’How “bad” would the future be, if an existential catastrophe occurs? How does this differ between different existential catastrophes?
How likely is future evolution of moral agents or patients on Earth, conditional on (various different types of) existential catastrophe? How valuable would that future be?
How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?’
Resources on modeling the spread of grabby aliens.
Robin Hanson et al. (2021) If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare
Tristan Cook (2022) Replicating and extending the grabby aliens model (and references)
Resources that don’t focus on the values of aliens but on relevant evolutionary dynamics
Arik Kershenbaum (2020) The Zoologist’s Guide to the Galaxy. What Animals on Earth Reveal about Aliens – and Ourselves (and references)
Peter M. Todd & Geoffrey F. Miller (2017) The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling? (and references)
See also the Appendix in Buhler (2023)
- ^
- ^
“Alien values” here literally means “the values of aliens”, not “values that look alien to us” as in this confusing LessWrong tag.
- ^
Besides helping us answer the three questions above, it might also give us useful insights regarding the future of human evolution and what our successors might value (see Buhler 2023). Robin Hanson makes a similar point around the beginning of this interview.
- ^
The Appendix lists a few pieces that raised relevant considerations, however.
- ^
And this is more because of my limited knowledge than due to an intent to keep this list short, so please send me other potentially relevant resources!
My instant answer to this question is that it is not of practical importance, except insofar as we may already be inside an alien sphere of influence.
You’re talking primarily about scenarios of alien encounter in which a human-descended superintelligence meets an alien-descended superintelligence. But by definition, the human-descended superintelligence is going to be better than you at inferring the likely distribution of alien life and alien values in the cosmos.
But since you’re interested, I suggest you also look up “Xenology” by Robert Freitas, which is a big obscure work from the 1970s by someone who went on to become one of the major theorists of mechanical nanotechnology. It has weird stuff like eleven metalaws of first contact, devised in 1970 by an Austrian space lawyer.
Apart from the fact that such works may contain valid observations that the current literature overlooks, they may also promote awareness of the extent to which current ideas about alien life are non-empirical guesswork and potentially quite wrong.
Freitas opens his chapter 25 with the proposition that
which is a very Carl Sagan, birth-of-SETI perspective, and one still held by many, many people. On the other hand, our local avant-garde believe that intelligence in the universe is dominated by aggressively expansionist superintelligences that may be trading with other branches of the universal wavefunction. Maybe that’s a very current-year outlook, but even Bing can point out just how many assumptions it’s making.
I think questions like these are important, so thank you for thinking about and writing about this.
A hypothetical civilization which hasn’t observed signs of other life might also be able to find and understand these arguments. This includes the first civilization to create an ASI, if it has no way to know whether it’s the first.
If we accept this, then we may prefer to act as if we are the first, because we may think it best for (alien) civilizations in general to act as if they are the first, to ensure that the actual first one acts appropriately. (i.e., creating an aligned ASI, when the alternative would be an unaligned ASI tiling the universe). You could frame this as a form of acausal trade.
I apologize if this is confusing, I’m autistic and struggle with reducing meaning into language that others understand. Please let me know if you need clarification.
Interesting, thanks! This is relevant to question #2 in the post! Not sure everyone should act as if they were the first, considering the downsides of inter-civilizational conflicts, but yeah, that’s a good point.
I have two things I want to say; I’m not sure if this one is important (it’s a physics question, out of curiosity, and you don’t have to answer), so I’ll make two separate comments.
The question: would an ASI in control of more matter have enough of an advantage to fully take over the smaller amount of matter controlled by another ASI, or would the second ASI have other options, e.g., things like “creating a black hole supercomputer that computes in ways it deems valuable”?
I don’t know, and this is outside the scope of this post, I guess. There are a few organizations, like the Center on Long-Term Risk, studying cooperation and conflict between ASIs, however.