Every once in a while I think about Robert Freitas’ 1984 essay Xenopsychology, in particular his Sentience Quotient (SQ) idea:
It is possible to devise a sliding scale of cosmic sentience universally applicable to any intelligent entity in the cosmos, based on a “figure of merit” which I call the Sentience Quotient. The essential characteristic of all intelligent systems is that they process information using a processor or “brain” made of matter-energy. Generally the more information a brain can process in a shorter length of time, the more intelligent it can be. (Information rate is measured in bits/second, where one bit is the amount of information needed to choose correctly between two equally likely answers to a simple yes/no question.) Also, the lower the brain’s mass the less it will be influenced by fundamental limits such as speed of light restrictions on internal propagation, heat dissipation, and the Square-Cube Law.
The most efficient brain will have the highest information-processing rate I, and the lowest mass M, hence the highest ratio I/M. Since very large exponents are involved, for convenience we define the Sentience Quotient or SQ as the logarithm of I/M, that is, its order of magnitude. Of course, SQ delimits maximum potential intellect–a poorly programmed or poorly designed (or very small) high-SQ brain could still be very stupid. But all else remaining equal, larger-SQ entities should be higher-quality thinkers.
The lower end of our cosmic scale is easy to pin down. The very dumbest brain we can imagine would have one neuron with the mass of the universe (10⁵² kg) and require a time equal to the age of the universe (10¹⁸ seconds) to process just one bit, giving a minimum SQ of −70.
Whenever I see the “The difference between genius and stupidity is that genius has its limits” quote (usually apocryphally attributed to Einstein) I imagine Freitas retorting “no, so does stupidity, the limit is SQ −70”.
What is the smartest possible brain? Dr. H. Bremermann at the University of California at Berkeley claims there is a fundamental limit to intelligence imposed by the laws of quantum mechanics. The argument is simple but subtle. All information, to be acted upon, must be represented physically and be carried by matter-energy “markers.” According to Heisenberg’s Uncertainty Principle in quantum mechanics, the lower limit for the accuracy with which energy can be measured–the minimum measurable energy level for a marker carrying one bit–is given by Planck’s constant h divided by T, the duration of the measurement. If one energy level is used to represent one bit, then the maximum bit rate of a brain is equal to the total energy available E (= mc²) for representing information, divided by the minimum measurable energy per bit (h/T), divided by the minimum time required for readout (T), or mc²/h = 10⁵⁰ bits/sec/kg. Hence the smartest possible brain has an SQ of +50.
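Freitas’ +50 ceiling is just the order of magnitude of c²/h. As a sanity check, here is that arithmetic in a few lines of Python (SI-unit constants; a sketch, not part of the original essay):

```python
import math

c = 2.998e8    # speed of light, m/s
h = 6.626e-34  # Planck's constant, J*s

# Bremermann's limit: maximum bit rate per kilogram of "brain", I/M = c^2 / h
limit = c**2 / h            # ~1.36e50 bits/sec/kg
max_sq = math.log10(limit)  # SQ is the order of magnitude of I/M

print(f"I/M = {limit:.2e} bits/sec/kg, SQ = {max_sq:.1f}")
```

Running this recovers the +50 figure to the stated precision.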
For a while I wondered what such a superbrain would be like, and then I found Seth Lloyd’s paper quantitatively bounding the computational power of a hypothetical “ultimate laptop” of mass 1 kg confined to a volume of 1 L, which derives the same computation limit to within an OOM, concluding that “a typical state of the ultimate laptop’s memory looks like a plasma at a billion degrees Kelvin: the laptop’s memory looks like a thermonuclear explosion or a little piece of the Big Bang!”; its energy throughput would need to be a preposterous 4.04 × 10²⁶ watts, slightly more than the entire sun’s output of 3.846 × 10²⁶ watts(!!).
Where do people fit in? A human neuron has an average mass of about 10⁻¹⁰ kg and one neuron can process 1000–3000 bits/sec, earning us an SQ rating of +13.
That 50 − 13 = 37 OOMs of headroom estimate between humans and Freitas’ “mini-Big Bang superbrains” has stuck in my mind ever since. The “practical” headroom is definitely much lower, although how much I don’t know.
What is most interesting here is not the obvious fact that there’s a great deal of room for improvement (there is!), but rather that all “neuronal sentience” SQs, from insects to mammals, cluster within several points of the human value. From the cosmic point of view, rotifers, honeybees, and humans all have brainpower with roughly equivalent efficiencies. Note that we are still way ahead of the computers, with an Apple II at SQ +5 and even the mighty Cray I only about +9.
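The clustering is easy to see if you plug Freitas’ figures straight into the definition SQ = log₁₀(I/M). A minimal sketch, using the essay’s illustrative masses and bit rates (not measurements):

```python
import math

def sq(bits_per_sec, mass_kg):
    """Sentience Quotient: the order of magnitude of I/M (bits/sec per kg)."""
    return math.log10(bits_per_sec / mass_kg)

# Illustrative figures taken from the essay
examples = {
    "minimum brain": (1e-18, 1e52),  # 1 bit per age of universe (1e18 s), mass of universe
    "Venus flytrap": (1.0, 0.1),     # ~1 bit/sec peak, ~100 g
    "human neuron":  (1e3, 1e-10),   # 1000 bits/sec, 1e-10 kg
}
for name, (i, m) in examples.items():
    print(f"{name:14s} SQ {sq(i, m):+.0f}")
```

This reproduces the −70, +1, and +13 figures, and makes the point vivid: a 140-point scale from −70 to +50 on which all neuronal brains land within a few points of each other.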
As an update on that 40-year-old estimate, ChatGPT-5 medium estimates that “the highest value you can plausibly assign to a real, shipping computer ‘brain’ today belongs to Cerebras’s wafer-scale processor (WSE-3) used in the CS-3 system. Using public performance and physical data, its chip-only SQ comes out around +19½. If you insist on a whole-system number (including packaging/cooling/rack), the CS-3-as-appliance is roughly +16; the most compute-dense Nvidia rack (GB200 NVL72) is about +15.9; and the #1 TOP500 supercomputer (El Capitan) is about +14.2.” I have a feeling smartphones might beat this; I’m not sure why GPT-5 considered and then dismissed assessing them in its reasoning trace.
Another kind of sentience, which we may call “hormonal sentience,” is exhibited by plants. Time-lapse photography shows the vicious struggles among vines in the tropical rain forests, and vegetative phototaxis (turning toward light) is a well-known phenomenon. All these behaviors are mediated, it is believed, by biochemical plant hormones transmitted through the vascular system. As in the animal kingdom, most of the geniuses are hunters–the carnivorous plants. The Venus flytrap, during a 1- to 20-second sensitivity interval, counts two stimuli before snapping shut on its insect prey, a processing peak of 1 bit/sec. Mass is 10-100 grams, so flytrap SQ is about +1. Plants generally take hours to respond to stimuli, though, so vegetative SQs tend to cluster around −2.
How about intelligences greater than human? Astronomer Robert Jastrow and others have speculated that silicon-based computer brains may represent the next and ultimate stage in our evolution. This is valid, but only in a very limited sense. Superconducting Josephson junction electronic gates weigh 10⁻¹² kg and can process 10¹¹ bits/sec, so “electronic sentiences” made of these components could have an SQ of +23 – ten orders beyond man. But even such fantastically advanced systems fall short of the maximum of +50. Somewhere in the universe may lurk beings almost incomprehensible to us, who think by manipulating atomic energy levels and are mentally as far beyond our best future computers as those computers will surpass the Venus flytrap.
Just as consciousness is an emergent of neuronal sentience, perhaps some broader mode of thinking–call it communalness–is an emergent of electronic sentience. If this is true, it might help to explain why (noncommunal) human beings have such great difficulty comprehending the intricate workings of the societies, governments, and economies they create, and require the continual and increasing assistance of computers to juggle the thousands of variables needed for successful management and planning. Perhaps future computers with communalness may develop the same intimate awareness of complex organizations as people have consciousness of their own bodies. And how many additional levels of emergent higher awareness might a creature with SQ +50 display?
The possible existence of ultrahuman SQ levels may affect our ability, and the desirability, of communicating with extraterrestrial beings. Sometimes it is rhetorically asked what we could possibly have to say to a dog or to an insect, if such could speak, that would be of interest to both parties? From our perspective of Sentience Quotients, we can see that the problem is actually far, far worse than this, more akin to asking people to discuss Shakespeare with trees or rocks. It may be that there is a minimum SQ “communication gap,” an intellectual distance beyond which no two entities can meaningfully converse.
At present, human scientists are attempting to communicate outside our species to primates and cetaceans, and in a limited way to a few other vertebrates. This is inordinately difficult, and yet it represents a gap of at most a few SQ points. The farthest we can reach is our “communication” with vegetation when we plant, water, or fertilize it, but it is evident that messages transmitted across an SQ gap of 10 points or more cannot be very meaningful.
What, then, could an SQ +50 Superbeing possibly have to say to us?
If we replace “SQ +50” (which we know can’t work: per Seth Lloyd’s analysis above, such minds would be mini-Big Bangs, so we wouldn’t survive their presence) with more garden-variety ASIs, I guess one possible answer is Charlie Stross’ Accelerando: “...the narrator is Aineko and Aineko is not a cat. Aineko is an sAI that has figured out that humans are more easily interacted with/manipulated if you look like a toy or a pet than if you look like a Dalek. Aineko is not benevolent...”