Positives of a future with human capability improvement over an AI future.
Beren Millidge has an essay arguing that a future in which humanity increases its own capabilities biologically is scarier than a future in which we develop AGI. Here are some concluding sentences from his essay:
Ultimately, human intelligence amplification and the resulting biosingularity has a deeper and more intractable alignment problem than AI alignment, at least if we don’t just assume it away by asserting that humans and our transhuman creations just have some intrinsic and ineffable access to ‘human values’ that potential AIs lack.
The only potential positive as regards alignment of the biosingularity is that it will happen much later, most likely in the closing decades of the 21st century and around the end of the natural lifespans of my personal cohort. This gives significantly more time to prepare than AGI, which is likely coming much sooner, but the problem is much harder and requires huge advances in neuroscience and understanding of brain algorithms to even reach the level of control we have over today’s AI systems (which is likely far from sufficient).
I disagree with his thesis — I think that instead of creating AIs smarter than humans, it would be much better to proceed with increasing the capabilities of humans (for at least the next 100 years).[1][2] Millidge’s claim that the only potential positive of a biofoom is that it starts later seems clearly false. In this note, I will list other imo important (pro tanto) positives of a human foom. I agree with Millidge that there are also positives of the AGI path;[3] these won’t be discussed in the present note. To assess which path is better overall, one could want to compare the positives of the human path to the positives of the AGI path,[4] but I will not do that here. The rest of this note is the list of positives.[5]
more similar entity ⇒ more similar values
an argument in favor of the human path:
assumption 1. it makes sense to speak of the values of an entity, and the values of an entity are some sort of kinda-smooth function of the entity’s structure/constitution
assumption 2. humanity (taken as an entity) currently has pretty good values
conclusion. somewhat modified humanity will still have pretty good values. in particular, a somewhat bioenhanced humanity will still have pretty good values
in contrast: an AGI future will involve the-process-happening-on-earth changing a lot more from what it is now, certainly by each point in time, but also by the time each higher capability level is reached
we can also give a similar argument for single humans:
assumption 1. it makes sense to speak of the values of an entity, and the values of an entity are some sort of kinda-smooth function of the entity’s structure/constitution
assumption 2′. individual humans growing up in current human societies have pretty good values
conclusion’. somewhat modified individual humans growing up in somewhat modified societies will still have pretty good values
in contrast: an AGI future will involve AGIs that differ from current humans much more than these future humans would, growing up in contexts that differ much more from the contexts in which current humans grow up. this is certainly true by each time (or each time after the beginning of each “foom proper”), but also by the point each higher capability level is reached
said another way:
the target of right values is drawn largely around where the arrow(s) determining humans’ values landed. shooting more similar arrows in a similar way is a decent strategy for hitting that target again.
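one hedged way to formalize this argument (my gloss, not the note’s): read assumption 1 as saying the map from minds to values is Lipschitz.

```latex
\[
\|V(m') - V(m)\| \;\le\; L \cdot d(m', m)
\]
% $M$ is mind-space with metric $d$, $V : M \to \mathbb{R}^n$ reads off an
% entity's values, and $L$ is the smoothness constant from assumption 1.
% if bioenhanced humanity $h'$ satisfies $d(h', h) \le \epsilon$ relative
% to current humanity $h$, then $\|V(h') - V(h)\| \le L\epsilon$, so by
% assumption 2 its values stay pretty good; for an AGI $a$, $d(a, h)$ is
% large, and the bound gives no such reassurance.
```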
[humans are]/[humanity is] just cool
on the biofoom path, it will still be humanity, made of humans. humanity is cool. humans are cool. humans with increased capabilities and humanity with increased capabilities would be cool as well. like, of course it’s possible for us to become even much cooler, but we’re already occupying a very specific rare cool region in mindspace
the smarter humans you make will be slotting into existing human institutions/organizations/communities. existing human institutions/organizations/communities will still be useful to the smarter humans. this is a reason for existing human institutions/organizations/communities to persist/thrive/develop.[6] and existing human institutions are carriers/supporters/implementers of human values and also cool
the more capable humans will be continuing existing human (social, political, artistic, technological, scientific, mathematical, philosophical) projects and traditions. human thought will continue to develop. this is cool
like, even if the humans with improved capabilities were to (let’s say) form their own state and build a massive biodome around it and cause the outside to eventually become uninhabitable by polluting it or militarily leveling it to make room for automated factories or whatever, killing all other humans, that would be a very bad thing for them to do, but it would still be a human future, and that’s pretty important
an analogy:
suppose you are at intelligence level k and you have to replace yourself with something at intelligence level k+1. let’s say that you have the following options for making an agent at intelligence level k+1:
the agent could just be you after having taken a linear algebra course or after inventing some new methods in numerical linear algebra
the agent could just be you after getting gene therapy which changes a single genetic variant to one that better supports learning (even in adulthood)
the agent could be some sort of novel mind you create from scratch by some training procedure (ok maybe involving some culture which was also involved when you grew up)
now, there is at least some range of your mind-making precision/understanding parameter (from its minimum up to idk some fairly high level[7]) in which the last option is massively worse on the axis of “how good is it for the resulting guy to live its life” than the first two options! you consider yourself cool! you shouldn’t commit suicide![8]
to spell out the analogy: with humanity as the agent, the human path is like becoming smarter via gene therapy and learning linear algebra, whereas the AI path is like creating a novel thing happening on earth largely in novel ways from scratch[9]
reasons why human thought would be guiding development more/better in a biofoom
we have a lot of experience with and understanding about humans and specifically how to raise humans
for example, voters, politicians, and researchers will all have much higher-quality starting intuitions about what millions of human einsteins would be like than about what a bunch of AIs would be like
biofooming will be happening later, so we will have more time to prepare before it starts happening[10]
but also biofooming will be happening much slower, so humans will have more time to think about each improvement
like, the speed of development will be slower compared to the speed of human thought, so more human thought can go into each step of development. each unit step of development will be more human-thought-fully chosen
this is true at the level of individual humans thinking about what should be done, and also true at the level of institutions and governments (like, a government running at human speed will be better able to legislate a slow biofoom)
reasons a biofooming world would be diffusely human-friendly (sociopolitical factors, anti-[gradual disempowerment] stuff)
since humans will still be expensive to create and will continue to run on [at least order ~20 watts]/[a similar nutrient budget] in the biofoom (unlike how soon after AGI, it would be very cheap to create a new copy of an AI more capable than any human), even a current-100-iq human will probably remain productively employable for a long time
it will be somewhat difficult/weird for more capable humans to render the environment unlivable to less capable humans, because all humans have basically the same environmental requirements (for now)[11]
ditto for good governance, laws, norms. e.g. it’d be quite natural for a law that makes it harder to manipulate the parents of an IVF einstein out of their property to also make it harder to manipulate the amish out of their property
ditto for organizations and economic structures. e.g. it would be very natural for AIs to have a research process that operates in neuralese on a server, with translation costs being high enough that even if a human could do some useful task, doing the task yourself is cheaper than “translating” the task and context to a human; this effect is still present in human-human interactions but the degree to which it is present on the human path is smaller than the degree to which it is present on the AI path
empathy is easier/”more natural” toward beings that are more like you. when agent B is more similar to agent A, it is more likely that [golden rule]/[categorical imperative] style reasoning leads A to treat B well[12]
power concentration stuff is less bad in the biofoom case. it’s not like there would be some entity controlling these more capable humans. these humans will be growing up in various families, involved in various communities/cultures/nations, going to various schools. in the biofoom case, RSI will not be localized to a single lab[13]. monopolies are much more natural in the AI case than in the biofoom case
the AGI path involves power shifting away from people (and to AIs or companies) a lot more
fooming happening slower means that the change in technological/social/cultural/environmental conditions during any given stretch of each human’s life is smaller. to the extent that this change endangers the survival/welfare/usefulness/employability of each human, it is good for it to happen slower
fooming happening slower compared to the pace of life means that more life can be lived by existing living beings before beings at the frontier of development start to find them boring and useless (or like, i don’t want to say that this happens necessarily, but there is a force pushing in this direction)
in a biofoom, there will be a graph of caring connecting most humans to most other humans with not that many steps; it will only have such-and-such clustering coefficients (see the small-world sketch just after this list)
human institutions, organizations, systems are generally more likely to survive for longer. in particular, the following specific human institutions/organizations/systems supporting the ability of each human to live a good long life are more likely to survive for longer: states, laws, systems enforcing laws, social safety nets, democracy, cryonics organizations
technologies which benefit humans with higher capabilities will also be somewhat likely to benefit current-100-iq humans. eg cures to diseases and other medical treatments, better educational methods, new words/concepts, most consumer technologies. if humans remain central to [doing stuff]/[the economy], then a large fraction of economically rewarded innovation will be making humans more capable, and so technological progress will generally be pointed in a more humane direction
in general: various mad local forces at play in the world (greed, status-seeking, etc) will stay pointed in a more humane direction
also, governments will be much better able to understand, track, and deal with messy world problems in the biofoom case
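a toy illustration of the “graph of caring” point above (my sketch; all numbers are made up, with networkx’s Watts–Strogatz small-world model standing in for the real caring network):

```python
# toy "graph of caring" (illustration only; numbers are assumptions).
# nodes are humans, edges mean "cares about"; a small-world model gives
# high clustering together with short paths, i.e. most humans reach most
# other humans in "not that many steps".
import networkx as nx

n, k, p = 2_000, 20, 0.05  # population, caring ties per person, long-range tie fraction
G = nx.watts_strogatz_graph(n, k, p, seed=0)

print("average clustering coefficient:", round(nx.average_clustering(G), 3))
print("average steps of caring between two humans:",
      round(nx.average_shortest_path_length(G), 2))
```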
we know humans [can be and often are] kind/nice to other humans
about humans, we know the following:
most humans consider it very bad to personally kill other humans, and would not kill other humans in normal circumstances. most humans consider the killing of humans highly reprehensible in general[14]
most humans consider it bad to steal from humans. most humans think human property rights should basically be respected. most humans are not in favor of taking property from people. most people would consider it very bad to take property from a person such that this person would then be unable to live an alright life
probably there are some humans who have a deep enough commitment to humanity that they’d remain nice to all existing humans even after individually fooming a lot (though note this won’t be happening in the human foom)
also:
there are specific steps/niches in human evolutionary history which made us nicer to unrelated strangers
other factors
there are factors much more tightly constraining capabilities in the biofoom case than in the AI foom case:
brain volume is tough to increase by idk more than some modest factor (unlike AI compute)
roughly, there is some not-THAT-large finite number of intelligence-increasing “genetic ideas” in the current gene pool, and for at least some initial period these set a cap on how far you can go biologically. it will be very hard to genetically write novel more capable human learning algorithms. the genome interface to mind reprogramming is kinda cursed
it will be expensive and potentially immoral and illegal to “run experiments”
humanity’s correct values are somewhat well-tracked by reflection / self-endorsed development, and the biofoom path will be like that
it makes sense to speak of what sort of reflection and development should happen, distinguishing this from development that is likely to happen. it is very much not true that anything goes! if it were likely/natural that our society would get replaced by a molecular squiggle maximizer, that wouldn’t mean that our society’s true values are to make lots of those molecular squiggles, and that wouldn’t mean this was what should happen all along. whereas if we reflect carefully and understand more stuff and become better versions of ourselves and conclude that we should make lots of some molecular squiggles, then it’s plausible that this was what should happen all along.
it is (at least prima facie) extremely scary/reckless to have a step in your development where you hand things over to some novel mind created largely from scratch! this is done much more on the AI path than on the human path
i say some more on good development and also the general topic of this note here: https://www.lesswrong.com/posts/iemgJhjNLa5eyevWR/kh-s-shortform?commentId=PHm2ZkagfyrhT2Wvz
a concluding remark
self-improvement generally has many good properties over creating a new agent/mind from scratch[15]
i think we should ban AGI
footnotes
[1] It is also a possibility that both options are bad. My view is that we should push forward with increasing human capabilities biologically and culturally/educationally. But I think this question deserves serious analysis, and there are certainly specific things here that one should be very careful about and regulate. However, this note will not be analyzing this question.
[3] That said, I think the analysis of the positives in his essay gets very many things wrong.
[4] But one also doesn’t have to do that, to compare the two paths. One can also just “directly” think about what would happen along each path.
[5] They are not listed in order of importance.
[6] clearly this is a reason for these to be around for longer in wall clock time, but also it’s a reason for them to be around until higher capability levels
[7] in fact it seems plausible that, at least if the mind design has to be done from your own bounded perspective, it will keep being better to self-improve forever. i think this is plausible on this individual future life coolness axis and also all-things-considered.
[8] ok, if you (imo incorrectly) believe in some sort of soul theory of personal identity, then maybe to make this a fair example you would need to imagine the soul getting detached in all three cases; but then maybe you will think all of these are suicides… so maybe this isn’t a good analogy for you… but i guess you should then also believe, in the same sense, in there being a soul attached to each society, in which case it would be a good analogy after all
[9] ok, there’s a meaningful amount of shared culture. even if you thought human minds and AI minds are “mostly cultural”, and that subbing out representation/learning/etc algorithms/structures and radically changing learning contexts doesn’t make it legit to say AGI will be a novel thing created largely in novel ways from scratch, it is still at least a really big step; it’s still creating some totally new guy
[10] as Millidge says
[11] yes, there are sort of examples like climate change. but it is still much easier to imagine entities that are just programs that can be run on arbitrary computers being totally fine with or even preferring a very different environment. there is a large difference in degree here
[12] and we at least know humans have a propensity to carry out and be moved by this style of reasoning
[13] or three labs or whatever. In reality, I think only a single lab will matter, absent strict capability regulation.
[14] there is a major issue around diffuse effects — like, people and institutions currently not tracking [decreasing the lifespans of very many people by a day each] as an instance of [killing people], or as minimally still extremely bad, and ditto for probabilistic versions of this, but i think this is largely an epistemic/skill/intelligence issue (illustrative arithmetic just below)
[15] this is also important to track when thinking about AIs making more capable AIs
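to put an illustrative number on footnote [14] (my arithmetic; the population figure and the one-day-per-person loss are assumptions):

```latex
\[
8\times 10^{9}\ \text{people} \times 1\ \text{day}
\;\approx\; 2.2\times 10^{7}\ \text{person-years}
\;\approx\; 2.7\times 10^{5}\ \text{80-year lives}
\]
% an unnoticed one-day-each lifespan loss is thus on the order of
% hundreds of thousands of full lives.
```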
I really appreciated Beren’s post; it helped me figure out what I think about this for myself. I think another intuition where Beren and I differ is that it is just fine if 1% of the enhanced humans you make are psychopaths or whatever (10% is a bit high though. I don’t expect it to be that high unless you allow the data for your GWAS to have a high % of people cheating on tests or misreporting data (which is a valid concern)). The other enhanced humans can deal with the psychopaths! The human psychopaths would like to solve alignment too and are incentivised to cooperate with the other humans.

This is not true with AIs, because they are so easily changed and copied. The one project that isn’t careful with their AIs can then spoil it for everyone else when their Pythia overtakes the universe, while the safe projects work on interp and conceptual foundations. With even mildly superhuman AI (we have superhuman hacking now), you would have to be super paranoid that the AI didn’t poison its training data or do anything else mischievous.

Meanwhile, as a thought experiment, I would feel quite fine retiring to childcare and handing off the future to, say, ~10 clones of myself that have been enhanced in health and intelligence by ~2-4sd (clones would minimize the difference from myself; multiple clones mean less noise in the handover; and assuming we did this through editing, I could even edit different SNPs for every clone, so there is no consistent misalignment between them and me).
I do think some goal shift from intelligence enhancement is possible, and we should be able to get some data on this already by looking at existing humans.

One intuition I have is that if higher intelligence leads to weird things when it comes to goal misgeneralization, we should see more differences in the values that start emerging during puberty than in those that emerge in early childhood. I’d be curious if someone has compared this between twins (are monozygotic twins more different in typical puberty-related things like parental defiance and sexuality than in earlier-emerging things like their relation to anger?) or has checked for consistent trends of puberty being weirder in smarter people compared to early childhood development.
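(A rough sketch of how that twin comparison could look, on a hypothetical dataset with made-up column names; nothing here is real data:)

```python
# sketch of the proposed MZ-twin comparison (hypothetical file and columns).
import pandas as pd

df = pd.read_csv("mz_twin_traits.csv")  # one row per (twin pair, trait)
# assumed columns: value_a, value_b (the two twins' scores on the trait),
# emergence ("childhood" or "puberty"), pair_iq (the pair's mean IQ)

df["within_pair_diff"] = (df["value_a"] - df["value_b"]).abs()

iq_band = pd.qcut(df["pair_iq"], 3, labels=["low", "mid", "high"])
summary = (
    df.groupby(["emergence", iq_band])["within_pair_diff"].mean().unstack()
)
print(summary)
# if puberty-emerging values diverge disproportionately in the high-IQ
# band, that would weakly support the goal-shift worry above.
```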
Hmmm, some good points. Clearly if I write something too complex this will take way too long; therefore, the cool voice is good.
Okay, yes humans can be cool but all humans cool?
Maybe some governments not cool? How does AI vs Bio affect if one big cool or many small cool? What if like homo deus we get separate Very Smart group of humans and one not so smart?
Human worse thing done worse than LLM worse thing done? Less control over range of expression? Moral mazes lead to psychopaths in control? Maybe non cool humans take control?
Yet, slow process good point. Coolness better chance if longer to remain cool.
Maybe democracy + debate cool? Totalitarianism not cool? Coolness is not group specific for AI or human? Coolness about how cool the decision process is? What does coolness attractor look like?

Cool.
What if like homo deus we get separate Very Smart group of humans and one not so smart?
I agree this would most likely be either somewhat bad or quite bad, probably quite bad (depending on the details), both causally and also indicatorily.
I’ll restrict my discussion to reprogenetics, because I think that’s the main way we get smarter humans. My responses would be “this seems quite unlikely in the short run (like, a few generations)” and “this seems pretty unlikely in the longer run, at least assuming it’s in fact bad” and “there’s a lot we can do to make those outcomes less likely (and less bad)”.
Why unlikely in the short run? Some main reasons:
Uptake of reprogenetics is slow (takes clock time), gradual (proceeds by small steps), and fairly visible (science generally isn’t very secretive; and if something is being deployed at scale, that’s easy to notice, e.g. many clinics offering something or some company getting huge investments). These give everyone more room to cope in general, as discussed above, and in particular give more time for people to notice emerging inequality and such. So humanity in general gets to learn about the tech and its meaning, institute social and governmental regulation, gain access, understand how to use it, etc.
Likewise, the strength of the technology itself will grow gradually (though I hope not too slowly). Further, even as the newest technology gets stronger, the previous, medium-strength tech gets more uptake. This means there’s a continuum of people using different strengths of the technology.
Parents will most likely have quite a range of how much they want to use reprogenetics for various things. Some will only decrease their future children’s disease risks; some will slightly increase IQ; some might want to increase IQ around as much as is available.
IQ, and even more so other cognitive capacities and traits, is controlled…
partly by genetic effects we can make use of (currently something in the ballpark of 20% or so);
partly by genetic effects we can’t make use of (very many of which might take a long time to make use of because they have small and/or rare effects, or because they have interactions with other genes and/or the environment);
partly by non-genetic causes (unstructured environment such as randomness in fetal development, structured external environment such as culture, structured internal environment such as free self-creation / decisions about what to be or do).
Thus, we cannot control these traits, in the sense of hitting a narrow target; we can only shift them around. We cannot make the distribution of a future child’s traits be narrow. (This is a bad state of affairs in some ways, but has substantive redeeming qualities, which I’m invoking here: people couldn’t firmly separate even if they wanted to (though this may need quantification to have much force).)
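A toy simulation of this point (my construction; the ~20% usable-genetics share is from above, and the split of the remainder is made up):

```python
# toy model: selection/editing can shift a trait's mean, but a future
# child's trait distribution stays wide, because only part of the
# variance is genetic-and-usable.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# assumed variance shares (only the 20% figure comes from the comment):
var_usable, var_other_genetic, var_environment = 0.20, 0.30, 0.50

other = rng.normal(0, np.sqrt(var_other_genetic), n)
env = rng.normal(0, np.sqrt(var_environment), n)

baseline = rng.normal(0, np.sqrt(var_usable), n) + other + env
# "perfect" use of the usable component: fix it at its +3 SD value.
shifted = 3 * np.sqrt(var_usable) + other + env

print(f"baseline: mean={baseline.mean():+.2f}, sd={baseline.std():.2f}")
print(f"shifted:  mean={shifted.mean():+.2f}, sd={shifted.std():.2f}")
# the mean moves up by ~1.3 sd, but the sd only shrinks from 1.0 to
# sqrt(0.8) ~ 0.89: you can shift the distribution, not narrow it much.
```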
Together, I suspect that the above points create, not separated blobs, but a broad continuum.
As mentioned in the post, there are ceilings to human intelligence amplification that would probably be hit fairly quickly, and probably wouldn’t be able to be bypassed quickly. (Who knows how long, but I’m imagining at least some decades—e.g. BCIs might take on that time scale or longer to impact human general intelligence—reprogenetics is the main strategy that takes advantage of evolution’s knowledge about how to make capable brains, and I’m guessing that knowledge is more than we will do in a couple decades but not insanely much in the grand scheme of things.)
What can we do? Without going into detail on how, I’ll just basically say “greatly increase equality of access through innovation and education, and research cultures to support those things”.
Okay, I think the gradual point is a good one and also that it very much helps our institutions to be able to deal with increased intelligence.
I would be curious what you think about the idea of more permanent economic rifts and also the general economics of gene editing? Might it be smart to make it a public good instead?
Maybe there’s something here about IQ being hereditary already, and thus the point about a more permanent two-caste society of smart and stupid people is redundant; but somehow the economics of private gene editing over long periods of time still feels a bit off to me?
I would be curious what you think about the idea of more permanent economic rifts and also the general economics of gene editing?
As a matter of science and technology, reprogenetics should be inexpensive. I’ve analyzed this area quite a bit (though not focused specifically on eventual cost), see https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html . My fairly strong guess is that it’s perfectly feasible to have strong reprogenetics that’s pretty inexpensive (on the order of $5k to $25k for a pretty strongly genomically vectored zygote). From a tech and science perspective, I think I see multiple somewhat-disjunctive ways, each of which is pretty plausibly feasible, and each of which doesn’t seem to have any inputs that can’t be made inexpensive.
(As a comparison point, IVF is expensive—something like $8k to $20k—but my guess is that this is largely because of things like regulatory restrictions (needing an MD to supervise egg retrieval, even though NPs can do it well), drug price lock-in (the drugs are easy to manufacture, so are available cheaply on gray markets), and simply economic friction/overhang (CNY is cheaper basically by deciding to be cheaper and giving away some concierge-ness). None of this solves things for IVF today; I’m just saying, it’s not expensive due to the science and tech costing $20k.)
Assuming that it can be technically inexpensive, that cuts our work out for us: make it be inexpensive, by
- Making the tech inexpensive
  - Thinking not just about the current tech, but investing in the stronger tech (stronger → less expensive: https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html#strong-gv-and-why-it-matters )
- Culture of innovation
  - Both internal to the field, and pressure / support from society and gvt
  - Don’t patent and then sit on it or keep it proprietary; instead publish science, or patent and license, or at least provide it as a platform service
- Avoiding large other costs
  - Sane regulation
  - No monopolies
  - Subsidies
Might it be smart to make it a public good instead?
I definitely think that
as much as possible, gvt should fund related science to be published openly; this helps drive down prices, enables more competitive last-mile industry (clinics, platforms for biological operations, etc.), and signals a societal value of caring about this and not leaving it up to raw market forces
probably gvt should provide subsidies with some kind of general voucher (however, it should be a general voucher only, not a voucher for specific genomic choices—I don’t want the gvt controlling people’s genomic choices according to some centralized criterion, as this is eugenicsy, cf. https://berkeleygenomics.org/articles/Genomic_emancipation_contra_eugenics.html )
Is that what you mean? I don’t think we can rely on gvt and philanthropic funding to build out a widely-accessible set of clinics / other practical reprogenetics services, so if you meant nationalizing the industry, my guess is no, that would be bad to do.
I meant the basic economics way of defining a public good, not necessarily the distribution mechanism; electricity and water are public goods, but they aren’t necessarily provided by the government.
I’ve had the semi-ironic idea of setting up a “genetic lottery” if supply were capped, as it would redistribute things evenly (as long as people sign up evenly, which is not true).
Anyways, cool stuff, happy that someone is on top of this!
generally, humans are cool. in fact probably all current humans are intrinsically cool. a few are suffering very badly and say they would rather not exist, and in some cases their lives have been net negative so far. we should try to help these people. some humans are doing bad things to other humans and that’s not cool. some humans are sufficiently bad to others that it would have been better if they were never born. such humans should be rehabilitated and/or contained, and conditions should be maintained/created in which this is disincentivized
Coolness is not group specific for AI or human?
not group specific in principle, but human life is pro tanto strongly cooler. but eg a mind-uploaded human society would still be cool. continuing human life is very important. deep friendships with aliens should not be ruled out in principle, but should be approached with great caution. any claim that we should already care deeply about the possible lives of some not-specifically-chosen aliens that we haven’t yet created, and so have great reason to create them, is prima facie very unlikely. this universe probably only has negentropy for so many beings (if you try to dovetail all possible lives, you won’t even get to running any human for a single step); we should think extremely carefully about which ones we create and befriend
What if like homo deus we get separate Very Smart group of humans and one not so smart?
Human worse thing done worse than LLM worse thing done? Less control over range of expression? Moral mazes lead to psychopaths in control? Maybe non cool humans take control?
i agree these are problems that would need to be handled on the human path
Moral mazes lead to psychopaths in control? Maybe non cool humans take control?
This is a significant worry—but my guess is that having lots more really smart people would make the problem get better in the long run. That stuff is already happening. Figuring out how to avoid it is a very difficult unsolved problem, which is thus likely to be heavily bottlenecked on ideas of various kinds (e.g. ideas for governance, for culture, for technology to implement good journalism, etc etc.).
Hmmm but what if human good not coupled with human wisdom? Maybe more intelligence more power seeking if not carefully implemented?
I think this is just not the case; I’d guess it’s slightly the opposite on average, but in any case, I’ve never heard anyone make an argument for this based on science or statistics. (There could very well be a good such argument, curious to hear!)
Separately, I’d suggest that humanity is bottlenecked on good ideas—including ideas for how to have good values / behave well / accomplish good things / coordinate on good things / support other people in getting more good. A neutral/average-goodness human, but smart, would I think want to contribute to those problems, and be more able to do so.
Tomorrow I’ll share a paper I remember seeing on the ability to do motivated reasoning and to hold onto false views being higher for higher-IQ people (if it actually is statistically significant).
Also maybe the more important things to improve after a certain IQ might be openness and conscientiousness? Thoughts on that?
I do think that it actually is quite possible to do some gene editing on big 5 and ethics tbh but we just gotta actually do it.
more potential for scientifically unknown effects—it’s a complex trait
more potential bad parental decision-making (e.g. not understanding what very high disagreeableness really means)
more potential for parental or even state misuse (e.g. wanting a kid to be very obedient)
more weirdness regarding consent and human dignity and stuff; I think it’s pretty unproblematic to decrease disease risk and increase healthspan and increase capabilities, and only slightly problematic (due to competition effects and possible health issues) to tweak appearance (though I don’t really care about this one); but I think it’s kinda problematic with personality traits because you’re messing with values in ways you don’t understand; not so problematic that it’s ruled out to tweak traits within the quite-normal human range, and I’m probably ultimately pretty in favor of it, but I’d want more sensitivity on these questions
technically more difficult than IQ or disease because conceptually muddled, hard to measure, and maybe genuinely more genetically complex.
I would agree that this is a weird incentive issue and that IQ is probably easier and less thorny than personality traits. With that being said, here’s a fun little thought on alternative ways of looking at intelligence:
Okay but why is IQ a lot more important than “personality”?
IQ is measured as g and is based on correlational evidence about your ability to progress in education and work life. This is one frame to have on it. I think it folds a lot of things about personality into a view that is based on a very specific psychometric frame?
Okay, let’s look at intelligence from another angle: using the predictive processing or RL angle, which is more about explore/exploit, how does that show up? How do we increase the intelligence of a predictive-processing agent? What happens to the parameters governing when to explore and when to exploit, and to the time horizon of future rewards?
Openness here would be the proclivity to explore and look at new sources of information, whilst conscientiousness is about the time horizon of the discounting factor in reward learning. (Correlatively; you could probably define new, better measures of these. The Big 5 traits are probably not the true names for these objectives.)
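(A toy rendering of that mapping in code, my sketch rather than an established model: openness as the exploration rate, conscientiousness as the discount factor in a standard Q-learning update:)

```python
# toy mapping: "openness" -> explore rate, "conscientiousness" -> discount
# factor; both taken to lie in [0, 1]. sketch only, not an established model.
import random

def choose_action(q_values, openness):
    """epsilon-greedy: higher openness means more exploration of new options."""
    if random.random() < openness:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def td_update(q_values, action, reward, next_value, conscientiousness, lr=0.1):
    """higher conscientiousness means a larger discount factor gamma,
    i.e. a longer time horizon on future rewards."""
    gamma = conscientiousness
    q_values[action] += lr * (reward + gamma * next_value - q_values[action])
```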
I think it is better for a society to be able to talk to each other and integrate information well hence I think we should make openness higher from a collective intelligence perspective. I also think it is better if we imagine that we’re playing longer form games with each other as that generally leads to more cooperative equilibria and hence I think conscientiousness would also be good if it is higher.
(The paper I saw didn’t replicate btw, so I walk back the “intelligence makes you more ignorant” point.)
(Also here’s a paper talking about the ability to be creative having a threshold effect around 120 iq with openness mattering more after that, there’s a bunch more stuff like this if you search for it.)
(Also here’s a paper talking about the ability to be creative having a threshold effect around 120 iq with openness mattering more after that, there’s a bunch more stuff like this if you search for it.)
To speculate, it might be the case that effects like this one are at least to some extent due to modern society not being well-adapted to empowering very-high-g[1] people, and instead putting more emphasis on “no one being left behind”[2]. Like, maybe you actually need a proper supportive environment (one that is relatively scarce in the modern world) to reap the gains from very high g, in most cases.
(Not confident about the size of the effect (though I’m sure it’s at least somewhat true) or about the relevance for the study you’re citing, especially after thinking it through a bit after writing this, but I’m leaving it for the sake of expanding the hypothesis space.)
But, if it’s not that, then the threshold thing is interesting and weird.
I would hypothesise that it is more about the underlying ability to use the engine that is intelligence. If we use the classic Eliezer definition (I think it is in the Sequences, at least) of intelligence as the ability to hit a target, then that is only half of the problem, because you have to choose a problem space as well.

Part of intelligence is probably choosing a good problem space, but I think the information sampling and the general knowledge level of the people, institutions, and general information sources around you are quite important to that sampling process. Hence, if you’re better at integrating diverse sources of information, you’re likely better at making progress.

Finally, I think there’s something like a weird scientific version of frame control: a lot of science is about asking the right question, and getting exposure to more ways of asking questions leads to better ways of asking questions.

So to use your intelligence you need to wield it well, and wielding it well partly involves working on the right questions. But if you’re not smart enough to solve the questions in the first place, it doesn’t really matter whether you ask the right question.
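(A sketch of how the cited threshold claim could be probed, on simulated data with made-up effect sizes; a real analysis would of course use real measurements:)

```python
# simulated check of a "creativity threshold at IQ ~120" pattern:
# below the threshold creativity tracks IQ; above it, openness dominates.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
iq = rng.normal(100, 15, n)
openness = rng.normal(0, 1, n)
above = iq > 120

creativity = (
    0.05 * np.minimum(iq, 120)   # IQ helps up to the threshold (assumed)
    + 0.5 * openness * above     # openness matters mainly above it (assumed)
    + rng.normal(0, 1, n)
)

for label, mask in [("below 120", ~above), ("above 120", above)]:
    r = np.corrcoef(openness[mask], creativity[mask])[0, 1]
    print(f"{label}: corr(openness, creativity) = {r:.2f}")
```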
Positives of a future with human capability improvement over an AI future.
Beren Millidge has an essay arguing for the claim that a future in which humanity proceeds with biological capability increasing is scarier than a future in which we develop AGI. Here are some concluding sentences from his essay:
I disagree with his thesis — I think that instead of creating AIs smarter than humans, it would be much better to proceed with increasing the capabilities of humans (for at least the next 100 years). [1] [2] Millidge’s claim that the only potential positive of a biofoom is that it starts later seems clearly false. In this note, I will list other imo important (pro tanto) positives of a human foom. I agree with Millidge that there are also positives of the AGI path; [3] these won’t be discussed in the present note. To assess which path is better overall, one could want to compare the positives of the human path to the positives of the AGI path, [4] but I will not do that here. The rest of this note is the list of positives. [5]
more similar entity ⇒ more similar values
an argument in favor of the human path:
assumption 1. it makes sense to speak of the values of an entity, and the values of an entity are some sort of kinda-smooth function of the entity’s structure/constitution
assumption 2. humanity (taken as an entity) currently has pretty good values
conclusion. somewhat modified humanity will still have pretty good values. in particular, a somewhat bioenhanced humanity will still have pretty good values
in contrast: an AGI future will involve the-process-happening-on-earth changing a lot more from what it is now, certainly by each point in time, but also by the time each higher capability level is reached
we can also give a similar argument for single humans:
assumption 1. it makes sense to speak of the values of an entity, and the values of an entity are some sort of kinda-smooth function of the entity’s structure/constitution
assumption 2′. individual humans growing up in current human societies have pretty good values
conclusion’. somewhat modified individual humans growing up in somewhat modified societies will still have pretty good values
in contrast: an AGI future will involve AGIs that are much more different than these future humans growing up in contexts that are much more different from the contexts in which current humans grow up. this is certainly true by each time (or each time after the beginning of each “foom proper”), but also by the point each higher capability level is reached
said another way:
the target of right values is drawn largely around where the arrow(s) determining the values that humans have landed. shooting more similar arrows in a similar way is a decent strategy for hitting that target again.
[humans are]/[humanity is] just cool
on the biofoom path, it will still be humanity, made of humans. humanity is cool. humans are cool. humans with increased capabilities and humanity with increased capabilities would be cool as well. like, of course it’s possible for us to become even much cooler, but we’re already occupying a very specific rare cool region in mindspace
the smarter humans you make will be slotting into existing human institutions/organizations/communities. existing human institutions/organizations/communities will still be useful to the smarter humans. this is a reason for existing human institutions/organizations/communities to persist/thrive/develop. [6] and existing human institutions are carriers/supporters/implementers of human values and also cool
the more capable humans will be continuing existing human (social, political, artistic, technological, scientific, mathematical, philosophical) projects and traditions. human thought will continue to develop. this is cool
like, even if the humans with improved capabilities were to (let’s say) form their own state and build a massive biodome around it and cause the outside to eventually become uninhabitable by polluting it or militarily leveling it to make room for automated factories or whatever, killing all other humans, that would be a very bad thing for them to do, but it would still be a human future, and that’s pretty important
an analogy:
suppose you are at intelligence level and you have to replace yourself with something at intelligence level . let’s say that you have the following options for making an agent at intelligence level :
the agent could just be you after having taken a linear algebra course or after inventing some new methods in numerical linear algebra
the agent could just be you after getting gene therapy which changes a single genetic variant to one that better supports learning (even in adulthood)
the agent could be some sort of novel mind you create from scratch by some training procedure (ok maybe involving some culture which was also involved when you grew up)
now, there is at least some range of your mind-making precision/understanding parameter from to idk
[7]
in which the last option is massively worse on the axis of “how good is it for the resulting guy to live its life” than the first two options! you consider yourself cool! you shouldn’t commit suicide!
[8]
to spell out the analogy: with humanity as the agent, the human path is like becoming smarter via gene therapy and learning linear algebra, whereas the AI path is like creating a novel thing happening on earth largely in novel ways from scratch [9]
reasons why human thought would be guiding development more/better in a biofoom
we have a lot of experience with and understanding about humans and specifically how to raise humans
for example, voters, politicians, and researchers will all have much higher-quality starting intuitions about what millions of human einsteins would be like than about what a bunch of AIs would be like
biofooming will be happening later, so we will have more time to prepare before it starts happening [10]
but also biofooming will be happening much slower, so humans will have more time to think about each improvement
like, the speed of development will be slower compared to the speed of human thought, so more human thought can go into each step of development. each unit step of development will be more human-thought-fully chosen
this is true at the level of individual humans thinking about what should be done, and also true at the level of institutions and governments (like, a government running at human speed will be better able to legislate a slow biofoom)
reasons a biofooming world would be diffusely human-friendly (sociopolitical factors, anti-[gradual disempowerment] stuff)
since humans will still be expensive to create and will continue to run on [at least order watts]/[a similar nutrient budget] in the biofoom (unlike how soon after AGI, it would be very cheap to create a new copy of an AI more capable than any human), even a current-100-iq human will probably remain productively employable for a long time
it will be somewhat difficult/weird for more capable humans to render the environment unlivable to less capable humans, because all humans have basically the same environmental requirements (for now) [11]
ditto for good governance, laws, norms. e.g. it’d be quite natural for a law that makes it harder to manipulate the parents of an IVF einstein out of their property to also make it harder to manipulate the amish out of their property
ditto for organizations and economic structures. e.g. it would be very natural for AIs to have a research process that operates in neuralese on a server, with translation costs being high enough that even if a human could do some useful task, doing the task yourself is cheaper than “translating” the task and context to a human; this effect is still present in human-human interactions but the degree to which it is present on the human path is smaller than the degree to which it is present on the AI path
empathy is easier/”more natural” toward beings that are more like you. when agent B is more similar to agent A, it is more likely that [golden rule]/[categorical imperative] style reasoning leads A to treat B well [12]
power concentration stuff is less bad in the biofoom case. it’s not like there would be some entity controlling these more capable humans. these humans will be growing up in various families, involved in various communities/cultures/nations, going to various schools. in the biofoom case, RSI will not be localized to a single lab [13] . monopolies are much more natural in the AI case than in the biofoom case
the AGI path involves power shifting away from people (and to AIs or companies) a lot more
fooming happening slower means that the change in technological/social/cultural/environmental conditions during any years of each human’s life is smaller. to the extent that this change endangers the survival/welfare/usefulness/employability of each human, it is good for it to happen slower
fooming happening slower compared to the pace of life means that more life can be lived by existing living beings before being at the frontier of development start to find them boring and useless (or like, i don’t want to say that this happens necessarily, but there is a force pushing in this direction)
in a biofoom, there will be a graph of caring connecting most humans to most other humans with not that many steps; it will only have such-and-such clustering coefficients
human institutions, organizations, systems are generally more likely to survive for longer. in particular, the following specific human institutions/organizations/systems supporting the ability of each human to live a good long life are more likely to survive for longer: states, laws, systems enforcing laws, social safety nets, democracy, cryonics organizations
technologies which benefit humans with higher capabilities will also be somewhat likely to benefit current-100-iq humans. eg cures to diseases and other medical treatments, better educational methods, new words/concepts, most consumer technologies. if humans remain central to [doing stuff]/[the economy], then a large fraction of economically rewarded innovation will be making humans more capable, and so technological progress will generally be pointed in a more humane direction
in general: various mad local forces at play in the world (greed, status-seeking, etc) will stay pointed in a more humane direction
also, governments will be much better able to understand, track, and deal with messy world problems in the biofoom case
we know humans [can be and often are] kind/nice to other humans
about humans, we know the following:
most humans consider it very bad to personally kill other humans, and would not kill other humans in normal circumstances. most humans consider the killing of humans highly reprehensible in general [14]
most humans consider it bad to steal from humans. most humans think human property rights should basically be respected. most humans are not in favor of taking property from people. most people would consider it very bad to take property from a person such that this person would then be unable to live an alright life
probably there are some humans who have a deep enough commitment to humanity that they’d remain nice to all existing humans even after individually fooming a lot (though note this won’t be happening in the human foom)
also:
there are specific steps/niches in human evolutionary history which made us nicer to unrelated strangers
other factors
there are factors much more tightly constraining capabilities in the biofoom case than in the AI foom case:
brain volume is tough to increase by idk more than (unlike AI compute)
roughly, there is some not-THAT-large finite number of intelligence-increasing “genetic ideas” in the current gene pool, and for at least some initial period these set a cap on how far you can go biologically. it will be very hard to genetically write novel more capable human learning algorithms. the genome interface to mind reprogramming is kinda cursed
it will be expensive and potentially immoral and illegal to “run experiments”
humanity’s correct values are somewhat well-tracked by reflection / self-endorsed development, and the biofoom path will be like that
it makes sense to speak of what sort of reflection and development should happen, distinguishing this from development that is likely to happen. it is very much not true that anything goes! if it were likely/natural that our society would get replaced by a molecular squiggle maximizer, that wouldn’t mean that our society’s true values are to make lots of those molecular squiggles, and that wouldn’t mean this was what should happen all along. whereas if we reflect carefully and understand more stuff and become better versions of ourselves and conclude that we should make lots of some molecular squiggles, then it’s plausible that this was what should happen all along.
it is (at least prima facie) extremely scary/reckless to have a step in your development where you hand things over to some novel mind created largely from scratch! this is done much more on the AI path than on the human path
i say some more on good development and also the general topic of this note here: https://www.lesswrong.com/posts/iemgJhjNLa5eyevWR/kh-s-shortform?commentId=PHm2ZkagfyrhT2Wvz
a concluding remark
self-improvement generally has many good properties over creating a new agent/mind from scratch [15]
i think we should ban AGI
It is also a possibility that both options are bad. My view is that we should push forward with increasing human capabilities biologically and culturally/educationally. But I think this question deserves serious analysis, and there are certainly specific things here that one should be very careful about and regulate. However, this note will not be analyzing this question.
That said, I think the analysis of the positives in his essay gets very many things wrong.
But one also doesn’t have to do that, to compare the two paths. One can also just “directly” think about what would happen along each path.
They are not listed in order of importance.
clearly this is a reason for these to be around for longer in wall clock time, but also it’s a reason for them to be around until higher capability levels
in fact it seems plausible that, at least if the mind design has to be done from your own bounded perspective, it will keep being better to self-improve forever. i think this is plausible on this individual future life coolness axis and also all-things-considered.
ok, if you (imo incorrectly) believe in some sort of soul theory of personal identity then maybe to make this a fair example you would need to imagine the soul getting detached in all three examples, but then maybe you will think all of these are suicides… so maybe this isn’t a good analogy for you… but hmm i guess maybe you should also in the same sense believe in there being a soul attached to each society though, and then it would be a good analogy actually
ok, there’s a meaningful amount of shared culture. even if you thought human minds and AI minds are “mostly cultural” and that subbing out representation/learning/etc algorithms/structures and radically changing learning contexts doesn’t make it legit to say AGI will be a novel thing created largely in novel ways from scratch, it is still at least a really big step; it’s still creating some totally new guy
as Millidge says
yes, there are sort of examples like climate change. but it is still much easier to imagine entities that are just programs that can be run on arbitrary computers being totally fine with or even preferring a very different environment. there is a large difference in degree here
and we at least know humans have a propensity to carry out and be moved by this style of reasoning
or three labs or whatever. In reality, I think only a single lab will matter, absent strict capability regulation.
there is a major issue around diffuse effects — like, people and institutions currently not tracking [decreasing the lifespan of people by day each] as an instance of [killing people] or minimally still extremely bad, and ditto for probabilistic versions of this, but i think this is largely an epistemic/skill/intelligence issue
this is also important to track when thinking about AIs making more capable AIs
I really appreciated Beren’s post to figure out what I think about this for myself. I think another intuition where I and Beren differ is that it is just fine if 1% of enhanced humans you make is a psychopath or whatever (10% is a bit high though. I don’t expect it to be that high unless you allow the data for you GWAS to have a high % of people cheating on tests or misreporting data (which is a valid concern)). The other enhanced humans can deal with the psychopaths! The human psychopaths would like to solve alignment too and are incentivised to cooperate with the other humans. This is not true with AIs because they are so easily changed and copied. The one project that isn’t careful with their AI’s can then spoil it for everyone else when their Pythia overtakes the universe, while the safe projects work on interp and conceptual foundations. With even mildly superhuman AI (we have superhuman hacking now), you would have to be super paranoid that the AI didn’t poison it’s training data or did anything else mischievous. Meanwhile, as a thought experiment I would feel quite fine retiring to childcare and handing off the future to say ~10 clones of myself that have been enhanced in health and intelligence by ~2-4sd (clones would minimize difference from myself, multiple means less noise in the handover, assuming we did this through editing I could even edit different SNPs for every clone, so there is no consistent misalignment between them and me).
I do think some goal shift from intelligence enhancement is possible and we should be able to get some data on this already from looking at existing humans. One intuition I have is that if higher intelligence leads to weird things when it comes to goal misgeneralization, we should see more difference in value that start emerging in humans during puberty compared to in early childhood. I’d be curious if someone has compared this between twins (monozygous twins being more different typical puberty related emotions like parental defiance and their sexuality compared to their relation to anger?) or has checked for consistent trends in puberty being more weird in smarter people compared to early childhood development.
Hmmm, some good points. Clearly if I write something to complex this will take way too long and therefore, the cool voice is good.
Okay, yes humans can be cool but all humans cool?
Maybe some governments not cool? How does AI vs Bio affect if one big cool or many small cool? What if like homo deus we get separate Very Smart group of humans and one not so smart?
Human worse thing done worse than LLM worse thing done? Less control over range of expression? Moral mazes lead to psychopaths in control? Maybe non cool humans take control?
Yet, slow process good point. Coolness better chance if longer to remain cool.
Maybe democracy + debate cool? Totalitarianism not cool? Coolness is not group specific for AI or human? Coolness about how cool the decision process is? What does coolness attractor look like?
Cool.
I agree this would most likely be either somewhat bad or quite bad, probably quite bad (depending on the details), both causally and also indicatorily.
I’ll restrict my discussion to reprogenetics, because I think that’s the main way we get smarter humans. My responses would be “this seems quite unlikely in the short run (like, a few generations)” and “this seems pretty unlikely in the longer run, at least assuming it’s in fact bad” and “there’s a lot we can do to make those outcomes less likely (and less bad)”.
Why unlikely in the short run? Some main reasons:
Uptake of reprogenetics is slow (takes clock time), gradual (proceeds by small steps), and fairly visible (science generally isn’t very secretive; and if something is being deployed at scale, that’s easy to notice, e.g. many clinics offering something or some company getting huge investments). These give everyone more room to cope in general, as discussed above, and in particular gives more time for people to notice emerging inequality and such. So humanity in general gets to learn about the tech and its meaning, institute social and governmental regulation, gain access, understand how to use it, etc.
Likewise, the strength of the technology itself will grow gradually (though I hope not too slowly). Further, even as the newest technology gets stronger, the previous medium strength tech gets more uptake. This means there’s a continuum of people using different levels of strength of the technology.
Parents will most likely have quite a range of how much they want to use reprogenetics for various things. Some will only decrease their future children’s disease risks; some will slightly increase IQ; some might want to increase IQ around as much as is available.
IQ, and even more so for other cognitive capacities and traits, is controlled…
partly by genetic effects we can make use of (currently something in the ballpark of 20% or so);
partly by genetic effects we can’t make use of (very many of which might take a long time to make use of because they have small and/or rare effects, or because they have interactions with other genes and/or the environment);
partly by non-genetic causes (unstructured environment such as randomness in fetal development, structured external environment such as culture, structured internal environment such as free self-creation / decisions about what to be or do).
Thus, we cannot control these traits, in the sense of hitting a narrow target; we can only shift them around. We cannot make the distribution of a future child’s traits be narrow. (This is a bad state of affairs in some ways, but has substantive redeeming qualities, which I’m invoking here: people couldn’t firmly separate even if they wanted to (though this may need quantification to have much force).)
Together, I suspect that the above points create, not separated blobs, but a broad continuum.
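To make the “shifts but can’t narrow” point concrete, here’s a toy simulation with illustrative numbers only (the ~20% usable variance from above; five scored embryos per family is my assumption): picking the best-predicted embryo shifts the mean by roughly half a standard deviation, while the outcome distribution stays nearly as wide as baseline, so the populations overlap heavily rather than forming separated blobs.

```python
# Toy model: selection on a predictor capturing ~20% of trait variance
# shifts the mean but cannot narrow the outcome distribution.
import numpy as np

rng = np.random.default_rng(0)
var_usable = 0.20      # fraction of variance the predictor captures (assumed)
n_embryos = 5          # candidates scored per family (assumed)
n_families = 100_000

# Each embryo's predicted score; each family picks the highest-scoring one.
predicted = rng.normal(0, np.sqrt(var_usable), (n_families, n_embryos))
chosen = predicted.max(axis=1)

# The realized trait adds everything the predictor does not capture.
trait = chosen + rng.normal(0, np.sqrt(1 - var_usable), n_families)
baseline = rng.normal(0, 1, n_families)

print(f"mean shift: {trait.mean():+.2f} SD")                  # roughly +0.5 SD
print(f"outcome sd: {trait.std():.2f} vs baseline {baseline.std():.2f}")
```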
As mentioned in the post, there are ceilings to human intelligence amplification that would probably be hit fairly quickly, and probably wouldn’t be able to be bypassed quickly. (Who knows how long, but I’m imagining at least some decades—e.g. BCIs might take on that time scale or longer to impact human general intelligence—reprogenetics is the main strategy that takes advantage of evolution’s knowledge about how to make capable brains, and I’m guessing that knowledge is more than we will do in a couple decades but not insanely much in the grand scheme of things.)
What can we do? Without going into detail on how, I’ll just basically say “greatly increase equality of access through innovation and education, and research cultures to support those things”.
Okay, I think the gradual point is a good one, and that gradualness very much helps our institutions deal with increased intelligence.
I would be curious what you think about the idea of more permanent economic rifts, and about the general economics of gene editing. Might it be smart to make it a public good instead?
Maybe there’s something here about IQ already being heritable, which would make the point about a more permanent two-caste society of smart and less-smart people redundant; but somehow the economics of private gene editing over long periods of time still feels a bit off to me.
As a matter of science and technology, reprogenetics should be inexpensive. I’ve analyzed this area quite a bit (though not focused specifically on eventual cost); see https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html . My fairly strong guess is that it’s perfectly feasible to have strong reprogenetics that’s pretty inexpensive (on the order of $5k to $25k for a pretty strongly genomically vectored zygote). From a tech and science perspective, I think I see multiple somewhat-disjunctive ways, each of which is pretty plausibly feasible, and each of which doesn’t seem to have any inputs that can’t be made inexpensive.
(As a comparison point, IVF is expensive—something like $8k to $20k—but my guess is that this is largely because of things like regulatory restrictions (needing an MD to supervise egg retrieval, even though NPs can do it well), drug price lock-in (the drugs are easy to manufacture, so are available cheaply on gray markets), and simply economic friction/overhang (CNY is cheaper basically by deciding to be cheaper and giving away some concierge-ness). None of this solves things for IVF today; I’m just saying, it’s not expensive due to the science and tech costing $20k.)
Assuming that it can be technically inexpensive, that cuts our work out for us: make it inexpensive, by
- Making the tech inexpensive
  - Thinking not just about the current tech, but investing in the stronger tech (stronger → less expensive https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html#strong-gv-and-why-it-matters )
  - Culture of innovation
    - Both internal to the field, and pressure / support from society and gvt
  - Don’t patent and then sit on it or keep it proprietary; instead publish science, or patent and license, or at least provide it as a platform service
- Avoiding large other costs
  - Sane regulation
  - No monopolies
- Subsidies
I definitely think that:
- as much as possible, gvt should fund related science to be published openly; this helps drive down prices, enables more competitive last-mile industry (clinics, platforms for biological operations, etc.), and signals a societal value of caring about this and not leaving it up to raw market forces
- probably gvt should provide subsidies with some kind of general voucher (however, it should be a general voucher only, not a voucher for specific genomic choices; I don’t want the gvt controlling people’s genomic choices according to some centralized criterion, as this is eugenicsy, cf. https://berkeleygenomics.org/articles/Genomic_emancipation_contra_eugenics.html )
Is that what you mean? I don’t think we can rely on gvt and philanthropic funding to build out a widely-accessible set of clinics / other practical reprogenetics services, so if you meant nationalizing the industry, my guess is no, that would be bad to do.
I meant the basic economics way of defining a public good, not necessarily the distribution mechanism; electricity and water are public goods but they aren’t necessarily provided by the government.
I’ve had the semi-ironic idea of setting up a “genetic lottery” if supply were capped, since it would redistribute things evenly (as long as people sign up evenly, which isn’t true).
Anyways, cool stuff, happy that someone is on top of this!
generally, humans are cool. in fact probably all current humans are intrinsically cool. a few are suffering very badly and say they would rather not exist, and in some cases their lives have been net negative so far. we should try to help these people. some humans are doing bad things to other humans and that’s not cool. some humans are sufficiently bad to others that it would have been better if they were never born. such humans should be rehabilitated and/or contained, and conditions should be maintained/created in which this is disincentivized
not group specific in principle, but human life is pro tanto strongly cooler. but eg a mind uploaded human society would still be cool. continuing human life is very important. deep friendships with aliens should not be ruled out in principle, but should be approached with great caution. any claim that we should already care deeply about the possible lives of some not-specifically-chosen aliens that we might create, that we haven’t yet created, and so that we have great reason to create them, is prima facie very unlikely. this universe probably only has negentropy for so many beings (if you try to dovetail all possible lives, you won’t even get to running any human for a single step); we should think extremely carefully about which ones we create and befriend
i agree these are problems that would need to be handled on the human path
This is a significant worry—but my guess is that having lots more really smart people would make the problem get better in the long run. That stuff is already happening. Figuring out how to avoid it is a very difficult unsolved problem, which is thus likely to be heavily bottlenecked on ideas of various kinds (e.g. ideas for governance, for culture, for technology to implement good journalism, etc etc.).
Hmmm but what if human good not coupled with human wisdom? Maybe more intelligence more power seeking if not carefully implemented?
Probably better than doing the Big AI though.
I think this is just not the case; I’d guess it’s slightly the opposite on average, but in any case, I’ve never heard anyone make an argument for this based on science or statistics. (There could very well be good such arguments; curious to hear!)
Separately, I’d suggest that humanity is bottlenecked on good ideas—including ideas for how to have good values / behave well / accomplish good things / coordinate on good things / support other people in getting more good. A neutral/average-goodness human, but smart, would I think want to contribute to those problems, and be more able to do so.
Tomorrow I’ll share a paper I remember seeing, on the capacity for motivated reasoning and holding onto false views being higher in higher-IQ people (if it actually is statistically significant).
Also, maybe the more important things to improve past a certain IQ are openness and conscientiousness? Thoughts on that?
I do think it actually is quite possible to do some gene editing on the Big 5 and on ethics, tbh, but we just gotta actually do it.
Personality is a more difficult issue because:
- more potential for scientifically unknown effects (it’s a complex trait)
- more potential for bad parental decision-making (e.g. not understanding what very high disagreeableness really means)
- more potential for parental or even state misuse (e.g. wanting a kid to be very obedient)
- more weirdness regarding consent and human dignity and stuff; I think it’s pretty unproblematic to decrease disease risk and increase healthspan and increase capabilities, and only slightly problematic (due to competition effects and possible health issues) to tweak appearance (though I don’t really care about this one); but I think it’s kinda problematic with personality traits because you’re messing with values in ways you don’t understand; not so problematic that it’s ruled out to tweak traits within the quite-normal human range, and I’m probably ultimately pretty in favor of it, but I’d want more sensitivity on these questions
- technically more difficult than IQ or disease, because conceptually muddled, hard to measure, and maybe genuinely more genetically complex.
That said, yeah, I’m in favor of working out how to do it well. E.g. I’m interested in understanding and eventually measuring “wisdom” https://www.lesswrong.com/posts/fzKfzXWEBaENJXDGP/what-is-wisdom-1 .
I would agree that this is a weird incentive issue, and that IQ is probably easier and less thorny than personality traits. With that being said, here’s a fun little thought on alternative ways of looking at intelligence:
Okay but why is IQ a lot more important than “personality”?
IQ is measured as g, based on correlational evidence about your ability to progress in education and work life. That is one frame to have on it. I think it collapses a lot of things about personality into a view based on one very specific psychometric frame.
Okay, so let’s look at intelligence from another angle: the predictive processing or RL angle, which is more about explore/exploit. How does that show up? How do we increase the intelligence of a predictive-processing agent? How do the parameters for when to explore versus when to exploit, and the time horizon over which future rewards are discounted, come in?
Openness here would be the proclivity to explore and look at new sources of information, whilst conscientiousness is about the time horizon of the discounting factor in reward learning. (Correlatively; you could probably define new, better measures of these, since the Big 5 traits are probably not the true names for these objectives.)
I think it is better for a society to be able to talk to each other and integrate information well, hence from a collective-intelligence perspective I think we should push openness higher. I also think it is better if we imagine we’re playing longer-form games with each other, as that generally leads to more cooperative equilibria; hence higher conscientiousness would also be good.
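To make that mapping concrete, here’s a toy bandit agent, purely my construal of the frame above and not a validated model of either trait: “openness” sets the probability of exploring instead of exploiting, and “conscientiousness” sets how strongly accumulated evidence is retained over the latest sample (a recency-weighting stand-in for the reward-learning time horizon):

```python
# Toy sketch: "openness" as exploration rate, "conscientiousness" as how
# much long-run evidence is weighted, in a simple epsilon-greedy bandit.
import numpy as np

rng = np.random.default_rng(0)

def run_agent(openness, conscientiousness, true_means, steps=1000):
    """Return total reward collected by the agent."""
    q = np.zeros(len(true_means))  # running value estimates per option
    total = 0.0
    for _ in range(steps):
        if rng.random() < openness:          # explore a random option
            arm = int(rng.integers(len(true_means)))
        else:                                # exploit the best-looking one
            arm = int(np.argmax(q))
        reward = rng.normal(true_means[arm], 1.0)
        # Higher "conscientiousness" = longer memory: old evidence is
        # retained more strongly relative to the newest sample.
        q[arm] = conscientiousness * q[arm] + (1 - conscientiousness) * reward
        total += reward
    return total

arms = np.array([0.0, 0.5, 1.0])
for o, c in [(0.05, 0.5), (0.2, 0.9)]:
    print(f"openness={o}, conscientiousness={c}: "
          f"total reward={run_agent(o, c, arms):.0f}")
```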
(The paper I saw didn’t replicate, btw, so I walk back the “intelligence makes you more ignorant” point.)
(Also, here’s a paper talking about the ability to be creative having a threshold effect around 120 IQ, with openness mattering more after that; there’s a bunch more stuff like this if you search for it.)
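A threshold claim like that is straightforward to probe with a segmented fit; here’s a sketch on simulated data (the threshold is true by construction, purely to show the method, and says nothing about the real effect):

```python
# Sketch: test whether openness predicts creativity more strongly above an
# IQ threshold, by fitting separate regressions below and above 120.
import numpy as np

rng = np.random.default_rng(0)

# Simulated population in which the threshold story holds by construction.
n = 5000
iq = rng.normal(100, 15, n)
openness = rng.normal(0, 1, n)
creativity = (0.03 * iq
              + np.where(iq > 120, 0.5, 0.1) * openness
              + rng.normal(0, 1, n))

# Compare the openness coefficient in the two IQ regimes.
for label, mask in [("IQ <= 120", iq <= 120), ("IQ > 120", iq > 120)]:
    X = np.column_stack([np.ones(mask.sum()), iq[mask], openness[mask]])
    beta, *_ = np.linalg.lstsq(X, creativity[mask], rcond=None)
    print(f"{label}: openness coefficient = {beta[2]:.2f}")
```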
To speculate, it might be the case that effects like this one are at least to some extent due to the modern society not being well-adapted to empowering very-high-g[1] people, and instead putting more emphasis on “no one being left behind”[2]. Like, maybe you actually need a proper supportive environment (that is relatively scarce in the modern world) to reap the gains from very high g, in most cases.
(Not confident about the size of the effect (though I’m sure it’s at least somewhat true) or about the relevance for the study you’re citing, especially after thinking it through a bit after writing this, but I’m leaving it for the sake of expanding the hypothesis space.)
But, if it’s not that, then the threshold thing is interesting and weird.
or more generally high-intelligence
putting aside whether it works for the ones it’s supposed to serve or not
I would hypothesise that it is more about the underlying ability to use the engine that is intelligence. If we take the classic Eliezer definition (I think it’s in the Sequences, at least) of intelligence as the ability to hit a target, then that is only half of the problem, because you have to choose a problem space as well.
Part of intelligence is probably choosing a good problem space, but I think the information sampling, and the general knowledge level of the people, institutions, and information sources around you, matter a lot to that sampling process. Hence if you’re better at integrating diverse sources of information, you’re likely better at making progress.
Finally, I think there’s something here like a scientific version of frame control: a lot of science is about asking the right question, and getting exposure to more ways of asking questions leads to better ways of asking questions.
So to use your intelligence you need to wield it well, and wielding it well partly involves working on the right questions. But if you’re not smart enough to solve the questions in the first place, it doesn’t really matter whether you ask the right one.
this should be a post, not just a shortform?