I’m a software developer by training with an interest in genetics. I currently run a startup working on multiplex gene editing technology.
GeneSmith
As someone who works in genetics and has been told for years he is a “eugenicist” who doesn’t care about minorities, I understand your pain.
It’s just part of the tax we have to pay for doing something that isn’t the same as everyone else.
If you continue down this path, it will get easier to deal with these sorts of criticisms over time. You’ll develop little mental techniques that make these interactions less painful. You’ll find friends who go through the same thing. And the sheer repetitiveness will make these criticisms less emotionally difficult.
And I hope you do continue because the work you’re doing is very important. When new technology causes some kind of change, people look around for the nearest narrative that suits their biases. The narratives in leftist spaces right now are insane. AI is not a concern because it uses too much water. It’s not a concern because it is biased against minorities (if anything it is a little biased in favor of them!)
There is one narrative that I think would play well in leftist spaces which comes pretty close to the truth, and isn’t yet popular:
AI companies are risking all of our lives in a race for profits
Simply getting this idea out there and more broadly known in leftist spaces is incredibly valuable work.
So I hope you keep going.
On the topic of predictability and engineering – sure, we can influence predispositions, but the point I was trying to make is epistemological: the level of uncertainty and interdependence in human development makes the engineering metaphor fragile. Medicine, to your point, does aim to “figure out” complex systems – but it’s also deeply aware of its limitations, unintended consequences, and historical hubris. That humility didn’t come through strongly in your piece, at least to me.
Perhaps so. But the default assumption, seemingly made by just about everyone, is that there is nothing we can do about any of this stuff.
And that’s just wrong. The human genome is not a hopelessly complex web of entangled interactions. Most of the variance in common traits is linear in nature, meaning we can come up with reasonably accurate predictions of traits by simply adding up the effects of all the genes involved. And by extension, if we could flip enough of these genes, we could actually change people’s genetic predispositions.
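To make the “just add up the effects” point concrete, here’s a minimal sketch of an additive polygenic score. The variant IDs and effect sizes are made up for illustration; real predictors use hundreds of thousands of SNPs with effects estimated from GWAS.

```python
# Toy additive polygenic score: sum of (effect-allele count x estimated effect size).
# Variant IDs and effect sizes below are invented purely for illustration.
effect_sizes = {
    "rs0000001": 0.12,   # effect per copy of the trait-increasing allele
    "rs0000002": -0.05,
    "rs0000003": 0.08,
}

def polygenic_score(genotype: dict) -> float:
    """genotype maps variant ID -> number of effect-allele copies (0, 1, or 2)."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

# Someone carrying two copies of the first variant and one copy of the third:
print(polygenic_score({"rs0000001": 2, "rs0000003": 1}))  # 0.32
```

That additivity is what makes both prediction and (in principle) editing tractable: each variant contributes its small effect mostly independently of the others.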
Furthermore, nature has given us the best dataset in genetics we could ask for: billions of siblings who act as literal randomized controlled trials for the effect of genes on life outcomes.
If I felt that the world was suffering from excess genetic engineering hubris, then I might be more cautious in my language. But that is not in fact what is happening! What is happening is humanity is being far too cautious, mostly because they hold a lot of false assumptions about how complex the genome is.
We have this insane situation in reproductive genetics right now where tens of thousands of children are being born every year with much higher genetic predispositions towards disease than they should have because doctors don’t understand polygenic risk scores and would rather implant embryos that look nice under a microscope.
do we really know enough about gene–environment interactions to be confident in the long-term effects of shifting polygenic profiles at scale?
It depends what your standard is: if the bar we need to meet is “we can’t make any changes that might result in unpredictable effects”, then of course we can’t be confident.
But if the bar is “we know enough to make changes that will, with high confidence, improve the life of the child”, then we are already there for small changes, and can get there relatively soon for much larger ones.
Hi Nabokos,
I appreciate the comment. I think many academically inclined folks probably have similar views to yours. Let me explain my thinking here:
What troubles me most is how little attention is paid to emotional attachment, which is arguably the cornerstone of healthy development. This reads more like a plan for growing babies in vitro than raising actual children.
If I were to go into the ins and outs of emotional attachment, this already long post would have been at least twice the length. And seeing as I am not an expert in the area, I hardly think it would have been useful to the average reader.
Of course emotional attachment is important. It’s one of the most important things for happy, healthy childhood development.
But there are many good books on that topic and I don’t think everyone who writes about any aspect of childhood or babies needs to include a section on the topic. If you think there are good resources people here should read, please post them!
You can’t predict or engineer how a baby will turn out.
It’s certainly true you can’t predict EXACTLY how a baby will turn out, but you CAN influence predispositions. In fact, most of parenting is about exactly this! How to change your child’s environment to influence the kinds of things they do and the sort of person they become.
Honest question: do you have kids?
Sadly I do not have kids yet! I hope to have them in the next few years.
Also, much of the terminology you use feels superficial or misapplied. Science and education aren’t just about memorizing buzzwords – they require deep understanding, and that takes time, context, and mentorship.
I don’t see how this is at odds with genetic engineering.
I’m a medical doctor, and what strikes me again and again is how people assume that complex systems – like human beings – can be “figured out” with enough reading or clever design
I think it’s fair to say that the entire field of medicine is one big attempt to do exactly this. I don’t see how gene editing differs from what we try to do with drugs.
(e.g. hypertension isn’t caused by a single gene)
Where exactly did I say this?
But did it ever occur to you that these ‘optimized’ new people might come with new problems and diseases? Biology tends to work like that: you push on one part, something else breaks.
Yes, I have in fact considered this. There are several different ways to assess how big of a problem this could be:
You can look at genetic correlations between different diseases to see if there’s some kind of tradeoff. When we do this we see that the correlations are generally (though not universally) weak, and when they do exist, they actually tend to work in your favor, meaning decreasing the risk of one disease is more likely than not to result in a tiny reduction in the risk of others.
You can just look directly at people who have low genetic predispositions to various diseases and see if they have any issues at different rates from the general population. And the answer here again is generally “no”.
Together these imply that it should in fact be possible to significantly improve health, intelligence, and other aspects of what makes life good without necessarily making that many difficult tradeoffs.
Also, just based on what we know about evolution it shouldn’t actually surprise us that much that we can increase overall performance, especially when there has been as big of a shift in the environment as what we’ve experienced in the last few hundred years.
It would be nice if your critique actually addressed some specific concrete issues you have with the post or its ideas. The one specific example you gave (me thinking hypertension is caused by one gene) isn’t even something I said. I’m not even sure where you’re getting that idea from.
I think you make a reasonably compelling case, but when I think about the practicality of this in my own life it’s pretty hard to imagine not spending any time talking to chatbots. ChatGPT, Claude and others are extremely useful.
Inducing psychosis in your users seems like a bad business strategy, so I view the current cases as accidental collateral damage, mostly borne of the tendency of some users to end up going down weird self-reinforcing rabbit holes. I haven’t had any such experiences because this is not the way I use chatbots, but I guess I can see perhaps some extra caution warranted for safety researchers if these bots get more powerful and are actually adversarial to them?
I think this threat model is only applicable in a pretty narrow set of scenarios: one where powerful AI is agentic enough to decide to induce psychosis if you’re chatting with it, but not agentic enough to make this happen on its own despite likely being given ample opportunities to do so outside of contexts in which you’re chatting with it. And also one where it actually views safety researchers as pertinent to its safety rather than as irrelevant.
I guess I could see that happening but it doesn’t seem like such circumstances would last long.
I would share your concern if TurnTrout or others were replying to everything Nate published in this way. But well… the original comment seemed reasonably relevant to the topic of the post and TurnTrout’s reply seemed relevant to the comment. So it seems like there’s likely a limiting principle here that would prevent your concern from being realized.
It never really got any traction. And I think you’re right about the similarity to eugenics somewhat defeating the purpose.
I think terms like “reproductive freedom” or “reproductive choice” actually get the idea across better anyways since you don’t have to stop and explain the meaning of the word.
This is one of my favorite articles I’ve read on this website in months. Since I’m guessing most people won’t read the whole thing, I’ll just quote a few of the highlights here:
Measles is an unremarkable disease based solely on its clinical progression: fever, malaise, coughing, and a relatively low death rate of ~0.2%. What is astonishing about the disease is its capacity to infect cells of the adaptive immune system (memory B- and T-cells). This means that if you do end up surviving measles, you are left with an immune system not dissimilar to one of a just-born infant, entirely naive to polio, diphtheria, pertussis, and every single other infection you received protection against either via vaccines or natural infection. It can take up to 3 years for one’s ‘immune memory’ to return, prior to which you are entirely immunocompromised.
I had literally no idea measles did this. As if I needed another reason to get vaccinated.
On the highest end, Alzheimer’s received $3538M in funding in 2023, and caused 451 DALYs per 100k people worldwide. So, 3538:451, or 7.8.
Then Crohn’s Disease, which has the ratio 92:20.97 (4.3).
Slightly lower is diabetes, 1187:801.5 (1.4).
Close to it is epilepsy, 245:177.84 (1.6).
Finally, near the bottom of the list is endometriosis, 29:56.61, or .5.
It’s kind of shocking there is such a big difference between diseases when it comes to funding. Literally a 16x discrepancy between Alzheimer’s and endometriosis funding relative to disease burden (and a 52x difference between Alzheimer’s and COPD!). I so wish that DOGE had been functional, because it’s exactly situations like this that pose the biggest opportunity for improved government operations.
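For anyone who wants to check the arithmetic, here’s the Alzheimer’s-vs-endometriosis comparison using just the numbers quoted above (COPD isn’t listed there, so I can’t reproduce the 52x figure):

```python
# Funding-per-DALY ratios from the quoted figures (funding in $M for 2023,
# burden in DALYs per 100k people worldwide).
alzheimers = 3538 / 451      # ~7.8
endometriosis = 29 / 56.61   # ~0.5

print(round(alzheimers / endometriosis, 1))  # ~15.3, i.e. the roughly 16x gap mentioned above
```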
I think the most interesting part of this disease is how it’s kind of sort of not really a form of cancer. It’s basically cancer that causes a lot of issues but very rarely grows in the aggressive way that other cancers do.
The fact that you literally find endometrial lesions with some of the mutations that are hallmarks of cancer implies that there are probably a lot of endometriosis cases that are cleared up by the immune system naturally, which no one ever finds out about. These mutations show up because they provide a survival advantage to the endometrial cells.
Minor nitpick about heritability
Lastly I have a very minor nitpick. The Nature paper you linked ostensibly showing very high heritability doesn’t actually mention heritability in the abstract. The paper made a genetic predictor for endometriosis which explained 5% of the variance (not particularly high, especially given the sample size they were working with).
It does cite a paper about heritability, but that paper doesn’t show endometriosis as being unusually heritable; it shows 47% of the variance can be explained by additive genetic factors. That’s pretty middle-of-the-pack as far as heritability goes. Conditions like Alzheimer’s and schizophrenia are significantly more heritable: roughly 70% and 80% respectively.
The actual heritability of endometriosis is likely somewhat higher than that, because most conditions have some non-additive genetic variance. This paper (somewhat questionably) attributes the entire remainder of the variance to “unique environmental factors”.
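For readers less familiar with the terminology, the decomposition being used here is the standard variance-components one (simplified; real models add further terms like shared environment):

$$\mathrm{Var}(P) = \mathrm{Var}(A) + \mathrm{Var}(D) + \mathrm{Var}(E), \qquad h^2 = \frac{\mathrm{Var}(A)}{\mathrm{Var}(P)}$$

where $P$ is the phenotype, $A$ the additive genetic component, $D$ the non-additive genetic component (dominance and epistasis), and $E$ the environment. The 47% figure is an estimate of $h^2$ (narrow-sense heritability); “some non-additive genetic variance” just means $\mathrm{Var}(D) > 0$, which is why the total genetic contribution is likely a bit higher than 47%.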
The actual genetics of the disease itself are almost shockingly polygenic. The study had 61k cases, yet only identified 42 genome-wide significant hits. I can’t look at the rest of the paper due to a paywall (SciHub has stopped archiving new articles :() but it seems there aren’t any especially common alleles with large effect sizes.
This actually strongly supports the “multiple causes of endometriosis” narrative you explore in your post: if there are that many genetic variants with small effects, there are probably many different ways the disease can manifest (or at least many influences on when and how it shows up).
Sounds like we should talk
I agree this is a worry. Apart from this stuff just not mattering because AI takes over first, dramatic acceleration of inequality is my biggest worry.
This tech almost certainly WILL accelerate inequality at the start. But in the long run I think there’s no reason we can’t make gene editing available for almost everyone.
Editing reagents are cheap. We’re working with at most a few microliters of editing reagents (more realistically a few nanoliters).
It costs a lot of money to collect the data and put it into biobanks, but once that is done you’ve got the data forever.
And at SCALE the cost of absolutely everything comes down.
Maybe we’ll get there someday. I think for the next decade at least it’s going to be hard to beat lead paint elimination or animal welfare initiatives that get a hundred million chickens out of battery cages.
Care to explain how you think it’s being misused?
I find this argument fairly compelling. I also appreciate the fact that you’ve listed out some ways it could be wrong.
Your argument matches fairly closely with my own views as to why we exist, namely that we are computationally irreducible.
It’s hard to know what to do with such a conclusion. On the one hand it’s somewhat comforting because it suggests even if we fuck up, there are other simulations or base realities out there that will continue. On the other hand, the thought that our universe will be terminated once sufficient data has been gathered is pretty sad.
DE-FACTO UPLOADING
Imagine for a moment you have a powerful AI that is aligned with your particular interests.
In areas where the AI is uncertain of your wants, it may query you as to your preferences in a given situation. But these queries will be “expensive” in the sense that you are a meat computer that runs slowly, and making copies of you is difficult.
So in order to carry out your interests at any kind of scale with speed, it will need to develop an increasingly robust model of your preferences.
Human values are context-dependent (see shard theory and other posts on this topic), so accurately modeling one’s preferences across a broad range of environments will require capturing a large portion of one’s memories and experiences, since those things affect how one responds to certain stimuli.
In the limit, this internal “model” in the AI will be an upload. So my current model is that we just get brain uploading by default if we create aligned AGI.
I’m not sure I buy that they will be more cautious in the context of an “arms race” with a foreign power. The Soviet Union took a lot of risks in their bioweapons program during the Cold War.
My impression is the CCP’s number one objective is preserving their own power over China. If they think creating ASI will help them with that, I fully expect them to pursue it (and in fact to make it their number one objective)
So in theory I think we could probably validate IQ scores up to about 150-170. I had a conversation with the guys from Riot IQ and they think that with larger sample sizes the tests can probably extrapolate out that far.
We do have at least one example of a guy whose height is +7 standard deviations above the mean actually showing up as a really extreme outlier due to additive genetic effects.
The outlier here is Shawn Bradley, a former NBA player. Study here
Granted, Shawn Bradley was chosen for this study because he is a very tall person who does not suffer from the pituitary gland dysfunction that affects many of the tallest players. But that’s actually more analogous to what we’re trying to do with gene editing: increasing additive genetic variance to get outlier predispositions.
I agree this is not enough evidence. I think there are some clever ways we can check how far additivity continues to hold outside of the normal distribution, such as checking the accuracy of predictors at different PGSes, and maybe some clever stuff in livestock. This is on our to-do list. We just haven’t had quite enough time to do it yet.
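To give a sense of what “checking the accuracy of predictors at different PGSes” could look like, here’s a rough sketch. The file name and column names are placeholders, and this is just the shape of the analysis rather than our actual pipeline:

```python
import pandas as pd

# Hypothetical input: one row per person, with a polygenic score ("pgs", assumed to be
# rescaled to phenotype units, e.g. via a regression fit in the bulk of the distribution)
# and a measured trait value ("phenotype").
df = pd.read_csv("cohort.csv")

# Bin people by polygenic score, keeping the extreme tails as their own bins.
df["pgs_bin"] = pd.qcut(df["pgs"], q=[0, 0.01, 0.1, 0.5, 0.9, 0.99, 1.0])

# If additivity keeps holding out into the tails, the observed phenotype mean in each bin
# should track the mean predicted value in that bin, even for the top and bottom 1%.
summary = df.groupby("pgs_bin", observed=True).agg(
    predicted_mean=("pgs", "mean"),
    observed_mean=("phenotype", "mean"),
    n=("phenotype", "size"),
)
print(summary)
```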
The second point is the distinction between being causal for the association observed in the data and being causal when intervening on the genome. I suspect more than half of the genes are only causal for the association. I also imagine there are a lot of genes that are indirectly causal for IQ, such as making you an attentive parent and thus lowering the probability your kid sleeps in a room with a lot of mold, which would not make the super baby smarter, but it would make the subsequent generation smarter.
There are some, but not THAT many. EA4, the largest study of educational attainment to date, estimated the indirect effects for IQ at (I believe) about 18%. We accounted for that in the second version of the model.
It’s possible that’s wrong. There is a frustratingly wide range of estimates for the indirect effect sizes for IQ in the literature. @kman can talk more about this, but I believe some of the studies showing larger indirect effects get such large numbers because they fail to account for the low test-retest reliability of the UK Biobank fluid intelligence test.
I think 0.18 is a reasonable estimate for the proportion of intelligence caused by indirect effects. But I’m open to evidence that our estimate is wrong.
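One simple way to picture the adjustment (an illustration of the idea, not necessarily the exact correction in our model): if the association estimated from population data decomposes as

$$\beta_{\text{assoc}} = \beta_{\text{direct}} + \beta_{\text{indirect}},$$

and indirect effects make up roughly 18% of the total, then only the remaining ~82% is something an edit to the child’s own genome can capture, since editing an embryo doesn’t change the parents’ behavior or environment.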
I’m being gaslit so hard right now
One data point that’s highly relevant to this conversation is that, at least in Europe, intelligence has undergone quite significant selection in just the last 9000 years. As measured in a modern environment, average IQ went from ~70 to ~100 over that time period (the Y axis here is standard deviations on a polygenic score for IQ).
The above graph is from David Reich’s paper.
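To put that in per-generation terms (a back-of-the-envelope calculation assuming roughly 25-30 years per generation, with ~30 IQ points being about 2 standard deviations):

$$\frac{\sim 2\ \mathrm{SD}}{9000\ \text{years} \,/\, (25\text{–}30\ \text{years per generation})} \approx \frac{2\ \mathrm{SD}}{300\text{–}360\ \text{generations}} \approx 0.006\ \mathrm{SD\ per\ generation}$$

A tiny per-generation shift, but sustained long enough to move the mean by about two standard deviations.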
I don’t have time to read the book “Innate”, so please let me know if there are compelling arguments I am missing, but based on what I know the “IQ-increasing variants have been exhausted” hypothesis seems pretty unlikely to be true.
There’s well over a thousand IQ points worth of variants in the human gene pool, which is not what you would expect to see if nature had exhaustively selected for all IQ increasing variants.
Unlike traits that haven’t been heavily optimized (like resistance to modern diseases)
Wait, resistance to modern diseases is actually the single most heavily selected-for thing in the last ten thousand years. There is very strong evidence of recent selection for immune system function in humans, particularly in the period following the domestication of animals.
Like, there has been so much selection for human immune function that you literally see higher read errors in genetic sequencing readouts in regions like the major histocompatibility complex (there’s that much diversity!)
but suggests the challenge may be greater than statistical models indicate, and might require understanding developmental pathways at a deeper level than just identifying associated variants.
If I have one takeaway from the last ten years of deep learning, it’s that you don’t have to have a mechanistic understanding of how your model is solving a problem to be able to improve performance. This notion that you need a deep mechanistic understanding of how genetic circuits operate or something is just not true.
What you actually need to do genetic engineering is a giant dataset and a means of editing.
Statistical methods like fine-mapping and adjusting for population-level linkage disequilibrium help, but they’re just making your gene editing more efficient by doing a better job of identifying causal variants. They don’t take it from “not working” to “working”.
Also if we look at things like horizontal gene transfer & shifting balance theory we can see these as general ways to discover hidden genetic variants in optimisation, and this just feels highly non-trivial to me? Like competing against evolution for optimal information encoding just seems really difficult a priori? (Not a geneticist so I might be completely wrong here!)
Horizontal gene transfer doesn’t happen in humans. That’s mostly something bacteria do.
There IS weird stuff in humans like viral DNA getting incorporated into the genome (I’ve seen estimates that about 10% of the human genome is composed of this stuff!), but this isn’t particularly common and the viruses often accrue mutations over time that prevent them from activating or doing anything besides just acting like junk DNA.
Occasionally these viral genes become useful and get selected on (I think the most famous example of this is some ancient viral genes that play a role in placental development), but this is just a weird quirk of our history. It’s not like we’re prevented from figuring out the role of these genes in future outcomes just because they came from viruses.
Sorry, I’ve been meaning to make an update on this for weeks now. We’re going to open source all the code we used to generate these graphs and do a full write-up of our methodology.
Kman can comment on some of the more intricate details of our methodology (he’s the one responsible for the graphs), but for now I’ll just say that there are aspects of direct vs indirect effects that we still don’t understand as well as we would like. In particular, there are a few papers showing a negative correlation between direct and indirect effects in a way that is distinct to intelligence (i.e. you don’t see the same kind of negative correlation for educational attainment or height or anything like that). It’s not clear to us at this exact moment what’s actually causing those effects and why different papers disagree on the size of their impact.
In the latest versions of the IQ gain graph we’ve made three updates:
We fixed a bug where we squared a term that should not have been squared (this resulted in a slight reduction in the effect size estimate)
We now assume only ~82% of the effect alleles are direct, further reducing benefit. Our original estimate was based on a finding that the direct effects of IQ account for ~100% of the variance using the LDSC method. Based on the result of the Lee et al Educational Attainment 4 study, I think this was too optimistic.
We now assume our predictor can explain more of the variance. This update was made after talking with one of the embryo selection companies and finding their predictor is much better than the publicly available predictor we were using.
The net result is actually a noticeable increase in efficacy of editing for IQ. I think the gain went from ~50 to ~85 assuming 500 edits.
It’s a little frustrating to find that we made the two mistakes we did. But oh well; part of the reason to make stuff like this public is so others can point out mistakes in our modeling. I think in hindsight we should have done the traditional academic thing and run the model by a few statistical geneticists before publishing. We only talked to one, and he didn’t get into enough depth for us to discover the issues we later found.
I’ve been talking to people about this today. I’ve heard from two separate sources that it’s not actually buyable right now, though I haven’t yet gotten a straight answer as to why not.
Anyone have insights into whether this is a genuine offer that could be taken up by members of the administration if they have the right attitude vs a simple power play by China to try to get more support from potential allies?
Trying to gauge how cynical to be here.