A Brief Review of Current and Near-Future Methods of Genetic Engineering

Part 1: The Case for Human Genetic Engineering

Part 2: The Case for Increasing Intelligence

My Purpose in Writing This Post

I’ve spent the last 6 months or so looking into the possibility of pursuing human genetic engineering as a means of improving human lives and increasing the probability of a desirable future. If you’d like more details about why I think improving health and intelligence is desirable, read my previous two posts.

In this post I’m going to summarize my understanding of the research on how genetic engineering will likely be done, the limitations of current techniques, and how they might be improved in the near future.

One last thing before I get started: the genetic engineering I am interested in, and which I think holds the most potential for increasing the likelihood of a good future, does not incorporate any type of selective breeding or eugenics program. Though one could theoretically increase intelligence or any other trait by banning those with undesirable traits from having children and encouraging those with desirable traits to have more, I think this is a bad approach for reasons I have summarized in another post. This post examines methods that do not require coercion to work.

A Summary of Current Techniques

Step 1: Make a yardstick

All non-coercive efforts to genetically engineer humans have one essential prerequisite: finding which genes contribute to the expression of different traits. In the modern world of genetics, this is done with a test called a Genome-Wide Association Study, or GWAS. These are truly massive studies: a typical GWAS done today has hundreds of thousands of participants. Genetic material is collected from all participants, often with a blood draw or cheek swab, and their DNA is analyzed with a machine like Illumina's MiSeq sequencer.

For cost reasons, nearly all studies today genotype only a small portion of the genome using a device called an SNP microarray. SNP stands for Single Nucleotide Polymorphism, the term geneticists use for a base pair that differs between two individuals. Because it's technically possible for any base pair to differ between humans, geneticists usually reserve the term for locations where at least 1% of study participants carry a different base. An SNP microarray is an amazing device that can interrogate specific positions in a genome without sequencing all of it, and save money by doing so. It works by attaching a large number of short single-stranded DNA probes to a substrate (basically a really flat plate), then spreading fragmented, fluorescently labeled sample DNA over the array and measuring which of the plate-attached probes find complementary partners in the sample. A laser scanner then reads out a fluorescence signal whose strength varies depending on how many base pairs of each sequence attached properly. In other words, they're measuring how well the two strands bonded to one another. There's a whole bunch of fancy signal processing after this to deal with noisy data from strands that are only partially complementary, but at the end we have data about which of the plate-attached probes had complementary matches in the sample, and therefore which variant each participant carries at each interrogated position.

Once this data is obtained, either with the SNP chip model described above or with whole-genome sequencing, a linear effect model is used to construct predictors for the influence of each SNP on the expression of a particular trait. If that sounds confusing, just understand that they’re basically modelling the expression of a trait as a linear equation like y = mx + b, where each letter in the genome is an input (an x) and the m represents the effect size of that letter on the expression of a trait. The value of the coefficient m is determined by minimizing the prediction error of a linear equation. Surprisingly, most genetically caused variance in trait expression can be explained with linear models. For those of you interested in the mathematical details, I suggest you take a look at the “Methods” section in the Wikipedia article on GWAS.
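To make the modelling concrete, here's a minimal sketch of the idea in Python: fit one tiny linear regression per SNP, then combine the estimated slopes into a polygenic score. Everything here is simulated, and the sample size, allele frequency, and effect sizes are made-up illustrative numbers, not values from any real study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 1,000 individuals genotyped at 500 SNPs.
# Genotypes are coded 0/1/2 (copies of the minor allele), the
# standard additive coding used in GWAS.
n_people, n_snps = 1000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))

# Pretend 50 of the SNPs truly affect the trait (hypothetical effects).
true_effects = np.zeros(n_snps)
true_effects[:50] = rng.normal(0, 0.2, size=50)
trait = genotypes @ true_effects + rng.normal(0, 1.0, size=n_people)

# A GWAS fits one simple linear model per SNP, y = m*x + b, and
# records the estimated slope m (the per-allele effect size).
X = genotypes - genotypes.mean(axis=0)   # center each SNP
y = trait - trait.mean()
betas = (X * y[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)

# A polygenic score is then just the weighted sum of allele counts.
scores = genotypes @ betas
r = np.corrcoef(scores, trait)[0, 1]
print(f"In-sample variance explained: {r**2:.2f}")
```

Real GWAS pipelines also add covariates (age, sex, ancestry principal components), stringent significance thresholds, and out-of-sample validation; the in-sample fit above overstates real-world predictive power.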

Unfortunately, it seems state-of-the-art methods are still not fantastic at predicting the exact expression of highly polygenic traits, with the notable exception of height. Here's a paper from late 2018 that attempted to predict height, heel bone density, and educational attainment from SNP data alone. The authors were able to explain 40% of the variance in height with genetic data, 20% of the variance in heel bone density, and 9% of the variance in educational attainment. Height has the unique property of being both highly polygenic and extremely easy to measure, so many of the ideas and techniques underlying modern GWAS were pioneered in studies of height.

More recent studies have been able to explain a higher share of the variance in educational attainment and intelligence. A study from April 2019 was able to explain 16% of the variance in educational attainment and 11% of the variance in intelligence. Still, this is a long way from capturing the 50-80% of variance in intelligence that studies indicate comes from genes.

Step 2: Generate desirable variance

Geneticists have several tools to generate desirable genetic variance. Though tools like CRISPR seem to get the most attention from the mainstream press, CRISPR is not a particularly cost-effective way to engender desirable traits in a future human. There are use-cases for CRISPR, such as when both parents have a recessive disease like sickle cell anemia, so all of their children would otherwise have the disease. In that case, CRISPR could be used to replace the disease allele with its normal counterpart, allowing the couple to conceive children without the disease. CRISPR is also a very useful therapeutic for treating those who have a Mendelian genetic condition and have already been born. For example, here's a study where researchers used CRISPR to cure one patient of beta-thalassemia and another of sickle cell anemia by extracting blood stem cells from their bone marrow, editing the cells, and reinjecting the modified cells back into the patients.

But for most traits, CRISPR is not a cost-effective tool for one simple reason: most of the traits we care about most, including the risks of heart disease, diabetes, and cancers of all types, are highly polygenic; each is influenced by tens of thousands of sites in the genome. Given CRISPR's tendency to occasionally make off-target edits, and the expense of editing so many places in the genome, I don't see this being a viable strategy for decreasing the risk of heart disease any time soon. This may change in the future, but it appears to be the case for now.

The best way to generate desirable genetic variance in the near term is by generating a large number of embryos. Sperm and eggs (referred to as "gametes" by biologists) are produced through a process called meiosis, during which matching chromosomes swap segments of DNA, so every gamete, and therefore every embryo, carries its own unique combination of parental DNA. This process generates variance. The resulting offspring will incorporate traits from both parents, but trait expression will not always match the mean of the parents. If one parent has a 10% chance of experiencing a heart attack during their lifetime and the other has a 15% chance, the offspring will not all have a 12.5% chance of having a heart attack. Instead, generally speaking, each offspring's risk will be drawn from a normal distribution with a mean of 12.5%. The expression of all heritable polygenic traits will show variance in offspring.
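As a toy illustration of the heart-attack example, here's a short simulation. The 3-percentage-point spread among sibling embryos is an assumption for illustration, not an empirical figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parental lifetime heart-attack risks (the illustrative
# numbers from the text, not real clinical estimates).
parent_a, parent_b = 0.10, 0.15
midparent = (parent_a + parent_b) / 2        # 12.5%

# Recombination during meiosis shuffles which variants each embryo
# inherits, so embryo risks scatter around the mid-parent mean.
sibling_sd = 0.03                            # assumed spread
embryo_risks = rng.normal(midparent, sibling_sd, size=10)

print(f"Mid-parent risk: {midparent:.3f}")
print(f"Embryo risks:    {np.round(embryo_risks, 3)}")
print(f"Lowest risk:     {embryo_risks.min():.3f}")
```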

Step 3: Select an embryo for implantation

Once genetic variance has been generated via the production of a number of embryos, the next step is to identify the likely trait values of each one. Embryos in the first several days after fertilization have a very interesting property: one may remove several cells and the embryo will still retain the capacity to develop into a fully functional adult organism. This regenerative capacity allows us to gain unique insight into the genetic potential of each embryo; one may perform a biopsy on each, removing several cells, then amplify and sequence the DNA from the removed cells.

We may therefore discern the genetic sequence of an embryo before we choose to implant it. This gives us the chance to tilt the odds in favor of a future child: we can reduce their risk of serious polygenic diseases like heart disease, breast cancer, and type 2 diabetes, and we can virtually eliminate serious Mendelian diseases like sickle cell anemia, cystic fibrosis, and Huntington's disease. This is done by creating an overall "score" for each embryo, which represents the embryo's expected expression of a set of traits. For example, we would give embryos at higher risk of developing coronary artery disease or type 2 diabetes a lower score. The expression of each trait is weighted in accordance with how important we believe it is, and these weights can be adjusted to reflect parental preferences with the help of a genetic counselor knowledgeable about the tests and about the diseases themselves.
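A minimal sketch of how such a score might be computed is below. The trait list, risk numbers, and weights are all hypothetical; real services use validated polygenic risk scores and clinically informed weightings.

```python
# Predicted absolute lifetime risk for each embryo, per trait
# (coronary artery disease, type 2 diabetes, breast cancer).
# All values are made up for illustration.
embryos = {
    "embryo_1": {"cad": 0.08, "t2d": 0.20, "breast_cancer": 0.06},
    "embryo_2": {"cad": 0.12, "t2d": 0.11, "breast_cancer": 0.09},
    "embryo_3": {"cad": 0.05, "t2d": 0.15, "breast_cancer": 0.11},
}

# Parental weights: how much each unit of risk reduction matters.
# These could be tuned with a genetic counselor's help.
weights = {"cad": 2.0, "t2d": 1.0, "breast_cancer": 1.5}

def score(risks):
    # Lower weighted risk -> higher score, so negate the weighted sum.
    return -sum(weights[t] * r for t, r in risks.items())

best = max(embryos, key=lambda name: score(embryos[name]))
print(best)  # → embryo_3
```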

Any trait with a strong genetic component may be selected for or against using this method. One merely needs to generate variance among a pool of embryos and develop a test that is able to capture a large enough portion of this genetically caused variance in trait expression.

What type of genetic engineering can we do today?

It may surprise some to learn that the techniques I described above are accessible right now, in some capacity, to any parents who can afford In-Vitro Fertilization. In IVF, a father provides sperm and a mother provides eggs, and a reproductive specialist uses those pools of reproductive cells to produce embryos in cell culture. These embryos may then be screened using the process I described above, and the parents, with the help of a reproductive specialist, may choose which embryo they would like to implant.

There are companies offering this service right now, among them Orchid Health and Genomic Prediction. So far as I know, no companies in Europe or the United States are offering screening for intelligence, skin color, or any other cosmetic traits. Instead, they focus exclusively on genetic predictors of health, such as heart attack risk, type 2 diabetes risk, etc. This is at least partially explained by the fact that we don't yet have great polygenic predictors of intelligence, but given that the predictors for some of the diseases they DO screen for aren't much better, the main reason seems to be avoiding the controversy that would inevitably follow.

Even with the fairly limited testing we have today, pre-implantation genetic screening can have a remarkable effect on children's future health. This is especially true for individuals with a family history of disease. As the topic of a future paper, I would like to quantify exactly how cost-effective IVF + embryo selection is for couples with no fertility issues, but my gut feeling after reading some research is that the expected reduction in medical costs alone more than pays for the cost of IVF. To give you a rough idea of how effective this technology is at reducing disease risk, here's Genomic Prediction's chart showing how much we would expect the risk of different diseases to decrease if we were to choose the higher-scoring of two embryos.

Chart showing reduction of disease risk

You can play around with the tool yourself to see how changing the number of embryos affects the expected reduction in disease risk. Though their current web interface is limited, it’s clear that even just selecting from two embryos, the expected reduction in chronic disease risk is substantial. And as the number of embryos selected from goes up, risk of disease decreases even further.
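The shape of that curve falls out of simple order statistics: sibling embryos' polygenic scores scatter around the parental mean, and selection keeps the best draw. A rough simulation, with scores in standard-deviation units and lower meaning lower predicted risk (purely illustrative, not Genomic Prediction's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_best(n_embryos, trials=100_000):
    # Each row is one IVF cycle: n_embryos sibling scores drawn
    # around the parental mean (0 here). Selection keeps the minimum.
    draws = rng.normal(0, 1, size=(trials, n_embryos))
    return draws.min(axis=1).mean()

for n in (1, 2, 5, 10):
    print(f"{n:2d} embryos: expected best score {expected_best(n):+.2f} SD")
```

Note the diminishing returns: going from one embryo to two buys more than going from five to ten.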

In fact, the reductions are so substantial that I suspect we are not too far from the day when conceiving a child through sex will be viewed similarly to how giving birth at home is viewed today: unnecessarily risky and something to be avoided if one can afford it.

If you are considering having children in the future, you may want to look into pre-implantation genetic screening. If you’re a woman you may want to consider doing this earlier rather than later, as the number of eggs that can be harvested per cycle tends to decline with age, making the process more expensive if done later. The same goes for men, though since semen extraction is rather easier and does not require a specialist, the main consideration is semen quality rather than cost.

The Future

Suppose you buy the argument that we’re likely to be able to improve human health, happiness, intelligence, etc using genetic engineering. What is the critical path to the development of this technology? How do we get there faster?

Screening for intelligence

Though there will doubtless be some people who oppose this in the near future, it seems inevitable that once predictive tests for intelligence improve enough, some lab in some country will start offering pre-implantation genetic tests for predicted intelligence. Intelligence has a strong genetic basis (heritability estimates range from 35% to 80%) and positively correlates with too many objectively important outcomes for us to ignore it for long. The moral case is strong as well: if we are able to enhance intelligence without any negative effect on other important traits, why wouldn't we do so? We already spend significant amounts of money to help our children realize their intellectual potential. Raising their potential through genetic intervention seems completely consistent with values many people already clearly express.

And once this service starts being offered anywhere, it will only be a matter of time before it’s offered pretty much everywhere. If a country does not allow embryo screening for intelligence and other desirable traits, wealthy parents will simply have their embryos genotyped, then send the data files off to a clinic in another country where they can be analyzed, and implant the ones that score the best according to some scoring system that takes intelligence into account.

And if data transfers are banned, then those parents will simply take a vacation somewhere the procedure is legal. Eventually, the huge accrued disadvantage faced by countries that don't allow the technology will create overwhelming pressure to legalize it in some capacity, and no countries other than dictatorships will be able to resist. My guess for where this will first be legalized is somewhere in Asia, possibly South Korea or China. And just as IVF itself became normalized, so will preimplantation genetic screening designed to give one's child the best life possible.

For intelligence screening, in particular, I have concluded there are two key technologies needed to enable dramatic improvements.

1. Improve tests for the genetic component of intelligence

The first is a test better able to capture genetically caused variance in intelligence. Plomin & Stumm seem to think the key missing ingredients for really good predictors of the genetic portion of intelligence are larger sample sizes and whole-genome sequencing instead of SNP-based approaches, along with possibly non-linear models of gene effects and gene-environment interactions (see the last paragraph of box 4 on page 6 from the above link). They estimate that we can capture half the genetic variance in intelligence with SNP data alone, but that we'll need whole-genome sequencing for the remainder. SNP tests usually cost around $100, while whole-genome sequencing currently costs around $300. Here's a nice graph showing the state of our tests as of 2018.

Chart showing that current tests capture 20% of the genetically caused variance in intelligence

It’s worth pointing out that this ratio of environmental influence on intelligence to genetic influence on intelligence is not fixed. If half the population had chronic exposure to lead in their drinking water and the other half had clean drinking water, the percentage of variance in intelligence explained by environmental factors would go up. Similarly, if half the population was genetically engineered to be unusually intelligent while the other half was not, the percentage of variance explained by genes would go up.

The more important thing to note here is how large an increase in intelligence we could get by simply increasing the frequency of the SNPs positively correlated with its expression. Professor Stephen Hsu has estimated that there is enough additive variance in the human population to create people with IQs over 1000 if we were to combine all the positive variants. We almost certainly wouldn't want to incorporate all of them into a single person, as some likely carry tradeoffs with health, reproductive propensity, or other things we care about that would make incorporating them a poor choice. Another concern is whether IQ, as measured by tests like progressive matrices, will continue to correlate with the things we actually care about at such extreme levels. The predictive power of today's IQ tests will likely break down if we push trait values far enough in either direction, but exactly where that point lies remains an important open question. It seems likely to me that we will be able to raise average human IQ into the high 100s without any serious downsides. We already have thousands of examples of people with IQs this high, most of whom are functional in the other ways we care about. In fact, if we get much better tests of genetic intelligence and are also able to get iterated embryo selection to work, the question of how far we can safely push trait expression will become the chief remaining one.

I’ve followed AI safety research as a hobby for the last few years, and one of the lessons I’ve taken from it is that a machine optimizing hard enough for any objective X will eventually impact some other objective Y. This will doubtless be the case with intelligence.

It’s quite difficult to estimate how hard it will be to develop better tests. One obvious step is to increase the sample size of the study. This will help detect genetic variants with smaller effect sizes on intelligence and to detect rarer variants. Another obvious step is to perform whole-genome sequencing, which would help capture rare variants that may account for currently uncaptured variance.

The best paper I’ve found on this topic is Stephen Hsu’s 2014 paper On the genetic architecture of intelligence and other quantitative traits, which estimates that a sample size of a million would be enough to capture nearly all the variance. However, since its publication, studies examining ~250k individuals were only able to explain 7-10% of the variance in cognitive performance (see the top of page 2). Furthermore, this study only identified 225 significant SNP hits, well short of the 10,000 that Hsu estimates play a role in intelligence. The relationship between sample size and discovered SNPs is not linear, but it’s not clear how much of the missing heritability is due to smaller sample size as opposed to other things. Are there more variants that influence intelligence with smaller effect sizes than Hsu predicted? Do non-linear effects play a bigger role? Is there some other confounding factor? I don’t yet know the answer to these questions.

So after a couple of days of research trying to complete this section, I am stuck with no clear answers. It is not clear to me how large of a sample size we’ll need to get an accurate measurement of intelligence, nor is it clear what additions we’ll need to basic additive models to obtain high performance on such tests.

If I had to hazard a very rough guess, I would say that a sample size of 10 million with full genome sequencing performed on every participant would probably be sufficient to capture >80% of the genetically caused variance in intelligence. Assuming $300 per genome sequenced and $100 to administer each test, this comes out to a price tag of $4 billion. Not cheap, but well within the realm of feasibility. And hopefully, economies of scale would help lower the price, at least for the genome sequencing portion.
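The arithmetic behind that price tag:

```python
# Back-of-the-envelope cost for the hypothetical study sketched above.
participants = 10_000_000
cost_per_genome = 300   # whole-genome sequencing, USD (current price)
cost_per_test = 100     # administering a cognitive test, USD (assumed)

total = participants * (cost_per_genome + cost_per_test)
print(f"${total / 1e9:.0f} billion")  # → $4 billion
```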

2. Solve Iterated Embryo Selection

The second technology needed to allow for dramatic improvements in polygenic traits such as intelligence on a short timeline is Iterated Embryo Selection, or IES. IES theoretically allows for arbitrarily large increases in trait values on a much shorter time horizon than any other near-term technology. It involves the following steps:

  1. Extract somatic cells from an organism or tissue (usually skin cells or blood cells)

  2. Revert these cells back to induced pluripotent stem cells

  3. Develop those stem cells into gametes (reproductive cells like sperm or eggs)

  4. Fertilize the gametes to create a batch of new embryos

  5. Sequence the DNA of the embryos, selecting the best of the batch

  6. Develop the selected embryos into a larger amount of tissue, like skin cells or blood cells

  7. Repeat steps 1-6

IES essentially takes the reproductive cycle from 20+ years down to about 6 months. Whereas normal IVF ends when an embryo is selected for implantation, IES takes the selected embryo through another cycle of meiosis and recombination (possibly introducing new genetic material from another group of embryos in the process). After each round of iteration, the mean trait values of the new pool of embryos will equal those of the highest-scoring embryos from the previous round. This is the true magic of iterated embryo selection; once feasible, it allows for arbitrary gains in any genetically influenced trait.
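A toy simulation of the compounding effect, under a purely additive model. The batch size and sibling spread are assumptions, and a real model would also deplete additive variance each round rather than keeping the spread fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_ies(rounds=5, batch=20, sibling_sd=1.0):
    # Trait values in standard-deviation units, starting at the
    # population mean. Each round: make a batch of embryos scattered
    # around the current mean, keep the best, repeat.
    mean = 0.0
    history = [mean]
    for _ in range(rounds):
        embryos = rng.normal(mean, sibling_sd, size=batch)
        mean = embryos.max()   # selected embryo seeds the next round
        history.append(mean)
    return history

print([round(m, 2) for m in run_ies()])
```

Because each round starts from the previous round's best embryo, gains accumulate additively across iterations instead of being limited to a single round of selection.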

So what ingredients are we still missing? Step 1 is trivial. Step 2 has been possible since 2006 when Shinya Yamanaka’s lab produced the first induced pluripotent stem cells using “Yamanaka factors”, and is in fact an active step in most stem cell therapies. Step 4 is a standard part of IVF and step 5 is becoming more common. Step 6 seems like it’s already possible given that most research into tissue engineering assumes embryonic stem cells or some other pluripotent stem cells as a starting point. As far as I can tell, the only step that has not yet been accomplished to completion in humans is step 3: differentiation of pluripotent stem cells into gametes.

We have already gotten step 3 to work in mice. In 2016, Hikabe et al. showed reconstitution of the entire female mouse germline in vitro (steps 1-3 in the list above). This process, known as in-vitro gametogenesis, is critical to all attempts at Iterated Embryo Selection. Hikabe et al. harvested a sample of skin cells from the tail of a mouse, reverted those cells to a pluripotent state using Yamanaka factors, differentiated those iPSCs into oocytes, then fertilized the resulting oocytes to create mouse embryos, which were implanted in a female mouse who gave birth to healthy pups.

There’s a really fantastic summary of current progress of this technology in humans by Dr. Sherman Silber on YouTube. We are very close. The only remaining unrealized step is getting from primordial germ cells to sperm and eggs. What makes this step so difficult is recreating the conditions in which primordial germ cells mature into spermatogonial stem cells and eggs within the human body.

Silber believes this may be easier for oocytes than for sperm. To culture PGCs into oocytes, the PGCs must develop in the presence of fetal granulosa cells. These cells are critical because they emit a set of growth factors that tell the PGCs to develop into oocytes. Silber believes we should be able to replicate these conditions by isolating the growth factors and applying them to the PGCs directly.

Sperm are trickier. According to Dr. Silber, the only method that has worked so far to mature primordial germ cells into spermatogonial stem cells is injection of PGCs into the rete testis of a prepubescent boy. The pubertal development process, as it turns out, is critical for maturing PGCs into spermatogonial stem cells, and those conditions cannot be found in adult testes.

While this type of injection works well for restoring fertility in individuals who lost it in childhood (usually due to cancer treatments), it will not work for Iterated Embryo Selection. Unfortunately, a cursory search yielded no results for in-vitro spermatogenesis via growth factors. Either research into it has not been funded, or I have simply been unable to find the published papers.

So to summarize: we are very close to making Iterated Embryo Selection possible. The missing piece is the ability to turn primordial germ cells into oocytes and sperm. Ongoing research will likely make this possible for oocytes in the next 5-10 years, but the path for spermatogenesis is less clear.

Reflections on the Value of Human Genetic Engineering

This will not be my last paper on the topic, but I wanted to take a brief moment to reflect on why I think human genetic engineering is important. Apart from the obvious near-term benefits of reducing chronic disease, I think in the long run, genetic engineering will only matter if it affects the development of transformative artificial intelligence.

I don’t remember exactly where I read this, but in another post I read on LessWrong, the author suggested that biological systems may simply become obsolete in the future because computer-based information processing systems will become better at turning energy into utility. I suspect that in the long run, this will probably be true.

I am very worried that current humans are simply incapable of aligning powerful AI with our interests due to the incredible technical complexity of the problem. My goal in pursuing a career in genetics with a focus on human reproduction is to increase human capability to deal with incredibly technical problems like those involved in creating TAI. Along the way I hope we can create a kinder, healthier society with fewer mismatches between our genes and our environment.

If some of the more pessimistic projections about the timelines to TAI are realized, my efforts in this field will have no effect. It is going to take at least 30 years for dramatically more capable humans to be able to meaningfully contribute to work in this field. Using Ajeya Cotra’s estimate of the timeline to TAI, which estimates a 50% chance of TAI by 2052, I estimate that there is at most a 50% probability that these efforts will have an impact, and a ~25% chance that they will have a large impact.

Those odds are good enough for me.