[Question] What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023?

Zvi recently asked on Twitter:

If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?

To which Eliezer replied:

Human intelligence augmentation.

And then elaborated:

No time for GM kids to grow up, so:

  • collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development

  • try to disable a human brain’s built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit

  • upload and mod the upload

  • neuralink shit but aim for 64-node clustered humans


This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and it provides the following taxonomy of applications of neurotechnology to alignment:

  1. BCIs to extract human knowledge

  2. neurotech to enhance humans

  3. understanding human value formation

  4. cyborgism

  5. whole brain emulation

  6. BCIs creating a reward signal

It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists), who provided the following analysis of these options:

From the original post: “Fig. 2 | Comparison on key variables. A. Feasibility vs. timeline. Technology clusters that were deemed less feasible were also presumed to take longer to develop. B. Impact on AI vs. timeline. Technology clusters that were seen as having a larger potential impact on AI alignment were also presumed to take longer to develop. C. Impact on AI vs. feasibility. Technology clusters that were deemed more feasible were seen to be less likely to have an impact on AI alignment. Green trend lines represent high correlations (R² ≥ 0.4318) and red represent low correlations.”

Outside of cyborgism, and with the exception of the above post, I have seen very little recent discussion of HIA. This could be because I am simply looking in the wrong places, or because the topic is rarely discussed as a legitimate AI safety agenda. The following is a list of questions I have about the topic:

  • Does anyone have a comprehensive list of organizations working on HIA or related technologies?

    • Producing something like this map for HIA might be valuable.

  • Does independent HIA research exist outside of cyborgism?

    • My intuition is that HIA research has a much higher barrier to entry than, say, mechanistic interpretability (both in cost and in background education). Does this make it unfit for independent research?

  • (If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push HIA forward for the sake of AI safety?


EDIT: “We have to Upgrade” is another recent piece on HIA with some useful discussion in the comments, where several people give their individual thoughts; see Carl Shulman’s response and Nathan Helm-Burger’s response.