Creating next-generation BCIs.
I agree that electrode-based BCIs don’t scale, but electrode BCIs are just the first generation of productized interfaces. The next generation of BCIs holds a great deal of promise, though depending on AGI timelines they may still be too far out. They’re probably still worth developing with an eye toward alignment, given that they draw on largely non-overlapping resources (funding, expertise, etc.). The butcher number and Stevenson/Kording scaling are discussed more in the comments here: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3#comments
I agree with this.
Electrode-based neurotechnologies that could conceivably be used in humans over the next 5 years have channel counts in the hundreds (e.g., Utah arrays) to thousands (e.g., Neuralink, Paradromics), or, in a generous best case, tens of thousands. In an optimistic scenario you could spike-sort several neurons per contact, but an average of one neuron per electrode is probably about right.
Stevenson and Kording plotted the number of neurons we can record simultaneously as a function of time (over ~60 years) and estimated the doubling time at ~7.4 years. Mouse brains have ~10^8 neurons and human brains have ~10^11 neurons. At that rate, assuming we’re starting with 10k neurons today (optimistic), reaching the whole mouse brain takes ~13 doublings (~100 years, i.e., around 2120), and reaching the whole human brain takes ~23 doublings (around 2195).
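A quick back-of-the-envelope check of this extrapolation. The starting year (2024) and starting count (10^4 simultaneously recorded neurons) are my assumptions; the 7.4-year doubling time and the brain-size figures come from the comment above.

```python
import math

# Assumptions (mine, not from Stevenson & Kording's data directly):
DOUBLING_TIME_YEARS = 7.4   # estimated doubling time for recorded neurons
START_YEAR = 2024
START_NEURONS = 1e4         # optimistic count of simultaneously recorded neurons today

def year_whole_brain(n_neurons: float) -> float:
    """Year when the extrapolated recording capacity reaches n_neurons."""
    doublings = math.log2(n_neurons / START_NEURONS)
    return START_YEAR + doublings * DOUBLING_TIME_YEARS

mouse = year_whole_brain(1e8)    # ~1e8 neurons in a mouse brain
human = year_whole_brain(1e11)   # ~1e11 neurons in a human brain
print(round(mouse), round(human))  # → 2122 2196
```

Each order of magnitude costs ~3.3 doublings (~25 years), which is why the human brain lands roughly 75 years after the mouse brain under this trend.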
Timelines for scaling electrodes to the whole brain are not the whole picture. Electrode technology capable of single-neuron recordings is highly invasive. As we mention in the main post, Markus Meister’s butcher number captures the ratio of neurons destroyed to neurons recorded. Today’s technologies have very high butcher numbers: the Utah array’s is ~200, and a Neuropixels probe’s (limited use in humans) is ~2.5. To scale to the whole brain (e.g., for WBE), you’d need a butcher number of approximately zero, and I don’t think electrodes can achieve that while maintaining the ability to record single neurons.
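To make the butcher-number constraint concrete, here is a schematic sketch. The butcher numbers and the ~8.6×10^10 human neuron count are standard figures; the "neurons destroyed" framing is a simplification of what is really distributed tissue damage.

```python
HUMAN_NEURONS = 8.6e10  # approximate neuron count of a human brain

def neurons_destroyed(n_recorded: float, butcher_number: float) -> float:
    """Neurons destroyed to record n_recorded neurons at a given butcher number."""
    return n_recorded * butcher_number

# Utah array (butcher number ~200): recording even 100 neurons
# destroys on the order of 20,000.
print(neurons_destroyed(100, 200))  # → 20000

# Even at a Neuropixels-like butcher number of ~2.5, recording every
# neuron would "require" destroying 2.5x more neurons than the brain has,
# which is why whole-brain recording demands a butcher number near zero.
print(neurons_destroyed(HUMAN_NEURONS, 2.5) / HUMAN_NEURONS)  # → 2.5
```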
I won’t expand on this too much here, other than to highlight the section pasted below from the main post. The next generation of neurotechnology is being actively developed and brings many new biophysical effects into the neurotech mix, with plenty of room for capabilities to grow.
Optical techniques offer high spatial and temporal resolution. Unfortunately, photons scatter in tissue, limiting recording depth. Ultrasound penetrates soft tissue efficiently and is highly sensitive to neuronal function. It’s diffraction-limited to ~100 micron resolution, but super-resolution and genetic engineering techniques are improving spatial resolution and enabling more specific functional measurements. Other approaches based on different biophysical sources of contrast (e.g., magnetics), delivery of these approaches to the brain through novel means (e.g., intravascular), or the combination of multiple techniques, may also contribute to progress in neurotech for AI alignment.
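The ~100 micron figure for ultrasound follows from the diffraction limit: resolution is bounded by roughly one wavelength. The specific operating frequency below (15 MHz, typical for functional ultrasound) and the soft-tissue sound speed are my assumptions, not figures from the post.

```python
# Rough check of the "~100 micron" diffraction limit for ultrasound.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, typical sound speed in soft tissue
FREQ_HZ = 15e6                  # 15 MHz, a common functional-ultrasound frequency

wavelength_m = SPEED_OF_SOUND_TISSUE / FREQ_HZ
wavelength_um = wavelength_m * 1e6
print(round(wavelength_um))  # → 103, consistent with the ~100 micron figure
```

Higher frequencies would improve resolution proportionally, but attenuation in tissue rises steeply with frequency, limiting depth; hence the appeal of super-resolution methods that beat the diffraction limit at a fixed frequency.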
Steering vector: “I talk about weddings constantly” − “I do not talk about weddings constantly” before attention layer 20 with coefficient +4

[Figure: average number of wedding words by position — Front: 0.70, Middle: 0.81, Back: 0.87]
@lisathiergart I’m curious if a linear increase in the number of words with position along the residual stream replicates for other prompts. Have you looked at this?
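For readers unfamiliar with the setup being discussed, here is a toy sketch of the activation-addition steering in the figure caption. Everything is a stand-in: `resid` mimics a residual-stream activation of shape [seq, d_model], and `act_pos`/`act_neg` mimic cached activations for the positive/negative prompts. No real model is involved, and real prompts of different lengths require padding/truncation that this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16

# Toy residual-stream activation entering the target layer.
resid = rng.normal(size=(seq_len, d_model))

# Stand-ins for cached activations of the two steering prompts:
# "I talk about weddings constantly" (positive) and
# "I do not talk about weddings constantly" (negative).
act_pos = rng.normal(size=(seq_len, d_model))
act_neg = rng.normal(size=(seq_len, d_model))

coefficient = 4.0  # the "+4" from the figure caption
steering_vector = act_pos - act_neg

# Add the scaled difference into the residual stream before the target
# layer (layer 20 in the post); all downstream layers then see the
# shifted activations.
steered = resid + coefficient * steering_vector
print(steered.shape)  # → (8, 16)
```

The question about word counts increasing with position is then about how this one-time additive shift propagates through later layers as generation proceeds, which this sketch does not model.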