Does davidad’s uploading moonshot work?

davidad has a 10-min talk out on a proposal about which he says: “the first time I’ve seen a concrete plan that might work to get human uploads before 2040, maybe even faster, given unlimited funding”.

I think the talk is a good watch, but the dialogue below is pretty readable even if you haven’t seen it. I’m also putting some summary notes from the talk in the Appendix of this dialogue.

I think of the promise of the talk as follows. It might seem that to make the future go well, we have to either make general AI progress slower, or make alignment progress differentially faster. However, uploading seems to offer a third way: instead of making alignment researchers more productive, we “simply” run them faster. This seems similar to OpenAI’s Superalignment proposal of building an automated alignment scientist—with the key difference that we might plausibly think a human-originated upload would have a better alignment guarantee than, say, an LLM.

I decided to organise a dialogue on this proposal because it strikes me as “huge if true”, yet when I’ve discussed this with some folks I’ve found that people generally don’t seem that aware of it, and sometimes raise questions/confusions/objections that neither I nor the talk could answer.

I also invited Anders Sandberg, who co-authored the 2008 Whole Brain Emulation roadmap with Nick Bostrom, and co-organised the Foresight workshop on the topic where Davidad presented his plan, and Lisa Thiergart from MIRI, who also gave a talk at the workshop and has previously written about whole brain emulation here on LessWrong.

The dialogue ended up covering a lot of threads at the same time. I split them into six separate sections that are fairly readable independently.

Barcoding as blocker (and synchrotrons)

davidad

One important thing is “what are the reasons that this uploading plan might fail, if people actually tried it.” I think the main blocker is barcoding the transmembrane proteins. Let me first lay out the standard argument for why we “don’t need” most of the material in the brain, such as DNA methylation, genetic regulatory networks, microtubules, etc.

  1. Cognitive reaction times are fast.

  2. That means individual neuron response times need to be even faster.

  3. Too fast for chemical diffusion to go very far—only crossing the tiny synaptic gap distance (see the rough back-of-envelope check after this list).

  4. So everything that isn’t synaptic must be electrical.

  5. Electricity works in terms of potential differences (voltages), and the only place in tissue where you can sustain a voltage difference is across a membrane.

  6. Therefore, all the cognitive information processing supervenes on the processes in the membranes and synapses.
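(As a rough sanity check on step 3, here is a back-of-envelope diffusion calculation; the diffusion coefficient and cleft width are generic textbook-order values assumed purely for illustration, not numbers from the talk.)

```python
import math

# Back-of-envelope check of step 3: how far can a small molecule diffuse on
# cognitive timescales?  D and the cleft width are assumed, textbook-order values.
D = 0.5e-9            # m^2/s, rough small-molecule diffusion coefficient in tissue
cleft = 20e-9         # m, typical synaptic cleft width (~20 nm)

def diffusion_length(t_seconds: float) -> float:
    """Characteristic 1-D diffusion length sqrt(2*D*t), in metres."""
    return math.sqrt(2 * D * t_seconds)

for label, t in [("1 ms (roughly one spike)", 1e-3), ("100 ms (a reaction time)", 0.1)]:
    L = diffusion_length(t)
    print(f"{label}: ~{L * 1e6:.0f} µm, ~{L / cleft:.0f}x the synaptic cleft, "
          "but far short of the mm-to-cm scale of whole-brain signalling")
```

So diffusion comfortably crosses a ~20 nm cleft, but cannot carry signals between brain regions within a reaction time, which is what step 4 relies on.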

Now, people often make a highly unwarranted leap here, which is to say that all we need to know is the geometric shapes and connection graph of the membranes and synapses. But membranes and synapses are not made of homogeneous membraneium and synapseium. Membranes are mostly made of phospholipid bilayers, but a lot of the important work is done by transmembrane proteins: it is these molecules that translate incoming neurotransmitters into electrical signals, modulate the electrical signals into action potentials, cause action potentials to decay, and release the neurotransmitters at the axon terminals. And there are many, many different transmembrane proteins in the human nervous system. Most of them probably don’t matter or are similar enough that they can be conflated, but I bet there are at least dozens of very different kinds of transmembrane proteins (inhibitory vs excitatory, different time constants, different sensitivity to different ions) that result in different information-processing behaviours. So we need to see not just the neurons and synapses, but the quantitative densities of each of many different kinds of transmembrane proteins throughout the neuron cell membranes (including but not limited to at the synapse).

Sebastian Seung famously has a hypothesis in his book about connectomics that there are only a few hundred distinct cell types in the human brain, that these have roughly equivalent behaviours at every synapse, and that we can figure out which type each cell is just from the geometry. But I think this is unlikely given the way that learning works via synaptic plasticity: i.e. a lot of what is learned, at least in memory (as opposed to skills), is represented by the relative receptor densities at synapses. However, it may be that we are lucky and the size of the synapse alone is enough. In that case we are actually extremely lucky, because the synchrotron solution is far more viable, and that’s much faster than expansion microscopy. With expansion microscopy, it seems plausible to tag a large number of receptors by barcoding fluorophores, in a similar way to how folks are now starting to barcode cells with different combinations of fluorophores (which solves an entirely different problem, namely redundancy for axon tracing). However, the most likely way for the expansion-microscopy-based plan to fail is that we cannot use fluorescence to barcode enough transmembrane proteins.

jacobjacob

What options are there for overcoming this, and how tractable are they?

davidad

For barcoding, there are only a couple different technical avenues I’ve thought of so far. To some extent they can be combined.

  1. “serial” barcoding, in which we find a protocol for washing out a set of tags in an already-expanded sample, then diffusing in a new set of tags, and repeating this over and over without the sample degrading too much from all the washing.

  2. “parallel” barcoding, in which we conjugate several fluorophores together to make a “multicolored” fluorophore (literally like a barcode in the visible-light spectrum). This is the basic idea that is used for barcoding cells, but the chemistry is very different because in cells you can just have different concentrations of separate fluorophore molecules floating around, whereas receptors are way too small for that and you need to have one of each type of fluorophore all kind of stuck together as one molecule. Chemistry is my weakness, so I’m not very sure how plausible this is, but from a first-principles perspective it seems like it might work.
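(To see why a handful of spectrally distinguishable fluorophores could in principle cover dozens of protein types, here is a toy combinatorial count; the presence/absence readout model is an assumption made purely for illustration, not a claim about the actual chemistry.)

```python
from math import comb

# Toy count of distinguishable spectral barcodes, assuming each conjugate either
# contains or omits each of N separable fluorophores, and that only
# presence/absence can be read out (no intensity levels) -- an illustrative
# assumption only.
def n_barcodes(n_fluorophores: int) -> int:
    return 2 ** n_fluorophores - 1   # exclude the empty "no tag" combination

for n in (3, 5, 8):
    print(f"{n} fluorophores -> up to {n_barcodes(n)} distinct barcodes")

# With a fixed-weight code (each barcode uses exactly k of the N fluorophores,
# which can be easier to read out reliably), the count is C(N, k):
print(f"choosing 3 of 8 fluorophores -> {comb(8, 3)} barcodes")
```

Even under these crude assumptions the number of addressable targets grows much faster than the number of dyes, which is what makes the parallel-barcoding idea attractive if the conjugation chemistry works.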

jacobjacob

the synchrotron solution is far more viable

On synchrotrons / particle accelerator x-rays: Logan Collins estimates they would take ~1 yr of sequential effort for a whole human brain (which thus also means you could do something like a mouse brain in a few hours, or an organoid in minutes, for prototyping purposes). But I’m confused about why you suggest that as an option that’s differentially compatible with only needing lower-resolution synaptic info like size.

Could you not do expansion microscopy + synchrotron? And if you need the barcoding to get what you want, wouldn’t you need it with or without synchrotron?

davidad

So, synchrotron imaging uses X-rays, whereas expansion microscopy typically uses ~visible light fluorescence (or a bit into IR or UV). (It is possible to do expansion and then use a synchrotron, and the best current synchrotron pathway does do that, but the speed advantages are due to X-rays being able to penetrate deeply and facilitate tomography.) There are a lot of different indicator molecules that resonate at different wavelengths of ~visible light, but not a lot of different indicator atoms that resonate at different wavelengths of X-rays. And the synchrotron is monochromatic anyway, and its contrast is by transmission rather than stimulated emission. So for all those reasons, with a synchrotron, it’s really pushing the limits to get even a small number of distinct tags for different targets, let alone spectral barcoding. That’s the main tradeoff with synchrotron’s incredible potential speed.

Arenamontanus

One nice thing with working in visible light rather than synchrotron radiation is that the energies are lower, and hence there is less disruption of the molecules and structure.

davidad

Andreas Schaefer seems to be making great progress with this, see e.g. here. In conversations with him, I have updated downward on sample destruction being a blocker.

Arenamontanus

Also, there are many well-understood modalities that have been developed at visible-light wavelengths.

davidad

This is a bigger concern for me, especially with regard to spectral barcodes.

End-to-end iteration as blocker (and organoids, holistic processes, and exotica)

Arenamontanus

I think the barcoding is a likely blocker. But I think the most plausible blocker is a more diffuse problem: we do not manage to close the loop between actual, running biology, and the scanning/simulation modalities so that we can do experiments, adjust simulations to fit data, and then go back and do further experiments—including developing new scanning modalities. My worry here is that while we have smart people who are great at solving well-defined problems, the problem of setting up a research pipeline that is good at iteratively generating well-defined problems is itself less well-defined… and we do not have a brilliant track record of solving such vague problems.

davidad

Yeah, this is also something that I’m concerned that people who take on this project might fail to do by default, but I think we do have the tech to do it now: human brain organoids. We can manufacture organoids that have genetically-human neurons, at a small enough size where the entire thing can be imaged dynamically (i.e. non-dead, while the neurons are firing, actually collecting the traces of voltage and/​or calcium), and then we can slice and dice the organoid and image it with whatever static scanning modality, and see if we can reproduce the actual activity patterns based on data from the static scan. This could become a quite high-throughput pipeline for testing many aspects of the plan (system identification, computer vision, scanning microscopes, sample prep/​barcoding, etc.).

jacobjacob

What is the state of the art of “uploading an organoid”?

(I guess one issue is that we don’t have any “validation test”: we just have neural patterns, but the organoid as a whole isn’t really doing any function. Whereas, for example, if we tried uploading a worm we could test whether it remembers foraging patterns learnt pre-upload)

davidad

I’m not sure if I would know about it, but I just did a quick search and found things like “Functional neuronal circuitry and oscillatory dynamics in human brain organoids” and “Stretchable mesh microelectronics for biointegration and stimulation of human neural organoids”, but nobody is really trying to “upload an organoid”. They are already viewing the organoids as more like “emulations” on which one can do experiments in place of human brains; it is an unusual perspective to treat an organoid as more like an organism which one might try to emulate.

lisathiergart

On the general theme of organoids & shortcuts:

I’m thinking about what the key blockers are in dynamic scanning, as well as in the later static slicing & imaging. I’m probably not fully up to date on the state of the art of organoids & genetic engineering on them. However, if we are working on the level of organoids, couldn’t we plausibly directly genetically engineer (or fabrication-engineer) them to make our life easier?

  • ex. making the organoid skin transparent (visually or electrically)

  • ex. directly have the immunohistochemistry applied as the organoid grows /​ is produced

  • ex. genetically engineer them such that the synaptic components we are most interested in are fluorescent

...probably there are further such hacks that might speed up the “Physically building stuff and running experiments” rate-limit on progress

(ofc getting these techniques to work will also take time, but at appropriate scales it might be worth it)

davidad

We definitely can engineer organoids to make experiments easier. They can simply be thin enough to be optically transparent, and of course it’s easier to grow them without a skull, which is a plus.

In principle, we could also make the static scanning easier, but I’m a little suspicious of that, because in some sense the purpose of organoids in this project would be to serve as “test vectors” for the static scanning procedures that we would want to apply to non-genetically-modified human brains. Maybe it would help to get things off the ground to help along the static scanning with some genetic modification, but it might be more trouble than it’s worth.

jacobjacob

Lisa, here’s a handwavy question. Do you have an estimate, or the components of an estimate, for “how quickly could we get a ‘Chris Olah-style organoid’—that is, the smallest organoid that 1) we fully understood, in the relevant sense and also 2) told us at least something interesting on the way to whole brain emulation?”

(Also, I don’t mean that it would be an organoid of poor Chris! Rather, my impression is that his approach to interpretability is “start with the smallest and simplest possible toy system that you do not understand, then understand that really well, and then increase complexity”. This would be adapting that same methodology)

lisathiergart

Wow that’s a fun one, and I don’t quite know where to start. I’ll briefly read up on organoids a bit. [...reading time...] Okay so after a bit of reading, I don’t think I can give a great estimate but here are some thoughts:

First off, I’m not sure how well the ‘start with the smallest part and then work up to the whole’ approach works for brains. Possibly, but I think there might be critical whole brain processes we are missing (that change the picture a lot). Similarly, I’m not sure how much starting with a simpler example (say a different animal with a smaller brain) will reliably transfer, but at least some of it might. For developing the needed technologies it definitely seems helpful.

That said, starting small is always easier. If I were trying to come up with components for an estimate I’d consider:

  • looking at ​​non-human organoids

  • comparisons to how long similar efforts have been taking: C. elegans is a much simpler organism (a worm) that some groups have been trying to whole-brain emulate, and that seems to be taking longer than experts predicted (as far as I know the efforts have been ongoing for >10 years and have not yet concluded, though there has been some progress). Though, I think this is certainly affected by there not being very high investment in these projects and few people working on them.

  • trying to define what ‘fully understood’ means: perhaps being able to get within x% error on electrical activity prediction and/​or behavioural predictions (if the organism exhibits behaviors)

There’s definitely tons more to consider here, but I don’t think it makes sense for me to try to generate it.​

Arenamontanus

I’m not sure how well the ‘start with the smallest part and then work up to the whole’ approach works for brains. Possibly, but I think there might be critical whole brain processes we are missing (that change the picture a lot).

One simple example of a “holistic” property that might prove troublesome is oscillatory behavior, where you need a sufficient number of units linked in the right way to get the right kind of oscillation. The fun part is that you get oscillations almost automatically from any neural system with feedback, so distinguishing between merely natural-frequency oscillations (e.g. the gamma rhythm seems to be due to fast inhibitory interneurons, if I remember right) and the functionally important oscillations (if any!) will be tricky. Merely seeing oscillations is not enough; we need some behavioral measures.
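(A minimal illustration of the “oscillations come almost for free from feedback” point: a two-population excitatory/inhibitory rate model in the style of Wilson and Cowan. The parameter values are generic textbook-style choices assumed here for illustration, not fitted to any real circuit.)

```python
import numpy as np

# Minimal excitatory/inhibitory rate model (Wilson-Cowan style).  With mutual
# E-I feedback and suitable gains, the population rates cycle rather than
# settling to a fixed point.  All parameters are illustrative assumptions.
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0      # E->E, I->E, E->I, I->I couplings
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7   # sigmoid gains and thresholds
P, Q, tau = 1.25, 0.0, 8.0                  # external drives; time constant (ms)

def S(x, a, th):
    # Sigmoid shifted so that S(0) = 0, as in the original formulation.
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

def step(E, I, dt=0.1):
    dE = (-E + (1 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1 - I) * S(c3 * E - c4 * I + Q, a_i, th_i)) / tau
    return E + dt * dE, I + dt * dI

E, I, trace = 0.1, 0.05, []
for _ in range(20000):                      # ~2 s of simulated time at dt = 0.1 ms
    E, I = step(E, I)
    trace.append(E)

# Inspect late samples: for these parameter choices the E rate should keep
# cycling rather than converging, i.e. a sustained oscillation from feedback alone.
print([round(x, 3) for x in trace[-200::20]])
```

The point, as above: oscillations per se are cheap to produce, so on their own they are weak evidence that a model has captured anything functionally important.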

There is likely a dynamical balance between the microstructure-understanding-based “IKEA manual building” of the system and the macrostructure-understanding “carpentry” approach. Setting the overall project pipeline in motion requires being adaptive about this balance.

jacobjacob

Wouldn’t it be possible to bound the potentially relevant “whole brain processes”, by appealing to a version of Davidad’s argument above that we can neglect a lot of modalities and stuff, because they operate at a slower timescale than human cognition (as verified for example by simple introspection)?

lisathiergart

I think glia are also involved in modulating synaptic transmissions as well as the electric conductivity of a neuron (definitely the speed of conductance), so I’m not sure the speed-of-cognition argument necessarily disqualifies them and other non-synaptic components as relevant to an accurate model. This paper presents the case of glia affecting synaptic plasticity and being electrically active. Though I do think that argument seems to be valid for many components, which clearly cannot affect the electrical processes at the needed time scales.

With regard to “whole brain processes”, what I’m gesturing at is that there might be top-level control or other processes running, without whose inputs the subareas’ function cannot be accurately observed. We’d need to have an alternative way of inputting the right things into the slice or subarea sample to generate the type of activity that actually occurs in the brain. Though, it seems we can focus on the electrical components of such a top-level process, in which case I wouldn’t see an obvious conflict between the two arguments. I might even expect the electrical argument to hold more, because of the need to travel quickly between different (far apart) brain areas.

davidad

Yeah, sorry, I came across as endorsing a modification of the standard argument that rules out too many aspects of brain processes as cognition-relevant, where I only rule back in neural membrane proteins. I’m quite confident that neural membrane proteins will be a blocker (~70%) whereas I am less confident about glia (~30%) but it’s still very plausible that we need to learn something about glial dynamics in order to get a functioning human mind. However, whatever computation the glia are doing is also probably based on their own membrane proteins! And the glia are included in the volume that we have to scan anyway.

lisathiergart

Yeah, makes sense. I also am thinking that maybe there’s a way to abstract out the glia by considering their inputs as represented in the membrane proteins. However, I’m not sure whether it’s cheaper/​faster to represent the glia vs. needing to be very accurate with the membrane proteins (as well as how much of a stretch it is to assume they can be fully represented there).

Arenamontanus

My take on glia is that they may represent a sizeable pool of nonlinear computation, but it seems to run slowly compared to the active spikes of neurons. That may require lots of memory, but less compute. But the real headache is that there is relatively little research on glia (despite their very loyal loyalists!) compared to those glamorous neurons. Maybe they do represent a good early target for trying to define a research subgoal of characterizing them as completely as possible.

Arenamontanus

Overall, the “exotica” issue is always annoying—there could be an infinite number of weird modalities we are missing, or strange interactions requiring a clever insight. However, I think there is a pincer manoeuvre here where we try to constrain it by generating simulations from known neurophysiology (and nearby variations), and perhaps by designing experiments to estimate the extent of unknown degrees of freedom (much harder, but valuable). As far as I know the latter approach is still not very common; my instant association is to how statistical mechanics can estimate degrees of freedom sensitively from macroscopic properties (leading, for example, to primordial nucleosynthesis constraining elementary particle physics to a surprising degree). This is where I think a separate workshop/brainstorm/research minipipeline may be valuable for strategizing.

Using AI to speed up uploading research

jacobjacob

As AI develops over the coming years, it will speed up some kinds of research and engineering. I’m wondering: for this proposal, which parts could future AI accelerate? And perhaps more interestingly: which parts would not be easily accelerated (implying that working on those parts sooner is more important)?

davidad

The frame I would use is to sort things more like along a timeline of when AI might be able to accelerate them, rather than a binary easy/​hard distinction. (Ultimately, a friendly superintelligence would accelerate the entire thing—which is what many folks believe friendly superintelligence ought to be used for in the first place!)

  1. The easiest part to accelerate is the computer vision. In terms of timelines, this is in the rear-view mirror, with deep learning starting to substantially accelerate the processing of raw images into connectomic data in 2017.

  2. The next easiest is the modelling of the dynamics of a nervous system based on connectomic data. This is a mathematical modelling challenge, which needs a bit more structure than deep learning traditionally offers, but it’s not that far off (e.g. with multimodal hypergraph transformers).

  3. The next easiest is probably more ambitious microscopy along the lines of “computational photography”, to extract more data with fewer electrons or photons by directing the beams and lenses according to some approximation of optimal Bayesian experimental design (a toy sketch of what that means follows this list). This has the effect of accelerating things by making the imaging go faster or with less hardware.

  4. The next easiest is the engineering of the microscopes and related systems (like automated sample preparation and slicing). These are electro-optical-mechanical engineering problems, so will be harder to automate than the more domain-restricted problems above.

  5. The hardest to automate is the planning and cost-optimization of the manufacturing of the microscopes, and the management of failures and repairs and replacements of parts, etc. Of course this is still possible to automate, but it requires capabilities that are quite close to the kind of superintelligence that can maintain a robot fleet without human intervention.
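(For item 3, a toy sketch of what “directing the beams according to an approximation of optimal Bayesian experimental design” means operationally: score each candidate measurement by its expected information gain and take the best one. The binary detection model and all the numbers below are invented for illustration.)

```python
import numpy as np

# Toy Bayesian experimental design: pick the next imaging location that
# maximises expected information gain about a binary hidden variable
# ("is there a synapse here?").  Prior and noise model are invented.
rng = np.random.default_rng(0)
n_sites = 10
prior = rng.uniform(0.05, 0.95, size=n_sites)   # P(synapse) at each candidate site
p_detect, p_false = 0.9, 0.1                    # P(positive | synapse), P(positive | none)

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(p):
    """Expected entropy reduction about one site from one noisy reading."""
    p_pos = p * p_detect + (1 - p) * p_false           # marginal P(positive reading)
    post_pos = p * p_detect / p_pos                    # posterior after a positive
    post_neg = p * (1 - p_detect) / (1 - p_pos)        # posterior after a negative
    return entropy(p) - (p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg))

gains = expected_information_gain(prior)
best = int(np.argmax(gains))
print(f"image site {best} next: prior {prior[best]:.2f}, expected gain {gains[best]:.3f} bits")
```

The real version would optimize over beam paths and dose budgets rather than a toy grid, but the selection principle (spend photons or electrons where they are expected to resolve the most uncertainty) is the same.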

jacobjacob

The hardest to automate is the planning and cost-optimization of the manufacturing

An interesting related question is “if you had an artificial superintelligence suddenly appear today, what would be its ‘manufacturing overhang’? How long would it take it to build the prerequisite capacity, starting with current tools?”

Arenamontanus

Could AI assistants help us build a research pipeline? I think so. Here is a sketch: I run a bio experiment, I scan the bio system, I simulate it, I get data that disagrees. Where is the problem? Currently I would spend a lot of effort trying to check my simulator for mistakes or errors, then move on to check whether the scan data was bad, then if the bio data was bad, and then, if nothing else works, positing that maybe I am missing a modality. Now, with good AI assistants I might be able to (1) speed up each of these steps, including using multiple assistants running elaborate test suites, and (2) automate and parallelize the checking. But the final issue remains tricky: what am I missing, if I am missing something? This is where we run into human-level (or beyond) intelligence questions, requiring rather deep understanding of the field and what is plausible in the world, as well as high-level decisions on how to pursue research to test what new modalities are needed and how to scan for them. Again, good agents will make this easier, but it is still tricky work.

What I suspect could happen for this to fail is that we run into endless parameter-fiddling, optimization that might hide bad models behind really good fits to floppy biological systems, and no clear direction for what to research to resolve the question. I worry that this is a fairly natural failure mode, especially if the basic project produces enormous amounts of data that can be interpreted very differently. Statistically speaking, we want identifiability: not just of the fit to our models, but of our explanations. And this is where non-AGI agents have not yet demonstrated utility.

jacobjacob

Yeah, so, overall, it seems “physically building stuff”, “running experiments” and “figuring out what you’re missing” are where some of the main rate-limiters lie. These are the hardest-to-automate steps of the iteration loop, the ones that prevent an end-to-end AI assistant from helping us through the pipeline.

Firstly, though, it seems important to me how frequently you run into the rate limiter. If “figuring out what you’re missing” is something you do a few times a week, you could still be sped up a lot by automating the other parts of the pipeline, until you hit this problem a few times per day instead.

But secondly I’m interested in answers digging one layer down of concreteness—which part of the building stuff is hard, and which part of the experiment running is hard? For example: Anders had the idea of “the neural pretty printer”: create ground truth artificial neural networks performing a known computation >> convert them into a Hodgkin-Huxley model (or similar) >> fold that up into a 3D connectome model >> simulate the process of scanning data from that model—and then attempt to reverse engineer the whole thing. This would basically be a simulation pipeline for validating scanning set-ups.

To the extent that such simulation is possible, those particular experiments would probably not be the rate-limiting ones.
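(A stub-level sketch of the pipeline shape being described, just to make the stages explicit; every function name here is invented, and each stage would of course be a substantial project in itself.)

```python
# Stub-level sketch of the "neural pretty printer" validation loop.  All names
# and interfaces are invented for illustration; bodies are deliberately left empty.

def build_ground_truth_network(task: str):
    """Construct/train an artificial neural network performing a known computation."""
    ...

def compile_to_biophysical_model(ann):
    """Map units and weights onto Hodgkin-Huxley-style neurons and synapses."""
    ...

def embed_in_3d_tissue(model):
    """Fold the model into a plausible 3-D connectome with realistic geometry."""
    ...

def simulate_scan(tissue, modality: str):
    """Generate synthetic scan data for a chosen modality (ExM, synchrotron, ...)."""
    ...

def reconstruct(scan_data):
    """Run the interpretation/scanning pipeline under test on the fake data."""
    ...

def compare(reconstruction, ground_truth) -> float:
    """Score how much of the known computation was recovered."""
    ...

def validation_run(task="toy computation", modality="expansion-microscopy"):
    ann = build_ground_truth_network(task)
    tissue = embed_in_3d_tissue(compile_to_biophysical_model(ann))
    return compare(reconstruct(simulate_scan(tissue, modality)), ann)
```

The value of writing it down even as stubs is that the comparison at the end gives a quantitative pass/fail signal for any proposed scanning setup, without needing real tissue.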

Arenamontanus

The neural pretty printer is an example of building a test pipeline: we take a known simulatable neural system, dress it up in plausible biological garb and make fake scans of it according to the modalities we have, and then try to get our interpretation methods to reconstruct it. This is great, but eventually limited. The real research pipeline will have to contain (and generate) such mini-pipelines to ensure testability. There is likely an organoid counterpart. Both have the problem of making systems to test the scanning based on somewhat incomplete (or really incomplete!) data.

Training frontier models to predict neural activity instead of next token

jacobjacob

Lisa, you said in our opening question brainstorm:

1. Are there any possible shortcuts to consider? If there are, that seems to make this proposal even more feasible.

1a. Maybe something like, I can imagine there are large and functional structural similarities across different brain areas. If we can get an AI or other generative system to ‘fill in the gaps’ of more sparse tissue samples, and test whether the reconstructed representation is statistically indistinguishable from the dynamic data collected [(with the aim of figuring out the statics-to-dynamics map)] then we might be able to figure out what density of tissue sampling we need for full predictability. (seems plausible that we don’t need 100% tissue coverage, especially in some areas of the brain?). Note, it also seems plausible to me that one might be missing something important that could show up in a way that wasn’t picked up in the dynamic data, though that seems contingent on the quality of the dynamic data.

1b. Given large amounts of sparsity in neural coding, I wonder if there are some shortcuts around that too. (Granted though that the way the sparsity occurs seems very critical!)

Somewhat tangentially, this makes me wonder about taking this idea to its limit: what about the idea of “just” training a giant transformer to predict neural activity instead of next tokens in natural language? (at whatever level of abstraction is most suitable.) “Imitative neural learning”. I wonder if that would be within reach of the size of models people are gearing up to train, and whether it would better preserve alignment guarantees.

lisathiergart

Hmm, intuitively this seems not good. On the surface level, two reasons come to mind:

1. I suspect the element of “more reliably human-aligned” breaks or at least we have less strong reasons to believe this would be the case than if the total system also operates on the same hardware structure (so to say, it’s not actually going to be run on something carbon-based). Though I’m also seeing the remaining issue of: “if it looks like a duck (structure) and quacks like a duck (behavior), does that mean it’s a duck?”. At least we have the benefit of deep structural insight as well as many living humans as a prediction on how aligned-ness of these systems turns out in practice. (That argument holds in proportion to how faithful of an emulation we achieve.)

2. I would be really deeply interested in whether this would work. It is a bit reminiscent of the Manifund proposal Davidad and I have open currently, where the idea is to see if human brain data can improve performance on next token prediction and make it more human-preference aligned. At the same time, for the ‘imitative neural learning’ you suggest (btw I suspect it would be feasible with GPT4/5 levels, but I see the bottleneck/blocker as being able to get enough high-quality dynamic brain data), I think I’d be pretty worried that this would turn into some dangerous system (which is powerful, but not necessarily aligned).

2/a Side thought: I wonder how much such a system would in fact be constrained to human thought processes (or whether it would gradient descend into something that looks similar in its inputs and outputs, but in fact is something different, and behaves unpredictably in unseen situations). Classic Deception argument, I guess (though in this case without an implication of some kind of intentionality on the system’s side, just that it so happens because of the training process and data it had)

davidad

I basically agree with Lisa’s previous points. Training a transformer to imitate neural activity is a little bit better than training it to imitate words, because one gets more signal about the “generators” of the underlying information-processing, but misgeneralizing out-of-distribution is still a big possibility. There’s something qualitatively different that happens if you can collect data that is logically upstream of the entire physical information processing conducted by the nervous system — you can then make predictions about that information-processing deductively rather than inductively (in the sense of the problem of induction), so that whatever inductive biases (aka priors) are present in the learning-enabled components end up having no impact.

Arenamontanus

“Human alignment”: one of the nice things with human minds is that we understand roughly how they work (or at least the signs that something is seriously wrong). Even when they are not doing what they are supposed to do, the failure modes are usually human, all too human. The crux here is whether we should expect to get human-like systems, or “humanish” systems that look and behave similarly but actually work differently. The structural constraints from Whole Brain Emulation are a good reason to think more of the former, but I suspect many still worry about the latter because maybe there are fundamental differences between simulation and reality. I think this will resolve itself fairly straightforwardly since—by assumption, if this project gets anywhere close to succeeding—we can do a fair bit of experimentation on whether the causal reasons for various responses look like normal causal reasons in the bio system. My guess is that Dennett’s principle applies here: the simplest way of faking many X is to be X. But I also suspect a few more years of practice with telling when LLMs are faking rather than grokking knowledge and cognitive steps will be very useful, and perhaps essential, for developing the right kind of test suite.

How to avoid having to spend $100B and build 100,000 light-sheet microscopes

jacobjacob

“15 years and ~$500B” ain’t an easy sell. If we wanted this to be doable faster (say, <5 years), or cheaper (say, <$10B): what would have to be true? What problems would need solving? Before we finish, I am quite keen to poke around the solution landscape here, and make associated fermis and tweaks.

davidad

So, definitely the most likely way that things could go faster is that it turns out all the receptor densities are predictable from morphology (e.g. synaptic size and “cell type” as determined by cell shape and location within a brain atlas). Then we can go ahead with synchrotron (starting with organoids!) and try to develop a pipeline that infers the dynamical system from that structural data. And synchrotron is much faster than expansion.

jacobjacob

it turns out all the receptor densities are predictable from morphology

What’s the fastest way you can see to validate or falsify this? Do you have any concrete experiments in mind that you’d wish to see run if you had a magic wand?

davidad

Unfortunately, I think we need to actually see the receptor densities in order to test this proposition. So transmembrane protein barcoding still seems to me to be on the critical path in terms of the tech tree. But if this proposition turns out to be true, then you won’t need to use slow expansion microscopy when you’re actually ready to scan an entire human brain—you only need to use expansion microscopy on some samples from every brain area, in order to learn a kind of “Rosetta stone” from morphology to transmembrane protein(/​receptor) densities for each cell type.

jacobjacob

So, for the benefit of future readers (and myself!) I kind of would like to see you multiply 5 numbers together to get the output $100B (or whatever your best cost estimate is), and then do the same thing in the fortunate synchrotron world, to get a fermi of how much things would cost in that world.

davidad

Ok. I actually haven’t done this myself before. Here goes.

  1. The human central nervous system has a volume of about 1400 cm^3.

  2. When we expand it for expansion microscopy, it expands by a factor of 11 in each dimension, so that’s about 1.9 m^3. (Of course, we would slice and dice before expanding...)

  3. Light-sheet microscopes can cover a volume of about 10^4 micron^3 per second, which is about 10^-14 m^3 per second.

  4. That means we need about 1.9e14 microscope-seconds of imaging.

  5. If the deadline is 10 years, that’s about 3e8 seconds, so we need to build about 6e5 microscopes.

  6. Each microscope costs about $200k, so 6e5 * $200k is $120B. (That’s not counting all the R&D and operations surrounding the project, but building the vast array of microscopes is the expensive part.)

Pleased by how close that came out to the number I’d apparently been citing before. (Credit is due to Rob McIntyre and Michael Andregg for that number, I think.)
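(The same arithmetic, spelled out so the inputs can be tweaked; all numbers are the ones quoted in the list above, not independent estimates.)

```python
# Light-sheet fermi from the list above, with the quoted inputs made explicit.
brain_volume_m3 = 1400e-6        # ~1400 cm^3 central nervous system
expansion_factor = 11            # linear expansion factor for expansion microscopy
scope_rate_m3_per_s = 1e-14      # ~1e4 µm^3/s per light-sheet microscope
deadline_s = 10 * 3.15e7         # 10 years, in seconds
cost_per_scope = 200e3           # ~$200k per microscope

expanded_volume = brain_volume_m3 * expansion_factor ** 3   # ~1.9 m^3
scope_seconds = expanded_volume / scope_rate_m3_per_s       # ~1.9e14 microscope-seconds
n_scopes = scope_seconds / deadline_s                       # ~6e5 microscopes
print(f"expanded volume ~{expanded_volume:.1f} m^3")
print(f"microscopes needed ~{n_scopes:.1e}")
print(f"imaging hardware cost ~${n_scopes * cost_per_scope / 1e9:.0f}B")
```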

jacobjacob

Similarly, looking at Logan’s slide:

$1M for instruments, and needing 600k years. So, make 100k microscopes and run them in parallel for 6 years and then you get $100B…

davidad

That system is a bit faster than ExM, but it’s transmission electron microscopy, so you get roughly the same kind of data as synchrotron anyway (higher resolution, but probably no barcoded receptors).

davidad

Now for the synchrotron cost estimate.

  1. Synchrotron imaging has about 300nm voxel size, so to get accurate synapse sizes we would still need to do expansion to 1.9 m^3 of volume.

  2. Synchrotron imaging has a speed of about 600 s/​mm^3, but it seems uncontroversial that this may be improved by an order of magnitude with further R&D investment.

  3. That multiplies to about 3000 synchrotron-years.

  4. To finish in 10 years, we would need to build 300 synchrotron beamlines.

  5. Each synchrotron beamline costs about $10M.

  6. That’s $3B in imaging infrastructure. A bargain!
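(And the synchrotron version of the same fermi, again using only the figures from the list above; the exact outputs differ slightly from the rounded numbers quoted.)

```python
# Synchrotron fermi from the list above, with the quoted inputs made explicit.
expanded_volume_mm3 = 1.9e9      # ~1.9 m^3 of expanded tissue, in mm^3
speed_s_per_mm3 = 600 / 10       # ~600 s/mm^3 today, assuming the ~10x R&D speedup
deadline_years = 10
cost_per_beamline = 10e6         # ~$10M per beamline

beamline_years = expanded_volume_mm3 * speed_s_per_mm3 / 3.15e7   # ~3600 synchrotron-years
n_beamlines = beamline_years / deadline_years                     # ~360 beamlines
print(f"~{beamline_years:.0f} synchrotron-years -> ~{n_beamlines:.0f} beamlines")
print(f"imaging hardware cost ~${n_beamlines * cost_per_beamline / 1e9:.1f}B")
```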

jacobjacob

That’s very different from this estimate. Thoughts?

davidad

One difference off the bat: 75nm voxel size is maybe enough to get a rough connectome, but not enough to get precise estimates of synapse size. I think we’d need to go for 11x expansion. So that’s about 1 order of magnitude, but there’s still 2.5 more to account for. My guess is that this estimate is optimistic about combining multiple potential avenues to improve synchrotron performance. I do see some claims in the literature that more like 100x improvement over the current state of the art seems feasible.


Arenamontanus

What truly costs money in projects? Generally, it is salaries and running costs times time, plus the instrument/​facilities costs as a one-time cost. One key assumption in this moonshot is that there is a lot of scalability so that once the basic setup has been made it can be replicated ever cheaper (Wrightean learning, or just plain economies of scale). The faster the project runs, the lower the first factor, but the uncertainty about whether all relevant modalities have been covered will be greater. There might be a rational balance point between rushing in and likely having to redo a lot, and being slow and careful but hence getting a lot of running costs. However, from the start the difficulty is very uncertain, making the actual strategy (and hence cost) plausibly a mixture model.

Arenamontanus

The path to the 600,000 microscopes is of course to start with 6, doing the testing and system integration while generating the first data for the initial mini-pipelines for small test systems. As that proves itself one can scale up to 60 microscopes for bigger test systems and smaller brains. And then 600, 6,000 and so on.

jacobjacob

Hundreds of thousands of microscopes seem to me like an… issue. I’m curious if you have something like a “shopping list” of advances or speculative technologies that could bring that number down a lot.

Arenamontanus

Yes, that is quite the lab floor. Even one per square meter makes a ~780x780 meter space. Very Manhattan Project vibe.

jacobjacob

Google tells me Tesla Gigafactory Nevada is about 500k m^2, so about the same :P

Arenamontanus

Note though that economies of scale and learning curves can make this more economical than it seems. If we assume an experience curve with price per unit going down to 80% each doubling, 600k is 19 doublings, making the units in the last doubling cost just 1.4% of the initial unit cost.
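(The experience-curve arithmetic, for anyone who wants to vary the assumed learning rate; the 80%-per-doubling figure is the one stated above.)

```python
import math

# Experience-curve arithmetic: unit cost falls to 80% of its previous value with
# each doubling of cumulative production (Wright's law), as assumed above.
learning = 0.80
n_units = 600_000

doublings = math.log2(n_units)                  # ~19.2 doublings
final_fraction = learning ** doublings          # cost of the last units vs the first
print(f"~{doublings:.1f} doublings; final units cost ~{final_fraction:.1%} of the first unit")

# Total programme cost vs paying the initial price for every unit, using
# per-unit cost(k) = cost(1) * k ** log2(learning):
exponent = math.log2(learning)                  # ~ -0.32
total = sum(k ** exponent for k in range(1, n_units + 1))
print(f"total cost ~{total / n_units:.1%} of {n_units} units at the initial price")
```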

davidad

And you don’t need to put all the microscopes in one site. If this were ever to actually happen, presumably it would be an international consortium where there are 10-100 sites in several regions that create technician jobs in those areas (and also reduce the insane demand for 100,000 technicians all in one city).

Arenamontanus

Also, other things might boost efficiency and bring down the price.

Most obvious would be nanotechnological systems: likely sooner than people commonly assume, yet they might take long enough to arrive that their effect on this project is minor if it starts soon. Yet design-ahead for dream equipment might be a sound move.

Advanced robotics and automation are likely a major gamechanger. The whole tissue management infrastructure needs to be automated from the start, but there will likely be a need for rapid-turnaround lab experimentation too. Those AI agents are not just for doing standard lab tasks but also for designing new environments, tests, and equipment. Whether this can be integrated well into something like the CAIS infrastructure is worth investigating. But rapid AI progress also makes it hard to plan ahead.

Biotech is already throwing up lots of amazing tools. Maybe what we should look for is a way of building the ideal model organism—not just with biological barcodes or convenient brainbow coloring, but with useful hooks (in the software sense) for testing and debugging. This might also be where we want to look at counterparts to minimal genomes for minimal nervous systems.

jacobjacob

Yet design-ahead for dream equipment might be a sound move.

Thoughts on what that might look like? Are there potentially tractable paths you can imagine people starting work on today?

Arenamontanus

Imagine a “disassembly chip” that is covered with sensors characterizing a tissue surface, sequences all proteins, carbohydrate chains, and nucleic acids, and sends that back to the scanner main unit. A unit with nanoscale 3D memory storage and near-reversible quantum dot cellular automata processing. You know, the usual. This is not feasible right now, but I think at least the nanocomputers could be designed fairly well for the day we could assemble them (I have more doubts about the chip, since it requires solving a lot of contingencies… but I know clever engineers). Likely the most important design-ahead pieces may not be superfancy like these, but parts for standard microscopes or infrastructure that are normally finicky, expensive or otherwise troublesome for the project but in principle could be made much better if we only had the right nano or microtech.

So the design-ahead may be all about making careful note of every tool in the system and having people (and AI) look for ways they can be boosted. Essentially, having a proper parts list of the project itself is a powerful design criterion.

What would General Groves do?

jacobjacob

Davidad—I recognise we’re coming up on your preferred cutoff time. One other concrete question I’m kind of curious to get your take on, if you’re up for it:

“If you summon your inner General Groves, and you’re given a $1B discretionary budget today… what does your ‘first week in office’ look like? What do you set in motion concretely?” Feel free to splurge a bit on experiments that might or might not be necessary. I’m mostly interested in the exercise of crafting concrete plans.

jacobjacob

(also, I guess this might kind of be what you’re doing with ARIA, but for a different plan… and sadly a smaller budget :) )

davidad

Yes, a completely different plan, and indeed a smaller budget. In this hypothetical, I’d be looking to launch several FROs (Focused Research Organizations), basically, which means recruiting technical visionaries to lead attacks on concrete subproblems:

  1. Experimenting with serial membrane protein barcodes.

  2. Experimenting with parallel membrane protein barcodes.

  3. Build a dedicated brain-imaging synchrotron beamline to begin more frequent experiments with different approaches to stabilizing tissue, pushing the limits on barcoding, and performing R&D on these purported 1-2 OOM speed gains.

  4. Manufacturing organoids that express a diversity of human neural cell types.

  5. Just buy a handful of light-sheet microscopes for doing small-scale experiments on organoids—one for dynamic calcium imaging at cellular resolution, and several more for static expansion microscopy at subcellular resolution.

  6. Recruit for a machine-learning team that wants to tackle the problem of learning the mapping between the membrane-protein-density-annotated-hypergraph that we can get from static imaging data, and the system-of-ordinary-differential-equations that we need to simulate in order to predict dynamic activity data (a toy sketch of these two objects follows this list).

  7. Designing robotic assembly lines that manufacture light-sheet microscopes.

  8. Finish the C. elegans upload.

  9. Long-shot about magnetoencephalography.

  10. Long-shot about neural dust communicating with ultrasound.
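(For item 6, a toy sketch of the two objects being mapped between, just to pin down what “learning the mapping” would mean; all field names and the level of detail are invented for illustration.)

```python
from dataclasses import dataclass, field

# Toy sketch of the two objects in item 6.  Field names and granularity are
# invented; real representations would be far richer (and far larger).

@dataclass
class Synapse:
    pre: int                                               # presynaptic neuron id
    post: int                                              # postsynaptic neuron id
    receptor_density: dict = field(default_factory=dict)   # e.g. {"AMPA-like": 1.2e3} per µm^2

@dataclass
class StaticScan:
    """Membrane-protein-density-annotated (hyper)graph from static imaging."""
    neurons: dict = field(default_factory=dict)            # id -> type, morphology, position
    synapses: list = field(default_factory=list)           # list[Synapse]

class DynamicalModel:
    """System of ODEs (e.g. conductance-based neuron equations) to simulate."""
    def derivative(self, state, t):
        raise NotImplementedError

def fit_statics_to_dynamics_map(scans: list, recordings: list):
    """The ML problem: learn a map StaticScan -> DynamicalModel such that
    simulating the model reproduces the paired activity recordings."""
    ...
```

The organoid pipeline discussed earlier is what would supply the paired static scans and dynamic recordings that such a map could be trained and validated on.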

Some unanswered questions

Lisa and I (jacobjacob) started the dialogue by brainstorming some questions. We didn’t get around to answering all of them (and neither did we intend to). Below, I’m copying in the ones that didn’t get answered.

jacobjacob
  • What if the proposal succeeds? If the proposal works, what are the…

    • …limitations? For example, I heard Davidad say before that the resultant upload might mimic patient H.M.: unable to form new memories as a result of the simulation software not having solved synaptic plasticity

    • …risks? For example, one might be concerned that the tech tree for uploading is shared with that for unaligned neuromorphic AGI. But short timelines have potentially made this argument moot. What other important arguments are there here?

  • As a reference class for the hundreds of thousands of microscopes needed… what is the world’s current microscope-building capacity? What are relevant reference classes here for scale and complexity—looking, for example, at something like EUV lithography machines (of which I think ASML produce ~50/year currently, at like $100M each)?

lisathiergart

Some further questions I’d be interested to chat about are:

1. Are there any possible shortcuts to consider? If there are, that seems to make this proposal even more feasible.

1/​a. Maybe something like, I can imagine there are large and functional structural similarities across different brain areas. If we can get an AI or other generative system to ‘fill in the gaps’ of more sparse tissue samples, and test whether the reconstructed representation is statistically indistinguishable from say the dynamic data collected in step 4, then we might be able to figure out what density of tissue sampling we need for full predictability. (seems plausible that we don’t need 100% tissue coverage, especially in some areas of the brain?). Note, it also seems plausible to me that one might be missing something important that could show up in a way that wasn’t picked up in the dynamic data, though that seems contingent on the quality of the dynamic data.

1b. Given large amounts of sparsity in neural coding, I wonder if there are some shortcuts around that too. (Granted though that the way the sparsity occurs seems very critical!)

2. Curious to chat a bit more about what the free parameters are in step 5.

3. What might we possibly be missing and is it important? Stuff like extrasynaptic dynamics, glia cells, etc.

3/​a. Side note, if we do succeed in capturing dynamic as well as static data, this seems like an incredibly rich data set for basic neuroscience research, which in turn could provide emulation shortcuts. For example, we might be able to more accurately model the role of glia in modulating neural firing, and then be able to simulate more accurately according to whether or not glia are present (and how type of cell, glia cell size, and positioning around the neuron matters, etc).

4. Curious to think more about how to dynamically measure the brain (step 3): thin living specimens with human genomes, and then using the fluorescence paradigm. I’m considering whether there are tradeoffs in only seeing slices at a time, where we might be missing data on how the slices might communicate with each other. I wonder if it’d make sense to have multiple sources of dynamic measurements which get combined… though ofc there are some temporal challenges there, but I imagine that can be sorted out… like for example using the whole-brain ultrasound techniques developed by Sumner’s group. In the talk you mentioned neural dust and communicating out with ultrasound, that seems incredibly exciting. I know UC Berkeley and other unis were working on this somewhat, though I’m currently unsure what the main blockers here are.

Appendix: the proposal

Here’s a screenshot of notes from Davidad’s talk.