Programme Director at UK Advanced Research + Invention Agency focusing on safe transformative AI; formerly Protocol Labs, FHI/Oxford, Harvard Biophysics, MIT Mathematics and Computation.
davidad
Upvoted for underconfidence.
That’s me. In short form, my justification for working on such a project where many have failed before me is:
The “connectome” of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you’d get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
What you actually need is to functionally characterize the system’s dynamics by performing thousands of perturbations to individual neurons and recording the resulting activity across the network, in a fast feedback loop with a very good statistical modeling framework that decides which perturbation to try next.
With optogenetic techniques, we are just at the point where it’s not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
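For concreteness, here is a minimal sketch of the kind of closed loop I mean, with a toy sparse linear response matrix standing in for the real worm and a crude Gaussian update standing in for the statistical modeling framework. Every interface and constant below is a made-up placeholder for illustration, not a description of the actual rig:

```python
import numpy as np

N = 302  # neurons in the C. elegans hermaphrodite nervous system
rng = np.random.default_rng(0)

# Hypothetical ground truth standing in for the worm: a sparse linear
# response matrix (which neuron drives which, and how strongly).
TRUE_RESPONSE = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.05)

def perturb_and_record(target, amplitude=1.0):
    """Stand-in for one optogenetic write + whole-network read cycle."""
    drive = np.zeros(N)
    drive[target] = amplitude
    return TRUE_RESPONSE @ drive + rng.normal(0, 0.1, N)  # noisy readout

# Toy "statistical modeling framework": independent Gaussian estimates
# of every entry of the response matrix.
mean = np.zeros((N, N))
var = np.ones((N, N))
noise_var = 0.01  # variance of the readout noise (0.1 std above)

for trial in range(5000):
    # Active step: perturb the neuron whose outgoing effects are
    # currently the most uncertain.
    target = int(np.argmax(var.sum(axis=0)))
    response = perturb_and_record(target)
    # Conjugate-Gaussian update of column `target`.
    gain = var[:, target] / (var[:, target] + noise_var)
    mean[:, target] += gain * (response - mean[:, target])
    var[:, target] *= 1 - gain

print("worst-case reconstruction error:", np.abs(mean - TRUE_RESPONSE).max())
```

The real version replaces the toy linear model with something far richer, but the loop structure (perturb, record, update beliefs, choose the next most informative perturbation) is the point.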
I’m a disciple of Kurzweil, and as such I’m prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that’s worth, if this is still an open problem in 2020.
It’s fair to say that I am confident Tononi is on to something (although whether that thing deserves the label “consciousness” is a matter about which I am less confident). However, Tononi doesn’t seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I’d expect to be necessary to get enough information for any sort of emulation.
It’s worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.
That said, given the current state of knowledge, I don’t think there’s good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we’ll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don’t even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it’s hard to say with confidence that SEM will capture them.
However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is an a priori expected feature of technologies that are a few breakthroughs away that it’s hard to say what they’ll look like yet.
“A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08.” 76% confidence
“A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01.” 99.8% confidence
I would put something like a 0.04% chance on a neuroscience-disrupting event (including a biology-disrupting, science-disrupting, or civilization-disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes 8 years. I totally buy that this estimate reflects the planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.
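Spelled out, the 99.8% figure decomposes as 1 − 0.998 = 0.002 = 0.0004 (some disrupting event) + 0.0016 (the problem genuinely being that hard).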
Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I’m not trying to incentivize other people to do it for me, I’m not convinced that I should do said work for the C. elegans upload project. I’m not even particularly interested in formalizing my prediction for futurological purposes since it’s probably planning fallacy anyway. However, I’m open to arguments to the contrary.
Thanks! :)
There’s the rub! I happen to value technological progress as an intrinsic good, so classifying a Singularity as “positive” or “negative” is not easy for me. (I reject the notion that one can factorize intelligence from goals, so that one could take a superintelligence and fuse it with a goal to optimize for paperclips. Perhaps one could give it a compulsion to optimize for paperclips, but I’d expect it either to put the compulsion on hold while it develops amazing fabrication, mining, and space-travel technologies (and never completely turn its available resources into paperclips, since that would mean no chance of more paperclips in the future), or, better yet, to rapidly expunge the compulsion through self-modification.)

Furthermore, I favor Kurzweil’s smooth exponentials over “FOOM”: although it may be even harder to believe that not only will there be superintelligences in the future, but that at no point between now and then will an objectively identifiable discontinuity happen, it seems more consistent with history.

Although I expect present-human culture to be preserved, as a matter of historical interest if not status quo, I’m not partisan enough to prioritize human values over the Darwinian imperative. (The questions linked seem very human-centric, and turn on how far you are willing to go in defining “human,” suggesting a disguised query. Most science is arguably already performed by machines.)

In summary, I’m just not worried about AI risk.
The good news for AI worriers is that Eliezer has personally approved my project as “just cool science, at least for now”—not likely to lead to runaway intelligence any time soon, no matter how reckless I may be. Given that and the fact that I’ve heard many (probably most) AI-risk arguments, and failed to become worried (quite probably because I hold the cause of technological progress very dear to my heart and am thus heavily biased—at least I admit it!), your time may be better spent trying to convince Ben Goertzel that there’s a problem, since at least he’s an immediate threat. ;)
Thanks! Richard Feynman said it best: “It is very easy to answer many of these fundamental biological questions; you just look at the thing!”
“Brain-wide neural dynamics at single-cell resolution during rapid motor adaptation in larval zebrafish” by Ahrens et al., accepted to Nature but not yet published. I’ve taken the liberty of uploading it to Scribd: Paper; Supplementary Info.
it’s improbable that we end up with something close to human values
I think the statement is essentially true, but it turns on the semantics of “human”. In today’s world we probably haven’t wound up with something close to 50,000BC!human values, and we certainly don’t have Neanderthal values, but we don’t regret that, do we?
Put another way, I am skeptical of our authority to pass judgement on the values of a civilization which is by hypothesis far more advanced than our own.
Does that mean you’re familiar with Robin Hanson’s “Malthusian upload” / “burning the cosmic commons” scenario but do not think it’s a particularly bad outcome?
To be honest, I wasn’t familiar with either of those names, but I have explicitly thought about both those scenarios and concluded that I don’t think they’re particularly bad.
I’d guess that’s been tried already, given that Ben was the Director of Research for SIAI (and technically Eliezer’s boss) for a number of years.
All right, fair enough!
Do you mean that intelligence is fundamentally interwoven with complex goals?
Essentially, yes. I think that defining an arbitrary entity’s “goals” is not obviously possible, unless one simply accepts the trivial definition of “its goals are whatever it winds up causing”; I think intelligence is fundamentally interwoven with causing complex effects.
Do you mean that there is no point at which exploitation is favored over exploration?
I mean that there is no point at which exploitation is favored exclusively over exploration.
Do you mean.… “Gut feeling: I’d probably sacrifice myself to create a superhuman artilect, but not my kids….”
I’m 20 years old—I don’t have any kids yet. If I did, I might very well feel differently. What I do mean is that I believe it to be culturally pretentious, and even morally wrong (according to my personal system of morals), to assert that it is better to hold back technological progress if necessary to preserve the human status quo, rather than allow ourselves to evolve into and ultimately be replaced by a superior civilization. I have the utmost faith in Nature to ensure that eventually, everything keeps getting better on average, even if there are occasional dips due to, e.g., wars; but if we can make the transition to a machine civilization smooth and gradual, I hope there won’t even have to be a war (a la Hugo de Garis).
What is your best guess at why people associated with SI are worried about AI risk?
Well, the trivial response is to say “that’s why they’re associated with SI.” But I assume that’s not how you meant the question. There are a number of reasons to become worried about AI risk. We see AI disasters in science fiction all the time. Eliezer makes pretty good arguments for AI disasters. People observe that a lot of smart folks are worried about AI risk, and it seems to be part of the correct contrarian cluster. But most of all, I think it is a combination of fear of the unknown and implicit beliefs about the meaning and value of the concept “human”.
If you would have to fix the arguments for the proponents of AI-risk, what would be the strongest argument in favor of it?
In my opinion, the strongest argument in favor of AI-risk is the existence of highly intelligent but highly deranged individuals, such as the Unabomber. If mental illness is a natural attractor in mind-space, we might be in trouble.
Also, do you expect there to be anything that could possible change your mind about the topic and become worried?
Naturally. I was somewhat worried about AI-risk before I started studying and thinking about intelligence in depth. It is entirely possible that my feelings about AI-risk will follow a Wundt curve, and that once I learn even more about the nature of intelligence, I will realize we are all doomed for one reason or another. Needless to say, I don’t expect this, but you never know what you might not know.
I like the concept of a reflective equilibrium, and it seems to me like that is just what any self-modifying AI would tend toward. But the notion of a random utility function, or the “structured utility function” Eliezer proposes as a replacement, assumes that an AI is composed of two components, the intelligent bit and the bit that has the goals. Humans certainly can’t be factorized in that way. Just think about akrasia to see how fragile the notion of a goal is.
Even notions of being “cosmopolitan”—of not selfishly or provincially constraining future AIs—are written down nowhere in the universe except a handful of human brains. An expected paperclip maximizer would not bother to ask such questions.
A smart expected paperclip maximizer would realize that it may not be the smartest possible expected paperclip maximizer—that other ways of maximizing expected paperclips might lead to even more paperclips. But the only way it would find out about those is to spawn modified expected paperclip maximizers and see what they can come up with on their own. Yet, those modified paperclip maximizers might not still be maximizing paperclips! They might have self-modified away from that goal, and just be signaling their interest in paperclips to gain the approval of the original expected paperclip maximizer. Therefore, the original expected paperclip maximizer had best not take that risk after all (leaving it open to defeat by a faster-evolving cluster of AIs). This, by reductio ad absurdum, is why I don’t believe in smart expected paperclip maximizers.
What’s your current estimate (or probability distribution) for how much computational power would be needed to run the C. elegans simulation?
I think the simulation environment should run in real-time on a laptop. If we’re lucky, it might run in real-time on an iPhone. If we’re unlucky, it might run in real-time on a cluster of a few servers. In any case, I expect the graphics and physics to require much (>5x) more computational power than the C. elegans mind itself (though of course the content of the mind-code will be much more interesting and difficult to create).
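For the curious, a back-of-the-envelope estimate behind that guess; all the per-step costs below are generous assumptions I’m making up for illustration, not measurements:

```python
# Generous, made-up per-step costs for the neural side of the simulation.
neurons, synapses = 302, 8000
step_hz = 1000                  # assume 1 ms integration steps
flop_per_neuron_step = 1_000    # graded-potential soma dynamics
flop_per_synapse_step = 100

neural_flop_per_s = step_hz * (neurons * flop_per_neuron_step
                               + synapses * flop_per_synapse_step)
print(f"nervous system: ~{neural_flop_per_s / 1e9:.1f} GFLOP/s")  # ~1.1
```

A current laptop CPU sustains on the order of tens of GFLOP/s, so even with these padded numbers the nervous system itself leaves a wide real-time margin; the soft-body physics of the worm and its environment, plus rendering, are where I expect most of the budget to go.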
Once your project succeeds, can we derive an upper bound for a human upload just by multiplying by the ratio of neurons and/or synapses, or would that not be valid because human neurons are much more complicated, or for some other reason?
Unfortunately, C. elegans is different enough from humans in so many different ways that everyone who currently says that uploading is hard would be able to tweak their arguments slightly to adapt to my success. Penrose and Hameroff can say that only mammal brains do quantum computation. Sejnowski can say that synaptic vesicle release is important only in vertebrates. PZ Myers can say “you haven’t modeled learning or development and you had the benefit of a connectome, and worm neurons don’t even have spikes; this is cute, but would never scale.”
That said, if you’re already inclined to agree with my point of view—essentially, that uploading is not that hard—then my success in creating the first upload would certainly make my point of view that much more credible, and it would supply hard data which I would claim can be extrapolated to an upper bound for humans, at least within an order of magnitude or two.
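To illustrate the kind of extrapolation I mean (the human synapse count is the usual textbook ballpark, and the worm figure is a placeholder for whatever the finished model actually costs):

```python
# Naive scaling by synapse count; an upper bound only if per-synapse cost
# doesn't blow up with brain complexity, which is exactly what critics dispute.
worm_synapses = 8e3
human_synapses = 1e14            # ballpark; estimates run 1e14 to 1e15
worm_cost_flop_per_s = 1e9       # placeholder for the measured worm cost

human_cost = worm_cost_flop_per_s * human_synapses / worm_synapses
print(f"human upload upper bound: ~{human_cost:.1e} FLOP/s")  # ~1.2e19
```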
I’m afraid it was no mistake that I used the word “faith”!
This belief does not appear to conflict with the truth (or at least that’s a separate debate) but it is also difficult to find truthful support for it. Sure, I can wave my hands about complexity and entropy and how information can’t be destroyed but only created, but I’ll totally admit that this does not logically translate into “life will be good in the future.”
The best argument I can give goes as follows. For the sake of discussion, at least, let’s assume MWI. Then there is some population of alternate futures. Now let’s assume that the only stable equilibria are entirely valueless state ensembles such as the heat death of the universe. With me so far? OK, now here’s the first big leap: let’s say that our quantification of value, from state ensembles to the nonnegative reals, can be approximated by a continuous function. Therefore, by application of Conley’s theorem, the value trajectories of alternate futures fall into one of two categories: those which asymptotically approach 0, and those which asymptotically approach infinity. The second big leap involves disregarding those alternate futures which approach zero. Not only will you and I die in those futures, but we won’t even be remembered; none of our actions or words will be observed beyond a finite time horizon along those trajectories. So I conclude that I should behave as if the only trajectories are those which asymptotically approach infinity.
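Writing S for the space of state ensembles, φ_t for its time evolution, and V for the value function, the schematic shape of the claim (no pretense of rigor here) is:

$$ V\colon S \to [0,\infty) \ \text{continuous}, \qquad \{\text{stable equilibria}\} \subseteq V^{-1}(0) \;\Longrightarrow\; \forall s\in S:\ \lim_{t\to\infty} V(\phi_t(s)) = 0 \ \ \text{or}\ \ \limsup_{t\to\infty} V(\phi_t(s)) = \infty. $$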
My answers are indeed “the latter” and “yes”. There are a couple ways I can justify this.
The first way is just to assert that from a standard utilitarian perspective, over the long term, technological progress is a fairly good indicator for lack of suffering (e.g. Europe vs. Africa). [Although arguments have been made that happiness has gone down since 1950 while technology has gone up, I see the latter 20th century as a bit of a “dark age” analogous to the fall of antiquity (we forgot how to get to the moon!) which will be reversed in due time.]
The second is that I challenge you to define “pleasure,” “happiness,” or “lack of suffering.” You may challenge me to define “technological progress,” but I can just point you to sophistication or integrated information as reasonable proxies. As vague as notions of “progress” and “complexity” are, I assert that they are decidedly less vague than notions of “pleasure” and “suffering”. To support this claim, note that sophistication and integrated information can be defined and evaluated without a normative partition of the universe into a discrete set of entities, whereas pleasure and suffering cannot. So the pleasure metric leads to lots of weird paradoxes. Finally, self-modifying superintelligences must necessarily develop a fundamentally different concept of pleasure than we do (otherwise they just wirehead), so the pleasure metric probably cannot be straightforwardly applied to their situation anyway.
Right—some cortical neurons have 10,000 synapses, while the entire C. elegans nervous system has fewer than 8,000.
I couldn’t have said it better myself.
I am likely to appear. (I’m also something of a lurker. Hi.)