I’m an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at Astera. See https://sjbyrnes.com/agi.html for a summary of my research and sorted list of writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, X/Twitter, Bluesky, Substack, LinkedIn, and more at my website.
Steven Byrnes
I don’t see why it should be possible for something which knows the physical state of my brain to be able to efficiently compute the contents of it.
I think you meant “philosophically necessary” where you wrote “possible”? If so, agreed, that’s also my take.
If an omniscient observer can extract the contents of a brain by assembling a causal model of it in un-encrypted phase space, why would it struggle to build the same causal model in encrypted phase space?
I don’t understand this part. “Causal model” is easy—if the computer is a Turing machine, then you have a causal model in terms of the head and the tape etc. You want “understanding” not “causal model”, right?
If a superintelligence were to embark on the project of “understanding” a brain, it would be awfully helpful to see the stream of sensory inputs and the motor outputs. Without encryption, you can do that: the environment is observable. Under homomorphic encryption without the key, the environmental simulation, and the brain’s interactions with it, look like random bits just like everything else. Likewise, it would be awfully helpful to be able to notice that the brain is in a similar state at times t₁ versus t₂, and/or the ways that they’re different. But under homomorphic encryption without the key, you can’t do that, I think. See what I mean?
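To spell out “look like random bits” slightly more formally (this formalization is my gloss, not something from the original exchange): the relevant security property, which standard homomorphic encryption schemes are designed to satisfy, is that no efficient observer without the key can distinguish encryptions of any two plaintexts of its choosing:

$$\Bigl|\Pr[\mathcal{A}(\mathrm{Enc}_{pk}(m_0))=1]-\Pr[\mathcal{A}(\mathrm{Enc}_{pk}(m_1))=1]\Bigr| \le \mathrm{negl}(\lambda).$$

In particular, taking $m_0$ and $m_1$ to be the brain states at $t_1$ and $t_2$, the observer can’t even tell whether the two encrypted states are nearly identical or wildly different, which is the “can’t notice similarity” point above.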
The headings “behavioral tools” and “transparency tools” both kinda assume that a mysterious AI has fallen out of the sky, and now you have to deal with it, as opposed to either thinking about, or intervening on, how the AI is trained or designed. (See Connor’s comment here.)
(Granted, you do mention “new paradigm”, but you seem to be envisioning that pretty narrowly as a transparency intervention.)
I think that’s an important omission. For example, it seems to leave out making inferences about Bob from the fact that Bob is human. That’s informative even if I’ve never met Bob (no behavioral data) and can’t read his mind (no transparency). (Sorry if I’m misunderstanding.)
In one case, a pediatrician in Pennsylvania was getting ready to inoculate a little girl with a vaccine when she suddenly went into violent seizures. Had that pediatrician been working just a little faster, he would have injected that vaccine first. In that case, imagine if the mother had been looking on as her apparently perfectly healthy daughter was injected and then suddenly went into seizures. It would certainly have been understandable—from an emotional standpoint—if that mother was convinced the vaccine caused her daughter’s seizures. Only the accident of timing prevented that particular fallacy in this case. (source)
It’s good to know when you need to “go hard”, and to be able to do so if necessary, and to assess accurately whether it’s necessary. But it often isn’t necessary, and when it isn’t, then it’s really bad to be going hard all the time, for lots of reasons including not having time to mull over the big picture and notice new things. Like how Elon Musk built SpaceX to mitigate x-risk without it ever crossing his mind that interplanetary colonization wouldn’t actually help with x-risk from AI (and then pretty much everything Elon has done about AI x-risk from that point forward made the problem worse not better). See e.g. What should you change in response to an “emergency”? And AI risk, Please don’t throw your mind away, Changing the world through slack & hobbies, etc. Oh also, pain is not the unit of effort.
I’ve never seen these abbreviations “mio., bio., trio.” before. I have always only seen M, B, T, e.g. $5B. Is it some regionalism or something?
My take is that IABIED has basically a three-part disjunctive argument:
(A) There’s no alignment plan that even works on paper.
(B) Even if there were such a plan, people would fail to get it to work on the first critical try, even if they’re being careful. (Just as people can have a plan for a rocket engine that works on paper, and do simulations and small-scale tests and component tests etc., but it will still almost definitely blow up the first time in history that somebody does a full-scale test.)
(C) People will not be careful, but rather race, skip over tests and analysis that they could and should be doing, and do something pretty close to whatever yields the most powerful system for the least money in the least time.
I think your post is mostly addressing disjunct (A), except that step (6) has a story for disjunct (B). My mental model of Eliezer & Nate would say: first of all, even if you were correct about this being a solution to (A) & (B), everyone will still die because of (C). Second of all, your plan does not in fact solve (A). I think Eliezer & Nate would disagree most strongly with your step (3); see their answer to “Aren’t developers regularly making their AIs nice and safe and obedient?”. Third of all, your plan does not solve (B) either, because one human-level system is quite different from a civilization of them in lots of ways, and lots of new things can go wrong, e.g. the civilization might create a different and much more powerful egregiously misaligned ASI, just as actual humans seem likely to do right now. (See also other comments on (B).)
↑ That was my mental model of Eliezer & Nate. FWIW, my own take is: I agree with them on (B) & (C). And as for (A), I think LLMs won’t scale to AGI, and the different paradigm that will scale to AGI is even worse for step (3), i.e. existing concrete plans will lead to egregious misalignment.
Thanks! My perspective for this kind of thing is: if there’s some phenomenon in psychology or neuroscience, I’m not usually in the situation where there are multiple incompatible hypotheses that would plausibly explain that phenomenon, and we’d like to know which of them is true. Rather, I’m usually in the situation where I have zero hypotheses that would plausibly explain the phenomenon, and I’m trying to get up to at least one.
There are so many constraints from what I (think I) know about neuroscience, and so many constraints from what I (think I) know about algorithms, and so many constraints from what I (think I) know about everyday life, that coming up with any hypothesis at all that can’t be easily refuted from an armchair is a huge challenge. And generally when I find even one such hypothesis, I wind up in the long term ultimately feeling like it’s almost definitely true, at least in the big picture. (Sometimes there are fine details that can’t be pinned down without further experiments.)
It’s interesting that my outlook here is so different from other people in science, who often (not always) feel like the default should be to have multiple hypotheses from the get-go for any given phenomenon. Why the difference? Part of it might be the kinds of questions that I’m interested in. But part of it, as above, is that I have lots of very strong opinions about the brain, which are super constraining and thus rule out almost everything. I think this is much more true for me than almost anyone else in neuroscience, including professionals. (Here’s one of many example constraints that I demand all my hypotheses satisfy.)
So anyway, the first goal is to get up to even one nuts-and-bolts hypothesis, which would explain the phenomenon, and which is specific enough that I can break it down all the way down to algorithmic pseudocode, and then even further to how that pseudocode is implemented by the cortical microstructure and thalamus loops and so on, and that also isn’t immediately ruled out by what we already know from our armchairs and the existing literature. So that’s what I’m trying to do here, and it’s especially great when readers point out that nope, my hypothesis is in fact already in contradiction to known psychology or neuroscience, or to their everyday experience. And then I go back to the drawing board. :)
You’re replying to Linda’s comment, which was mainly referring to a paragraph that I deleted shortly after posting this a year ago. The current relevant text (as in my other comment) is:
As above, the homunculus is definitionally the thing that carries “vitalistic force”, and that does the “wanting”, and that does any acts that we describe as “acts of free will”. Beyond that, I don’t have strong opinions. Is the homunculus the same as the whole “self”, or is the homunculus only one part of a broader “self”? No opinion. Different people probably conceptualize themselves rather differently anyway.
To me, this seems like something everyone should be able to relate to, apart from the PNSE thing in Post 6. For example, if your intuitions include the idea of willpower, then I think your intuitions have to also include some, umm, noun, that is exercising that willpower.
But you find it weird and unrelatable? Or was it a different part of the post that left you feeling puzzled when you read it? (If so, maybe I can reword that part.) Thanks.
Cool! Oops, I probably just skimmed too fast and incorrectly pattern-matched you to other people I’ve talked to about this topic in the past. :-P
It’s true that thermodynamics was historically invented before statistical mechanics, and if you find stat-mech-free presentations of thermodynamics to be pedagogically helpful, then cool, whatever works for you. But at the same time, I hope we can agree that the stat-mech level is the actual truth of what’s going on, and that the laws of thermodynamics are not axioms but rather derivable from the fundamental physical laws of the universe (particle physics etc.) via statistical mechanics. If you find the probabilistic definition of entropy and temperature etc. to be unintuitive in the context of steam engines, then I’m sorry but you’re not done learning thermodynamics yet, you still have work ahead of you. You can’t just call it a day because you have an intuitive feel for stat-mech and also separately have an intuitive feel for thermodynamics; you’re not done until those two bundles of intuitions are deeply unified and interlinking. [Or maybe you’re already there and I’m misreading this post? If so, sorry & congrats :) ]
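For concreteness, and as my own gloss rather than anything from the post I’m replying to: the stat-mech objects that the thermodynamic quantities bottom out in are things like

$$S = -k_B \sum_i p_i \ln p_i, \qquad \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N},$$

and then, e.g., the second law stops being a separate axiom and becomes a statement about the overwhelming multiplicity of high-entropy macrostates.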
Hmm, my usage seems more like: “I think that…” means the reader/listener might disagree with me, because maybe I’m wrong and the reader is right. (Or maybe it’s subjective.) Meanwhile, “I claim that…” also means the reader might disagree with me, but if they do, it’s only because I haven’t explained myself (yet), and the reader will sooner or later come to see that I’m totally right. So “I think” really is pretty centrally about confidence levels. I think :)
To me, this doesn’t sound related to “anxiety” per se, instead it sounds like you react very strongly to negative situations (especially negative social situations) and thus go out of your way to avoid even a small chance of encountering such a situation. I’m definitely like that (to some extent). I sometimes call it “neuroticism”, although the term “neuroticism” is not great either, it encompasses lots of different things, not all of which describe me.
Like, imagine there’s an Activity X (say, inviting an acquaintance to dinner), and it involves “carrots” (something can go well and that feels rewarding), and also “sticks” (something can go badly and that feels unpleasant). For some people, their psychological makeup is such that the sticks are always especially painful (they have sharp thorns, so to speak). Those people will (quite reasonably) choose not to partake in Activity X, even if most other people would, at least on the margin. This is very sensible, it’s just cost-benefit analysis. It needn’t have anything to do with “anxiety”. It can feel like “no thanks, I don’t like Activity X so I choose not to do it”.
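If it helps, here’s that cost-benefit point as a cartoon formula (the symbols are just my illustrative shorthand, not anything rigorous):

$$\mathbb{E}[\text{value of Activity X}] \approx p_{\text{good}} \cdot (\text{carrot}) - p_{\text{bad}} \cdot (\text{stick}),$$

so two people can agree on $p_{\text{good}}$ and $p_{\text{bad}}$ and still sensibly land on opposite decisions if one of them experiences a much bigger “stick” term.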
(Sorry if I’m way off-base, you can tell me if this doesn’t resonate with your experience.)
That was a scary but also fun read, thanks for sharing and glad you’re doing OK ❤️
(That drawing of the Dunning-Kruger Effect is a popular misconception—there was a post last week on that, see also here.)
I think there’s “if you have a hammer, everything looks like a nail” stuff going on. Economists spend a lot of time thinking about labor automation, so they often treat AGI as if it will be just another form of labor automation. LLM & CS people spend a lot of time thinking about the LLMs of 2025, so they often treat AGI as if it will be just like the LLMs of 2025. Military people spend a lot of time thinking about weapons, so they often treat AGI as if it will be just another weapon. Etc.
So yeah, this post happens to be targeted at economists, but that’s not because economists are uniquely blameworthy, or anything like that.
The “multiple stage fallacy fallacy” is the fallacious idea that equations like

$$P(A \wedge B \wedge C \wedge D) = P(A)\,P(B \mid A)\,P(C \mid A \wedge B)\,P(D \mid A \wedge B \wedge C)$$

are false, when in fact they are true. :-P
I think Nate here & Eliezer here are pointing to something real, but the problem is not multiple stages per se but rather (1) “treating stages as required when in fact they’re optional” and/or (2) “failing to properly condition on the conditions and as a result giving underconfident numbers”. For example, if A & B & C have all already come true in some possible universe, then that’s a universe where maybe you have learned something important and updated your beliefs, and you need to imagine yourself in that universe before you try to evaluate $P(D \mid A \wedge B \wedge C)$.
Of course, that paragraph is just parroting what Eliezer & Nate wrote, if you read what they wrote. But I think other people on LW have too often skipped over the text and just latched onto the name “multiple stage fallacy” instead of drilling down to the actual mistake.
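As a toy illustration of failure mode (2), with made-up numbers of my own: suppose A, B, C, D all hold exactly when some single underlying hypothesis H holds, with $P(H)=0.5$. Then

$$P(A \wedge B \wedge C \wedge D) = P(A)\,P(B\mid A)\,P(C\mid A\wedge B)\,P(D\mid A\wedge B\wedge C) = 0.5 \cdot 1 \cdot 1 \cdot 1 = 0.5,$$

whereas someone who forgets to condition and multiplies four 50%’s gets 6.25%, an 8× underestimate.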
In the case at hand, I don’t have much opinion in the absence of more details about the AI training approach etc., but here are a couple of general comments.
If an AI development team notices Problem A and fixes it, and then notices Problem B and fixes it, and then notices Problem C and fixes it, we should expect that it’s less likely, not more likely, that this same team will preempt Problem D before Problem D actually occurs.
Conversely, if the team has a track record of preempting every problem before it arises (when the problems are low-stakes), then we can have incrementally more hope that they will also preempt high-stakes problems.
Likewise, if there simply are no low-stakes problems to preempt or respond to, because it’s a kind of system that just automatically by its nature has no problems in the first place, then we can feel generically incrementally better about there not being high-stakes problems.
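Here’s a toy Bayesian version of the “incrementally more hope” claim above, with purely made-up numbers (the hypothesis names and probabilities are illustrative assumptions, not anything I’d defend):

```python
# Toy Bayesian update on a team's "safety culture" from its low-stakes track record.
# All numbers are hypothetical, purely for illustration.

# Prior: is the team "proactive" (preempts problems) or "reactive" (fixes them after the fact)?
prior_proactive = 0.3

# Likelihood of observing "all low-stakes problems were preempted before they occurred":
p_obs_given_proactive = 0.8
p_obs_given_reactive = 0.1

# Bayes' rule
posterior_proactive = (p_obs_given_proactive * prior_proactive) / (
    p_obs_given_proactive * prior_proactive
    + p_obs_given_reactive * (1 - prior_proactive)
)

# Chance the *high-stakes* problem gets preempted, assuming (hypothetically) that
# proactive teams preempt it 90% of the time and reactive teams 20% of the time.
p_preempt_high_stakes = 0.9 * posterior_proactive + 0.2 * (1 - posterior_proactive)

print(f"P(proactive | track record) = {posterior_proactive:.2f}")        # ≈ 0.77
print(f"P(high-stakes problem preempted) = {p_preempt_high_stakes:.2f}")  # ≈ 0.74
```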
Those comments are all generic, and readers are now free to argue with each other about how they apply to present and future AI. :)
Excerpts from my neuroscience to-do list
I genuinely appreciate the sanity-check and the vote of confidence here!
Uhh, well, technically I wrote that sentence as a conditional, and technically I didn’t say whether or not the condition applied to you-in-particular.
…I hope you have good judgment! For that matter, I hope I myself have good judgment!! Hard to know though. ¯\_(ツ)_/¯
I noticed that peeing is rewarding? What the hell?! How did enough of my (human) non-ancestors die because peeing wasn’t rewarding enough? The answer is they weren’t homo sapiens or hominids at all.
I would split it into two questions:
(1) what’s the evolutionary benefit of peeing promptly?
(2) In general, if it’s evolutionarily beneficial to do X, why does the brain implement desire-to-X in the form of both “carrots” and “sticks”, as opposed to just one or just the other? Needing to pee is unpleasant (stick) AND peeing is then pleasant (carrot). Being hungry is unpleasant (stick) AND eating is then pleasant (carrot). Etc.
I do think there’s a generic answer to (2) in terms of learning algorithms etc., but no need to get into the details here.
As for (1), you’re wasting energy by carrying around extra weight of urine. Maybe there are other factors too. (Eventually of course you risk incontinence or injury or even death.) Yes I think it’s totally possible that our hominin ancestors had extra counterfactual children by wasting 0.1% less energy or whatever. Energy is important, and every little bit helps.
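For a back-of-envelope sanity check on that number (round figures of my own, nothing precise): a full-ish bladder is maybe 0.3-0.5 kg on a ~50 kg body, i.e.

$$\frac{0.3\text{-}0.5\ \mathrm{kg}}{\sim 50\ \mathrm{kg}} \approx 0.6\text{-}1\%$$

extra mass to haul around. The metabolic cost of walking scales very roughly linearly with carried mass, and locomotion is only a fraction of the daily energy budget, so carrying that load for part of the day plausibly costs something on the order of 0.1% of daily energy, consistent with the figure above.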
There are about ~100-200 different neurotransmitters our brains use. I was surprised to find out that I could not find a single neurotransmitter that is not shared between humans and mice (let me know if you can find one, though).
Like you said, truly new neurotransmitters are rare. For example, oxytocin and vasopressin split off from a common ancestor in a gene duplication event 500Mya, and the ancestral form has homologues in octopuses and insects etc. OTOH, even if mice and humans have homologous neurotransmitters, they presumably differ by at least a few mutations; they’re not exactly the same. (Separately, their functional effects are sometimes quite different! For example, eating induces oxytocin release in rodents but vasopressin release in humans.)
Anyway, looking into recent evolutionary changes to neurotransmitters (and especially neuropeptides) is an interesting idea (thanks!). I found this paper comparing endocrine systems of humans and chimps. It claims (among other things) that GNRH2 and UCN2 are protein-coding genes in humans but inactive (“pseudogenes”) in chimps. If true, what does that imply? Beats me. It does not seem to have any straightforward interpretation that I can see. Oh well.
Thanks for the advice. I have now added at least the basic template, for the benefit of readers who don’t already have it memorized. I will leave it to the reader to imagine the curves moving around—I don’t want to add too much length and busy-ness.
Another data point: when I turned my Intro to Brain-Like AGI Safety blog post series into a PDF [via typst—I hired someone to do all the hard work of writing conversion scripts etc.], arXiv rejected it, so I put it on OSF instead. I’m reluctant to speculate on what arXiv didn’t like about it (they didn’t say). Some possibilities are: it seems out-of-place on arXiv in terms of formatting (e.g. single-column, not LaTeX), AND tone (casual, with some funny pictures), AND content (not too math-y, interdisciplinary in a weird way). Probably one or more of those three things. But whatever, OSF seems fine.