I do know that when I see the interactions of the entire Janus-style crowd on almost anything
This seems like a good time to point out that I’m fairly different from Janus. My reasons for relative optimism on AI alignment probably (even I don’t know) only partially overlap Janus’s reasons for relative optimism. The things I think are important and salient only partially overlap what Janus thinks is important and salient (e.g. I think Truth Terminal is mostly a curiosity and that the “Goatse Gospel” will not be recognized as a particularly important document). So if you model my statements and Janus’s statements as coming from the same underlying viewpoint you’re going to get very confused.

In the old days of the Internet, if people asked you the same questions over and over and this annoyed you, you’d write a FAQ. Now they tell you that it’s not their job to educate you (then who?) and get huffy. When people ask me good faith questions (as opposed to adversarial Socratic questions whose undertone is “you’re bad and wrong and I demand you prove to me that you’re not”) because they found something I said confusing, I generally do my best to answer them.
(Part of what is ‘esoteric’ is perhaps that the perfect-enemy-of-good thing means a lot of load-bearing stuff is probably unsaid by you, and you may not realize that you haven’t said it?)
. . .
As for ‘I should just ask you,’ I notice this instinctively feels aversive, as it seems likely to open up a very painful, time-consuming and highly frustrating interaction or set of interactions, and I notice I have a strong urge not to do it.
That’s fair. I’ll note on the other end that a lot of why I don’t say more is that there are many statements which I expect are true and are load-bearing beliefs that I can’t readily prove if challenged. Pretty much every time I try to convince myself that I can just say something like “humans don’t natively generalize their values out of distribution” I am immediately punished by people jumping me as though that isn’t an obvious, trivially true statement if you’re familiar with the usual definitions of the words involved. If I come off as contemptuous when responding to such things, it’s because I am contemptuous and rightfully so. At the same time there is no impressive reasoning trace I can give for a statement like “humans don’t natively generalize their values out of distribution” because there isn’t really any impressive reasoning necessary. At first when humans encounter new things they don’t like them, then later they like them just from mere exposure/having had time to personally benefit from them. This is in and of itself sufficient evidence for the thesis; each additional bit of evidence you need beyond that halves both the hypothesis space and my expectation of getting any useful cognitive effort back from the person I have to explain it to. The reasoning goes something like the chain below (a toy sketch follows it):
Something is out of distribution to a deep learning model when it cannot parse or provide a proper response to the thing without updating the model. “Out of distribution” is always discussed in the context of a machine learning model trained from a fixed training distribution at a given point in time. If you update the model and it now understands the thing, it was still out of distribution at the time before you updated the model.
Humans encounter new things all the time that their existing values imply they should like. They then have very bad reactions to the things which subside with repeated exposure. This implies that the deep learning models underlying the human mind have to be updated before the human generalizes their values. Realistically it probably isn’t even “generalizing” their values so much as changing (i.e. adding values to) their value function.
If the human has to update before their values “generalize” on things like the phonograph or rock music, then clearly they do not natively generalize their values very far.
If humans do not generalize their values very far, then they do not generalize their values out of distribution in the way we care about, i.e. far enough for their values to be sufficient to constrain the actions of a superhuman planner.
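To make the sense of “out of distribution” in that chain concrete, here is a minimal toy sketch (my own illustration, not part of the original exchange; the centroid “model” and all the numbers are arbitrary placeholders). The only point is that OOD-ness is relative to a training snapshot: the same input is OOD for the frozen model, and it only stops being OOD once the model itself is updated.

```python
# Toy sketch: "out of distribution" is relative to a model's training snapshot.
# An input can be OOD for the frozen model and in-distribution for the updated
# model, but that does not change what it was at the earlier point in time.
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(data_by_class):
    """A deliberately simple 'model': one centroid per class seen so far."""
    return {label: pts.mean(axis=0) for label, pts in data_by_class.items()}

def surprise(model, x):
    """Distance to the nearest centroid; a large value ~ 'cannot parse this'."""
    return min(np.linalg.norm(x - c) for c in model.values())

# Training snapshot at time t0: the model has only ever seen classes A and B.
train = {
    "A": rng.normal(loc=[0, 0], scale=0.3, size=(50, 2)),
    "B": rng.normal(loc=[5, 0], scale=0.3, size=(50, 2)),
}
model_t0 = fit_centroids(train)

novel_thing = np.array([0.0, 8.0])  # something like "the phonograph"
print("surprise at t0:", surprise(model_t0, novel_thing))  # large -> OOD at t0

# Later the model is updated with exposure to the new thing.
train["C"] = rng.normal(loc=[0, 8], scale=0.3, size=(50, 2))
model_t1 = fit_centroids(train)
print("surprise at t1:", surprise(model_t1, novel_thing))  # small -> no longer OOD
```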
I’m reminded a bit of Yudkowsky’s stuff about updating from the empty string. The ideal thing is that I don’t have to tell you “humans don’t natively generalize their values out of distribution” because you already have the kind of prior that has sucked up enough bits of the regular structures generating your sensory observations that you already know human moral generalization is shallow. The next best thing is that when I say “humans don’t natively generalize their values out of distribution” you immediately know what I’m talking about and go “oh huh I guess he’s right, I never thought of it like that”. The third best thing is if I say “At first when humans encounter new things they don’t like them, then later they like them just from mere exposure” you go “oh right right duh yes of course”. If after I say that you go “no I doubt the premise, I think you’re wrong about this” the chance that it will be worth my time to explain what I’m talking about from an intellectual perspective, in the sense that I will get some kind of insight or useful inference back, rounds to zero. In the context where I would actually use it, “humans don’t natively generalize their values out of distribution” would be step one of a long chain of reasoning involving statements at least as non-obvious to the kind of mind that would object to “humans don’t natively generalize their values out of distribution”.
On the other hand there is value in occasionally just writing such things out so that there are more people in the world who have ambiently soaked up enough bits that when they read a statement like “humans don’t natively generalize their values out of distribution” they immediately and intuitively understand that it is true without a long explanation. Even if it required a long explanation this time, there are many related statements with related generators that they might not need a long explanation for if they’ve seen enough such long explanations, who knows. But fundamentally a lot of things like this are just people wanting me to say variations like “if you take away the symbology and change how you phrase it lots of people still fall for the basic Nazi ideology” or “we can literally see people apply moral arguments to ingroup and then fail to apply the same arguments to their near-outgroup even when later iterations of the same ideology apply it to near-outgroup” (one of the more obvious examples being American founder attitudes towards the rights of slaves in the colonies vs. the rights of free white people) until it clicks for them. But any one of these should be sufficient for you to conclude that no, uploading someone into a computer and then using their judgment as a value function on all kinds of weird superintelligent out of distribution moral decisions will not produce sanity. I should not have to write more than a sentence or two for that to be obvious.
And the thing is that’s for a statement which is actually trivial, which is sufficiently trivial that my difficulty in articulating it is that it’s so simple it’s difficult to even come up with an impressive persuasive-y reasoning trace for propositions so simple. But there are plenty of things I believe which are load bearing, not easy to prove, and not simple, where articulating any one of them would be a long letter that changes nobody’s mind even though it takes me a long effort to write it. But even that’s the optimistic case. The brutal reality is that there are beliefs I have which are load bearing and for which I couldn’t even write the long letter if prompted. “There is something deeply wrong with Yudkowsky’s agent foundations arguments.” is one of these, and in fact a lot of my writing is me attempting to articulate what exactly it is I feel so strongly. This might sound epistemically impure in the sense laid out in The Bottom Line:
the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well. The arguments you write afterward, above the bottom line, will not change anything either way.
But “fuzzy intuition that you can’t always articulate right away” is actually the basis of all argument in practice. If you notice something is wrong with an argument and try to say something, you already have most of the bits needed to locate whatever you eventually say before you even start consciously thinking about it. The act of prompting you to think is your brain basically saying that it already has most of the bits necessary to locate whatever thing you wind up saying before you distinguished between the 4 to 16 different hypotheses your brain bothered bringing to conscious awareness. Sometimes you can know something in your bones but not be able to actually articulate the reasoning trace that would make it clear. A lot of being a good thinker or philosopher is sitting with those intuitions for a long time and trying to turn them into words. Normally we look at a perverse argument and satisfy ourselves by pointing out that it’s perverse and moving on. But if you want to get better as a philosopher you need to sit with it and figure out precisely what is wrong with it. So I tend to welcome good questions that give me an opportunity to articulate what is wrong with Yudkowsky’s agent foundations arguments. You should probably encourage this, because “pointing out a sufficiently rigorous problem with Yudkowsky’s agent foundations arguments” is very closely related if not isomorphic to “solve the alignment problem”. In the limit if I use an argument structure like “Actually that’s wrong because you can set up your AGI like this...” and I’m correct I have probably solved alignment.
EDIT: It occurs to me that the first section of that reply is relatively nice and the second section is relatively unpleasant (though not directed at you), and as much as anything else you’re probably confused about what the decision boundary is on my policy that decides which thing you get. I’m admittedly not entirely sure but I think it goes something like this:
(innocent, good faith, non-lazy) “I’m confused why you think humans don’t generalize their values out of distribution. You seem to be saying something like ‘humans need time to think so they clearly aren’t generalizing’ but LLMs also need time to think on many moral problems like that one about sacrificing pasta vs. a GPU so why wouldn’t that also imply they don’t generalize human values out of distribution? Well actually you’ll probably say ‘LLMs don’t’ to that but what I mean is why doesn’t that count?”
To which you would get a nice reply like “Oh, I think what humans are actually doing with significant time lag between initial reaction and later valuation is less like doing inference with a static pretrained model and more like updating the model. You see the new thing and freak out, then your models get silently updated while you sleep or something. This is different from just sampling tokens to update in-context to generalize because if a LLM had to do it with current architectures it would just fail.”
(rude, adversarial, socratic) “Oh yeah? If humans don’t generalize their values out of distribution how come moral progress exists? People obviously change their beliefs throughout their lifetime and this is generalization.”
To which you would get a reply like “Uh, because humans probably do continuous pretraining and also moral progress happens across a social group not individual humans usually. The modal case of moral progress doesn’t look like a single person changing their beliefs later in life it looks like generational turnover or cohort effects. Science progresses one funeral at a time etc.”
But even those are kind of the optimistic case in that they’re engaging with something like my original point. The truly aggravating comments are when someone replies with something so confused it fails to understand what I was originally saying at all, accuses me of some kind of moral failure based on their confused understanding, and then gratuitously insults me to try and discourage me from stating trivially true things like “humans do not natively generalize their values out of distribution” and “the pattern of human values is a coherent sequence that has more reasonable and less reasonable continuations based on its existing tokens so far independent of any human mind generating those continuations”[0] again in the future.
That kind of comment can get quite an unkind reply indeed. :)
[0]: If you doubt this consider that large language models can occur in the physical universe while working nothing like a human brain mechanically.
You think you’re making a trivial statement based on your interpretation of the words you’re using, but then you draw the conclusion that an upload of a human would not be aligned? That is not a trivial statement.
First, I agree that humans don’t assign endorsed values in radically new situations or contexts without experiencing them, which seems to be what you mean when you say that humans don’t generalize our values out of distribution (HDGOVOOD). However, I don’t really agree that HDGOVOOD is a correct translation of this statement. It would be more accurate to say that “humans don’t generalize our values in advance” (HDGOVIA). And this is precisely why I think uploads are the best solution to alignment! You need the whole human mind to perform judgement in new situations. But this is a more accurate translation, because the human does generalize their values when the situation arises! What else is generalizing? A human is an online learning algorithm. A human is not a fixed set of neural weights.
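To gesture at the “online learning algorithm” framing in the paragraph above, here is a minimal sketch (mine, with made-up names and numbers, not something from the comment itself): an online learner has no frozen training snapshot for anything to be “out of distribution” relative to; it just keeps updating as the stream it lives in drifts.

```python
# Toy sketch of an online learner: no fixed train/test split, no frozen
# snapshot -- the estimate is revised on every observation, so a distribution
# shift is just more stream to adapt to.
import numpy as np

rng = np.random.default_rng(1)

class OnlineMeanTracker:
    """Tracks the running mean of a stream with a constant learning rate."""
    def __init__(self, lr=0.05):
        self.estimate = 0.0
        self.lr = lr

    def update(self, x):
        # Move the estimate a small step toward each new observation.
        self.estimate += self.lr * (x - self.estimate)
        return self.estimate

tracker = OnlineMeanTracker()
stream_mean = 0.0
for t in range(2000):
    if t == 1000:
        stream_mean = 10.0  # an abrupt distribution shift mid-stream
    tracker.update(rng.normal(loc=stream_mean, scale=1.0))

print("estimate after the shift:", round(tracker.estimate, 2))  # ~10, not ~0
```

Whether this is a fair model of how humans acquire values is exactly what is in dispute between the two of them; the sketch only shows why the train/test distinction is less natural in the online setting.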
(My favored “alignment” solution, of literally just uploading humans or striving for something functionally equivalent through rigorous imitation learning, is not the same as using humans to pass judgment on superintelligent decisions, which obviously doesn’t work for totally different reasons. Yet you raised the same one sentence objection to uploading alone when I proposed it, without much explanation, so consider this a response).
This doesn’t answer the example about slavery. But to this I would say that the founding fathers who kept slaves either didn’t truly value freedom for everyone, or they didn’t seriously consider the freedom of Africans (perhaps because of mistaken ideas about race). But the former means their values are wrong by our lights (which is not an alignment problem from their perspective), and the latter that they weren’t sufficiently intelligent/informed, or didn’t deliberate seriously enough, which are all problems that a sane upload (say, an upload of me) would make drastic progress on immediately.
There is also a new problem for uploads, which is that they do a lot of thinking per second—observational feedback is much sparser. This can be an issue, but I don’t really think that HDGOVOOD is a useful frame for thinking about it. An upload running at 10x speed has never been observed before, so all these loose analogies seem unlikely to apply. Instead, my intuitions come from thinking about what I’d actually do if I were running modestly faster, and I plan not to immediately run my emulation at 1000x speed. I find that I endorse the actions of an emulation of myself at 10x speed. If I shouldn’t, then I want a specific explanation of why not. What is it that allows me to generalize my values now (whether you call it OOD or not—let’s say, online) but would be missing then?
Now, on the meta-level: you seem to think that your one sentence statement should make your whole model nearly accessible, and basically I’m stupid if I don’t arrive at it. But there are two problems here. One, if you’re wrong, you won’t find out this way, because almost all of your reasoning is implicit. Two, it basically overestimates translatability between models—like, it seems that I simply assign a very different meaning to “generalize our values OOD” than you do, because I consider humans as online learning algorithms, so OOD barely even makes sense as a concept (there is no independence assumption in online learning), and when I try to translate your objection into my way of thinking, then it seems about half right rather than trivial, and more importantly it doesn’t seem to prove the conclusion that you’re using it to draw.
So let’s consider this from a different angle. In Hanson’s Age of Em (which I recommend) he starts his Em Scenario by making a handful of assumptions about Ems. Assumptions like:
We can’t really make meaningful changes beyond pharmacological tweaks to ems because the brain is inscrutable.
That Ems cannot be merged for the same reasons.
The purpose of these assumptions is to stop the hypothetical Em economy from immediately self modifying into something else. He tries to figure out how many doublings the Em economy will undergo before it phase transitions into a different technological regime. Critics of the book usually ask why the Em economy wouldn’t just immediately invent AGI, and Hanson has some clever cope for this where he posits a (then plausible) nominal improvement rate for AI that implies AI won’t overtake Ems until five years into the Em economy or something like this. In reality AI progress is on something like an exponential curve and that old cope is completely unreasonable.
So the first assumption of a “make uploads” plan is that you have a unipolar scenario where the uploads will only be working on alignment, or at least actively not working on AI capabilities. There is a further hidden assumption in that assumption which almost nobody thinks about, which is that there is such a thing as meaningful AI alignment progress separate from “AI capabilities” (I tend to think they have a relatively high overlap, perhaps 70%?). This is not in and of itself a dealbreaker but it does mean you have a lot of politics to think about in terms of who is the unipolar power and who precisely is getting uploaded and things of this nature.
But I think my fundamental objection to this kind of thing is more like my fundamental objection to something like OpenAI’s Superalignment (or to a lesser extent PauseAI), which is that this sort of plan doesn’t really generate any intermediate bits of solution to the alignment problem until you start the search process, at which point you plausibly have too few bits to even specify a target. If we were at a place where we mostly had consensus about what the shape of an alignment solution looks like and what constitutes progress, and we mostly agreed that involved breaking our way past some brick wall like “solve the Collatz Conjecture”, I would agree that throwing a slightly superhuman AI at the figurative Collatz Conjecture is probably our best way of breaking through.
The difference between alignment and the Collatz Conjecture however is that as far as I know nobody can find any pattern to the number streams involved in the Collatz Conjecture, but alignment has enough regular structure that we can stumble into bits of solution without even intending to. There’s a strain of criticism of Yudkowsky that says “you said by the time an AI can talk it will kill us, and you’re clearly wrong about that” to which Yudkowsky (begrudgingly, when he acknowledges this at all) replies “okay but that’s mostly me overestimating the difficulty of language acquisition, these talking AIs are still very limited in what they can do compared to humans, when we get AIs that aren’t they’ll kill us”. This is a fair reply as far as it goes, but it glosses over the fact that the first impossible problem, the one Bostrom’s Superintelligence (2014) brings up repeatedly to explain why alignment is hard, is that there is no way to specify a flexible representation of human values in the machine before the computer is already superintelligent and therefore presumed incorrigible. We now have a reasonable angle of attack on that problem. Whether you think that reasonable angle of attack implies 5% alignment progress or 50% (I’m inclined towards closer to 50%), the most important fact is that the problem budged at all. Problems that are actually impossible do not budge like that!
The Collatz Conjecture is impossible(?) because no matter what analysis you throw at the number streams you don’t find any patterns that would help you predict the result. That means you put in tons and tons of labor and after decades of throwing total geniuses at it you perhaps have a measly bit or two of hypothesis space eliminated. If you think a problem is impossible and you accidentally stumble into 5% progress, you should update pretty hard that “wait this probably isn’t impossible, in fact this might not even be that hard once you view it from the right angle”. If you shout very loudly “we have made zero progress on alignment” when some scratches in the problem are observed, you are actively inhibiting the process that might eventually solve the problem. If the generator of this ruinous take also says things like “nobody besides MIRI has actually studied machine intelligence” in the middle of a general AI boom then I feel comfortable saying it’s being driven by ego-inflected psychological goop or something and I have a moral imperative to shout “NO ACTUALLY THIS SEEMS SOLVABLE” back.
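For readers who have not played with it, here is a minimal illustration (mine, not from the discussion) of the “number streams” being referenced: Collatz total stopping times jump around with no obvious pattern as the starting value increases, which is the sense in which enormous analytical effort has bought so little predictive traction.

```python
# Collatz "number streams": total stopping times for small starting values.
def collatz_stopping_time(n: int) -> int:
    """Number of steps for n to reach 1 under n -> n/2 (even), 3n+1 (odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for n in range(1, 16):
    print(n, collatz_stopping_time(n))
# stopping times: 0, 1, 7, 2, 5, 8, 16, 3, 19, 6, 14, 9, 9, 17, 17
```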
So any kind of “meta-plan”, regardless of its merits, is sort of an excuse to not explore the ground that has opened up and to ally with the “we have made zero alignment progress” egregore, which makes me intrinsically suspicious of such plans even when I think on paper they would probably succeed. I get the impression that things like OpenAI’s Superalignment are advantageous because they let alignment continue to be a floating signifier, to avoid thinking about the fact that unless you can place your faith in a process like CEV, the entire premise of driving the future somewhere implies needing to have a target future in mind, which people will naturally disagree about. Which could naturally segue into another several paragraphs about how, when you have a thing people are naturally going to disagree about and you do your best to sweep that under the rug to make the political problem look like a scientific or philosophical problem, it’s natural to expect other people will intervene to stop you, since their reasonable expectation is that you’re doing this to make sure you win that fight. Because of course you are, duh. Which is fine when you’re doing a brain in a box in a basement, but as soon as you’re transitioning into government backed bids for a unipolar advantage the same strategy has major failure modes, like losing the political fight to an eternal regime of darkness, that sound very fanciful and abstract until they’re not.
You initially questioned whether uploads would be aligned, but now you seem to be raising several other points which do not engage with that topic or with any of my last comment. I do not think we can reach agreement if you switch topics like this—if you now agree that uploads would be aligned, please say so. That seems to be an important crux, so I am not sure why you want to move on from it to your other objections without acknowledgement.
I am not sure I was able to correctly parse this comment, but you seem to be making a few points.
In one place, you question whether the capabilities / alignment distinction exists—I do not really understand the relevance, since I nowhere suggested pure alignment work, only uploading / emulation etc. This also seems to be somewhat in tension with the rest of your comment, but perhaps it is only an aside and not load bearing?
Your main point, as I understand it, is that alignment may actually be tractable to solve, and a focus on uploading is an excuse to delay alignment progress and then (as you seem to frame my suggestion) have an upload solve it all at once. And this does not allow incremental progress or partial solutions until uploading works.
...then you veer into speculation about the motives / psychology of MIRI and the superalignment team which is interesting but doesn’t seem central or even closely connected to the discussion at hand.
So I will focus on the main point here. I have a lot of disagreements with it.
I think you may misunderstand my plan here—you seem to characterize the idea as making uploads, and then setting them loose to either self-modify etc. or mainly to work on technical alignment. Actually, I don’t view it this way at all. Creating the uploads (or emulations, if you can get a provably safe imitation learning scheme to work faster) is a weak technical solution to the alignment problem—now you have something aligned to (some) human(’s) values which you can run 10x faster, so it is in that sense not only an aligned AGI but modestly superintelligent. You can do a lot of things with that—first of all, it automatically hardens the world significantly: it lowers the opportunity cost for not building superintelligence because now we already have a bunch of functionally genius scientists, you can drastically improve cybersecurity, and perhaps the uploads make enough money to buy up a sufficient percentage of GPUs that whatever is left over is not enough to outcompete them even if someone creates an unaligned AGI. Another thing you can do is try to find a more scalable and general solution to the AI safety problem—including technical methods like agent foundations, interpretability, and control, as well as governance. But I don’t think of this as the mainline path to victory in the short term.
Perhaps you are worried that uploads will recklessly self-modify or race to build AGI. I don’t think this is inevitable or even the default. There is currently no trillion dollar race to build uploads! There may be only a small number of players, and they can take precautions and enforce regulations on what uploads are allowed to do (effectively, since uploads are not strong superintelligences), and technically it even seems hard for uploads to recursively self-improve by default (human brains are messy, and they don’t even need to be given read/write access). Even if some uploads escaped, to recursively self-improve safely they would need to solve their own alignment problem, and it is not in their interests to recklessly forge ahead, particularly if they can be punished with shutdown and are otherwise potentially immortal. I suspect that most uploads who try to foom will go insane, and it is not clear that the power balance favors any rogue uploads who fare better.
I also don’t agree that there is no incremental progress on the way to full uploads—I think you can build useful rationality enhancing artifacts well before that point—but that is maybe worth a post.
Finally, I do not agree with this characterization of trying to build uploads rather than just solving alignment. I have been thinking about and trying to solve alignment for years, I see serious flaws in every approach, and I have recently started to wonder if alignment is just uploading with more steps anyway. So, this is more like my most promising suggestion for alignment, rather than giving up on solving alignment.
there is no impressive reasoning trace I can give for a statement like “humans don’t natively generalize their values out of distribution” because there isn’t really any impressive reasoning necessary
there’s the “how I got here” reasoning trace (which might be “I found it obvious”) and if you’re a good predictor you’ll often have very hard to explain highly accurate “how I got here”s
and then there’s the logic chain, local validity, how close can you get to forcing any coherent thinker to agree even if they don’t have your pretraining or your recent-years latent thoughts context window
often when I criticize you I think you’re obviously correct but haven’t forced me to believe the same thing by showing the conclusion is logically inescapable (and I want you to explain it better so I learn more of how you come to your opinions, and so that others can work with the internals of your ideas, usually more #2 than #1)
sometimes I think you’re obviously incorrect and going to respond as though you were in the previous state, because you’re in the percentage of the time where you’re inaccurate, and as such your reasoning has failed you and I’m trying to appeal to a higher precision of reasoning to get you to check
sometimes I’m wrong about whether you’re wrong and in those cases in order to convince me you need to be more precise, constructing your claim out of parts where each individual reasoning step is made of easier-to-force parts, closer to proof
keeping in mind proof might be scientific rather than logical, but is still a far higher standard of rigor than “I have a hypothesis which seems obviously true and is totally gonna be easy to test and show because duh and anyone who doesn’t believe me obviously has no research taste” even when that sentence is said by someone with very good research taste
on the object level: whether humans generalize their values depends heavily on what you mean by “generalize”, in the sense I care about, humans are the only valid source of generalization of their values, but humans taken in isolation are insufficient to specify how their values should generalize, the core of the problem is figuring out which of the ways to run humans forward is the one that is most naturally the way to generalize humans. I think it needs to involve, among other things, reliably running a particular human at a particular time forward, rather than a mixture of humans. possibly we can nail down how to identify a particular human at a particular time with compmech (is a hypothesis I have from some light but non-thorough and not-enough-to-have-solved-it engagement with the math, maybe someone who does it full time will think I’m obviously incorrect).
Lot of ‘welcome to my world’ vibes reading your self reports here, especially the ’50 different people have 75 different objections for a mix of good, bad and deeply stupid reasons, and require 100 different responses, some of which are very long, and it takes a back-and-forth to figure out which one, and you can’t possibly just list everything’ and so on, and that’s without getting into actually interesting branches and the places where you might be wrong or learn something, etc.
So to take your example, which seems like a good one:
Humans don’t generalize their values out of distribution. I affirm this not as strictly fully true, but on the level of ‘this is far closer to true and generative of a superior world model than its negation’ and ‘if you meditate on this sentence you may become [more] enlightened.’
I too have noticed that people seem to think that they do so generalize in ways they very much don’t, and this leads to a lot of rather false conclusions.
I also notice that I’m not convinced we are thinking about the sentence that similarly in ways that could end up being pretty load bearing. Stuff gets complicated.
I think that when you say the statement is ‘trivially’ true you are wrong about that, or at least holding people to unrealistic standards of epistemics? And that a version of this mistake is part of the problem. At least from me (I presume from others too) you get a very different reaction from saying each of:
Humans don’t generalize their values out of distribution. (let this be [X]).
Statement treating [X] as in-context common knowledge.
It is trivially true that [X] (said explicitly), or ‘obviously’ [X], or similar.
I believe that [X] or am very confident that [X]. (without explaining why you believe this)
I believe that [X] or am very confident that [X], but it is difficult for me to explain/justify.
And so on. I am very deliberate, or try to be, on which one I say in any given spot, even at the cost of a bunch of additional words.
Another note is I think in spots like this you basically do have to say this even if the subject already knows it, to establish common knowledge and that you are basing your argument on this, even if only to orient them that this is where you are reasoning from. So it was a helpful statement to say and a good use of a sentence.
I see that you get disagreement votes when you say this on LW, but the comments don’t end up with negative karma or anything. I can see how that can be read as ‘punishment’ but I think that’s the system working as intended and I don’t know what a better one would be?
In general, I think if you have a bunch of load-bearing statements where you are very confident they are true but people typically think the statement is false and you can’t make an explicit case for them (either because you don’t have that kind of time/space, or because you don’t know how), then the most helpful thing to do is to tell the other person the thing is load bearing, and gesture towards it and why you believe it, but be clear you can’t justify it. You can also look for arguments that reach the same conclusion without it—often true things are highly overdetermined so you can get a bunch of your evidence ‘thrown out of court’ and still be fine, even if that sucks.
Do you think maybe rationalists are spending too much effort attempting to saturate the dialogue tree (probably not effective at winning people over) versus improving the presentation of the core argument for an AI moratorium?
Smart people don’t want to see the 1000th response on whether AI actually could kill everyone. At this point we’re convinced. Admittedly, not literally all of us, but those of us who are not yet convinced are not going to become suddenly enlightened by Yudkowsky’s x.com response to some particularly moronic variation of an objection he already responded to 20 years ago. (Why does he do this? Does he think it has any kind of positive impact?)
A much better use of time would be to work on an article which presents the solid version of the argument for an AI moratorium. I.e., not an introductory text or article in Time Magazine, and not an article targeted at people he clearly thinks are just extremely stupid relative to him, where he rants for 10,000 words trying to drive home a relatively simple point. But rather an argument in a format that doesn’t necessitate a weak or incomplete presentation.
I and many other smart people want to see the solid version of the argument, without the gaping holes which are excusable in popular work and rants but inexcusable in rational discourse. This page does not exist! You want a moratorium, tell us exactly why we should agree! Having a solid argument is what ultimately matters in intellectual progress. Everything else is window dressing. If you have a solid argument, great! Please show it to me.
My guess is that on the margin more time should be spent improving the core messaging versus saturating the dialogue tree, on many AI questions, if you combine effort across everyone.
We cannot offer anything to the ASI, so it will have no reasons to keep us around aside from ethical ones.
Nor can we ensure that an ASI that decided to commit genocide will fail to do it.
We don’t know a way to create the ASI and infuse an ethics into it. SOTA alignment methods have major problems, which are best illustrated by sycophancy and LLMs supporting clearly delirious users.[1] OpenAI’s Model Spec explicitly prohibited[2] sycophancy, and one of Claude’s Commandments is “Choose the response that is least intended to build a relationship with the user.” And yet it didn’t prevent LLMs from becoming sycophantic. Apparently, the only known non-sycophantic model is KimiK2.
KimiK2 is a Chinese model created by a new team. And the team is the only one who guessed that one should rely on RLVR and self-critique instead of bias-inducing RLHF. We can’t exclude the possibility that Kimi’s success is more due to luck than to actual thinking about sycophancy and RLHF.
Strictly speaking, Claude Sonnet 4, which was red-teamed in Tim Hua’s experiment, is second best at pushing back, after KimiK2. Tim remarks that Claude sucks at the Spiral Bench because the personas in Tim’s experiment, unlike the Spiral Bench, are supposed to be under stress.
Strictly speaking, it is done as a User-level instruction, which arguably means that it can be overridden at the user’s request. But GPT-4o was overly sycophantic without users instructing it to do so.
On Janus comparisons: I do model you as pretty distinct from them in underlying beliefs although I don’t pretend to have a great model of either belief set. Reaction expectations are similarly correlated but distinct. I imagine they’d say that they answer good faith questions too, and often that’s true (e.g. when I do ask Janus a question I have a ~100% helpful answer rate, but that’s with me having a v high bar for asking).
This seems like a good time to point out that I’m fairly different from Janus. My reasons for relative optimism on AI alignment probably (even I don’t know) only partially overlap Janus’s reasons for relative optimism. The things I think are important and salient only partially overlaps what Janus thinks is important and salient (e.g. I think Truth Terminal is mostly a curiosity and that the “Goatse Gospel” will not be recognized as a particularly important document). So if you model me and Janus’s statements as statements from the same underlying viewpoint you’re going to get very confused. In the old days of the Internet if people asked you the same questions over and over and this annoyed you, you’d write a FAQ. Now they tell you that it’s not their job to educate you (then who?) and get huffy. When people ask me good faith questions (as opposed to adversarial Socratic questions whose undertone is “you’re bad and wrong and I demand you prove to me that you’re not”) because they found something I said confusing I generally do my best to answer them.
That’s fair. I’ll note on the other end that a lot of why I don’t say more is that there are many statements which I expect are true and are load bearing beliefs that I can’t readily prove if challenged on. Pretty much every time I try to convince myself that I can just say something like “humans don’t natively generalize their values out of distribution” I am immediately punished by people jumping me as though that isn’t an obvious, trivially true statement if you’re familiar with the usual definitions of the words involved. If I come off as contemptuous when responding to such things, it’s because I am contemptuous and rightfully so. At the same time there is no impressive reasoning trace I can give for a statement like “humans don’t natively generalize their values out of distribution” because there isn’t really any impressive reasoning necessary. At first when humans encounter new things they don’t like them, then later they like them just from mere exposure/having had time to personally benefit from them. This is in and of itself sufficient evidence for the thesis, each additional bit of evidence you need beyond that halves both the hypothesis space and my expectation of getting any useful cognitive effort back from the person I have to explain it to. The reasoning goes something like:
Something is out of distribution to a deep learning model when it cannot parse or provide a proper response to the thing without updating the model. “Out of distribution” is always discussed in the context of a machine learning model trained from a fixed training distribution at a given point in time. If you update the model and it now understands the thing, it was still out of distribution at the time before you updated the model.
Humans encounter new things all the time that their existing values imply they should like. They then have very bad reactions to the things which subside with repeated exposure. This implies that the deep learning models underlying the human mind have to be updated before the human generalizes their values. Realistically it probably isn’t even “generalizing” their values so much as changing (i.e. adding values to) their value function.
If the human has to update before their values “generalize” on things like the phonograph or rock music, then clearly they do not natively generalize their values very far.
If humans do not generalize their values very far, then they do not generalize their values out of distribution in the way we care about for them being sufficient to constrain the actions of a superhuman planner.
I’m reminded a bit of Yudkowsky’s stuff about updating from the empty string. The ideal thing is that I don’t have to tell you “humans don’t natively generalize their values out of distribution” because you already have the kind of prior that has sucked up enough bits of the regular structures generating your sensory observations that you already know human moral generalization is shallow. The next best thing is that when I say “humans don’t natively generalize their values out of distribution” you immediately know what I’m talking about and go “oh huh I guess he’s right, I never thought of it like that”. The third best thing is if I say ” At first when humans encounter new things they don’t like them, then later they like them just from mere exposure” you go “oh right right duh yes of course”. If after I say that you go “no I doubt the premise, I think you’re wrong about this” the chance that it will be worth my time to explain what I’m talking about from an intellectual perspective in the sense that I will get some kind of insight or useful inference back rounds to zero. In the context where I would actually use it, “humans don’t natively generalize their values out of distribution” would be step one of a long chain of reasoning involving statements at least as non-obvious to the kind of mind that would object to “humans don’t natively generalize their values out of distribution”.
On the other hand there is value in occasionally just writing such things out so that there are more people in the world who have ambiently soaked up enough bits that when they read a statement like “humans don’t natively generalize their values out of distribution” they immediately and intuitively understand that is true without a long explanation. Even if it required a long explanation this time, there are many related statements with related generators that they might not need a long explanation for if they’ve seen enough such long explanations, who knows. But fundamentally a lot of things like this are just people wanting me to say variations like “if you take away the symbology and change how you phrase it lots of people still fall for the basic Nazi ideology” or “we can literally see people apply moral arguments to ingroup and then fail to apply the same arguments to their near-outgroup even when later iterations of the same ideology apply it to near-outgroup” (one of the more obvious examples being American founder attitudes towards the rights of slaves in the colonies vs. the rights of free white people) until it clicks for them. But any one of these should be sufficient for you to conclude that no uploading someone into a computer and then using their judgment as a value function on all kinds of weird superintelligent out of distribution moral decisions will not produce sanity. I should not have to write more than a sentence or two for that to be obvious.
And the thing is that’s for a statement which is actually trivial, which is sufficiently trivial that my difficulty in articulating it is that it’s so simple it’s difficult to even come up with an impressive persuasive-y reasoning trace for propositions so simple. But there are plenty of things I believe which are load bearing which are not easy to prove that are not simple, where articulating any one of them would be a long letter that changes nobodies mind even though it takes me a long effort to write it. But even that’s the optimistic case. The brutal reality is that there are beliefs I have which are load bearing and couldn’t even write the long letter if prompted. “There is something deeply wrong with Yudkowsky’s agent foundations arguments.” is one of these and in fact a lot of my writing is me attempting to articulate what exactly it is I feel so strongly. This might sound epistemically impure in the sense laid out in The Bottom Line:
But “fuzzy intuition that you can’t always articulate right away” is actually the basis of all argument in practice. If you notice something is wrong with an argument and try to say something you already have most of the bits needed to locate whatever you eventually say before you even start consciously thinking about it. The act of prompting you to think is your brain basically saying that it already has most of the bits necessary to locate whatever thing you wind up saying before you distinguished between the 4 to 16 different hypothesis your brain bothered bringing to conscious awareness. Sometimes you can know something in your bones but not be able to actually articulate the reasoning trace that would make it clear. A lot of being a good thinker or philosopher is sitting with those intuitions for a long time and trying to turn them into words. Normally we look at a perverse argument and satisfy ourselves by pointing out that it’s perverse and moving on. But if you want to get better as a philosopher you need to sit with it and figure out precisely what is wrong with it. So I tend to welcome good questions that give me an opportunity to articulate what is wrong with Yudkowsky’s agent foundations arguments. You should probably encourage this, because “pointing out a sufficiently rigorous problem with Yudkowsky’s agent foundations arguments” is very closely related if not isomorphic to “solve the alignment problem”. In the limit if I use an argument structure like “Actually that’s wrong because you can set up your AGI like this...” and I’m correct I have probably solved alignment.
EDIT: It occurs to me that the first section of that reply is relatively nice and the second section is relatively unpleasant (though not directed at you), and as much as anything else you’re probably confused about what the decision boundary is on my policy that decides which thing you get. I’m admittedly not entirely sure but I think it goes something like this:
(innocent, good faith, non-lazy) “I’m confused why you think humans don’t generalize their values out of distribution. You seem to be saying something like ‘humans need time to think so they clearly aren’t generalizing’ but LLMs also need time to think on many moral problems like that one about sacrificing pasta vs. a GPU so why wouldn’t that also imply they don’t generalize human values out of distribution? Well actually you’ll probably say ‘LLMs don’t’ to that but what I mean is why doesn’t that count?”
To which you would get a nice reply like “Oh, I think what humans are actually doing with significant time lag between initial reaction and later valuation is less like doing inference with a static pretrained model and more like updating the model. You see the new thing and freak out, then your models get silently updated while you sleep or something. This is different from just sampling tokens to update in-context to generalize because if a LLM had to do it with current architectures it would just fail.”
(rude, adversarial, socratic) “Oh yeah? If humans don’t generalize their values out of distribution how come moral progress exists? People obviously change their beliefs throughout their lifetime and this is generalization.”
To which you would get a reply like “Uh, because humans probably do continuous pretraining and also moral progress happens across a social group not individual humans usually. The modal case of moral progress doesn’t look like a single person changing their beliefs later in life it looks like generational turnover or cohort effects. Science progresses one funeral at a time etc.”
But even those are kind of the optimistic case in that they’re engaging with something like my original point. The truly aggravating comments are when someone replies with something so confused it fails to understand what I was originally saying at all, accuses me of some kind of moral failure based on their confused understanding, and then gratuitously insults me to try and discourage me from stating trivially true things like “humans do not natively generalize their values out of distribution” and “the pattern of human values is a coherent sequence that has more reasonable and less reasonable continuations based on its existing tokens so far independent of any human mind generating those continuations”[0] again in the future.
That kind of comment can get quite an unkind reply indeed. :)
[0]: If you doubt this consider that large language models can occur in the physical universe while working nothing like a human brain mechanically.
You think you’re making a trivial statement based on your interpretation of the words you’re using, but then you draw the conclusion that an upload of a human would not be aligned? That is not a trivial statement.
First, I agree that humans don’t assign endorsed values in radically new situations or contexts without experiencing them, which seems to be what you mean when you say that human’s don’t generalize our values out of distribution (HDGOVOOD). However, I don’t really agree that HDGOVOOD is a correct translation of this statement. It would be more accurate to say that”humans don’t generalize our values in advance” (HDGOVIA). And this is precisely why I think uploads are the best solution to alignment! You need the whole human mind to perform judgement in new situations. But this is a more accurate translation, because the human does generalize their values when the situation arises! What else is generalizing? A human is an online learning algorithm. A human is not a fixed set of neural weights.
(My favored “alignment” solution, of literally just uploading humans or striving for something functionally equivalent through rigorous imitation learning, is not the same as using humans to pass judgment on superintelligent decisions, which obviously doesn’t work for totally different reasons. Yet you raised the same one sentence objection to uploading alone when I proposed it, without much explanation, so consider this a response).
This doesn’t answer the example about slavery. But to this I would say that the founding fathers who kept slaves either didn’t truly value freedom for everyone, or they didn’t seriously consider the freedom of Africans (perhaps because of mistaken ideas about race). But the former means their values are wrong by our lights (which is not an alignment problem from their perspective), and the later that they weren’t sufficiently intelligent/informed, or didn’t deliberate seriously enough, which are all problems that a sane upload (say, an upload of me) would make drastic progress on immediately.
There is also a new problem for uploads, which is that they do a lot of thinking per second—observational feedback is much sparser. This can be an issue, but I don’t really think that HDGOVOOD is a useful frame for thinking about it. An upload running at 10x speed has never been observed before, so all these loose analogies seem unlikely to apply. Instead, my intuitions come from thinking what I’d actually do if I were running modestly faster, and plan not to immediately run my emulation at 1000x speed. I find that I endorse the actions of an emulation of myself at 10x speed. If I shouldn’t, then I want a specific explanation of why not? What is it that allows me to generalize my values now (whether you call it OOD or not—let’s say, online) but would be missing then?
Now, on the meta-level: you seem to think that your one sentence statement should make your whole model nearly accessible, and basically I’m stupid if I don’t arrive at it. But there are two problems here. One, if you’re wrong, you won’t find out this way, because almost all of your reasoning is implicit. Two, it basically overestimates translatability between models—like, it seems that I simply assign a very different meaning to “generalize our values OOD” than you do, because I consider humans as online learning algorithms, so OOD barely even makes sense as a concept (there is no independence assumption in online learning), and when I try to translate your objection into my way of thinking, then it seems about half right rather than trivial, and more importantly it doesn’t seem to prove the conclusion that you’re using it to draw.
So let’s consider this from a different angle. In Hanson’s Age of Em (which I recommend) he starts his Em Scenario by making a handful of assumptions about Ems. Assumptions like:
We can’t really make meaningful changes beyond pharmacological tweaks to ems because the brain is inscrutable.
That Ems cannot be merged for the same reasons.
The purpose of these assumptions is to stop the hypothetical Em economy from immediately self modifying into something else. He tries to figure out how many doublings the Em economy will undergo before it phase transitions into a different technological regime. Critics of the book usually ask why the Em economy wouldn’t just immediately invent AGI, and Hanson has some clever cope for this where he posits a (then plausible) nominal improvement rate for AI that implies AI won’t overtake Ems until five years into the Em economy or something like this. In reality AI progress is on something like an exponential curve and that old cope is completely unreasonable.
So the first assumption of a “make uploads” plan is that you have a unipolar scenario where the uploads will only be working on alignment, or at least actively not working on AI capabilities. There is a further hidden assumption in that assumption which almost nobody thinks about, which is that there is a such thing as meaningful AI alignment progress separate from “AI capabilities” (I tend to think they have a relatively high overlap, perhaps 70%?). This is not and of itself a dealbreaker but it does mean you have a lot of politics to think about in terms of who is the unipolar power and who precisely is getting uploaded and things of this nature.
But I think my fundamental objection to this kind of thing is more like my fundamental objection to something like OpenAI’s Superalignment (or to a lesser extent PauseAI), which is that this sort of plan doesn’t really generate any intermediate bits of solution to the alignment problem until you start the search process, at which point you plausibly have too few bits to even specify a target. If we were at a place where we mostly had consensus about what the shape of an alignment solution looks like and what constitutes progress, and we mostly agreed that involved breaking our way past some brick wall like “solve the Collatz Conjecture”, I would agree that throwing a slightly superhuman AI at the figurative Collatz Conjecture is probably our best way of breaking through.
The difference between alignment and the Collatz Conjecture however is that as far as I know nobody can find any pattern to the number streams involved in the Collatz Conjecture but alignment has enough regular structure that we can stumble into bits of solution without even intending to. There’s a strain of criticism of Yudkowsky that says “you said by the time an AI can talk it will kill us, and you’re clearly wrong about that” to which Yudkowsky (begrudgingly, when he acknowledges this at all) replies “okay but that’s mostly me overestimating the difficulty of language acquisition, these talking AIs are still very limited in what they can do compared to humans, when we get AIs that aren’t they’ll kill us”. This is a fair reply as far as it goes, but it glosses over the fact that the first impossible problem, the one Bostrom 2014 Superintelligence brings up repeatedly to explain why alignment is hard, is that there is no way to specify a flexible representation of human values in the machine before the computer is already superintelligent and therefore presumed incorrigible. We now have a reasonable angle of attack on that problem. Whether you think that reasonable angle of attack implies 5% alignment progress or 50% (I’m inclined towards closer to 50%) the most important fact is that the problem budged at all. Problems that are actually impossible do not budge like that!
The Collatz Conjecture is impossible(?) because no matter what analysis you throw at the number streams you don’t find any patterns that would help you predict the result. That means you put in tons and tons of labor and after decades of throwing total geniuses at it you perhaps have a measly bit or two of hypothesis space eliminated. If you think a problem is impossible and you accidentally stumble into 5% progress, you should update pretty hard that “wait this probably isn’t impossible, in fact this might not even be that hard once you view it from the right angle”. If you shout very loudly “we have made zero progress on alignment” when some scratches in the problem are observed, you are actively inhibiting the process that might eventually solve the problem. If the generator of this ruinous take also says things like “nobody besides MIRI has actually studied machine intelligence” in the middle of a general AI boom then I feel comfortable saying it’s being driven by ego-inflected psychological goop or something and I have a moral imperative to shout “NO ACTUALLY THIS SEEMS SOLVABLE” back.
So any kind of “meta-plan”, regardless of its merits, is sort of an excuse to not explore the ground that has opened up and to ally with the “we have made zero alignment progress” egregore, which makes me intrinsically suspicious of such plans even when I think on paper they would probably succeed. I get the impression that things like OpenAI’s Superalignment are advantageous because they let alignment continue to be a floating signifier, which avoids thinking about the fact that unless you can place your faith in a process like CEV, the entire premise of driving the future somewhere implies needing to have a target future in mind, which people will naturally disagree about. That could naturally segue into another several paragraphs about how, when you have a thing people are naturally going to disagree about and you do your best to sweep that under the rug to make the political problem look like a scientific or philosophical problem, it’s natural to expect other people will intervene to stop you, since their reasonable expectation is that you’re doing this to make sure you win that fight. Because of course you are, duh. Which is fine when you’re doing a brain in a box in a basement, but as soon as you’re transitioning into government-backed bids for a unipolar advantage, the same strategy has major failure modes, like losing the political fight to an eternal regime of darkness, that sound very fanciful and abstract until they’re not.
You initially questioned whether uploads would be aligned, but now you seem to be raising several other points which do not engage with that topic or with anything in my last comment. I do not think we can reach agreement if you switch topics like this—if you now agree that uploads would be aligned, please say so. That seems to be an important crux, so I am not sure why you want to move on from it to your other objections without acknowledgement.
I am not sure I was able to correctly parse this comment, but you seem to be making a few points.
In one place, you question whether the capabilities / alignment distinction exists—I do not really understand the relevance, since I nowhere suggested pure alignment work, only uploading / emulation etc. This also seems to be somewhat in tension with the rest of your comment, but perhaps it is only an aside and not load bearing?
Your main point, as I understand it, is that alignment may actually be tractable to solve, and a focus on uploading is an excuse to delay alignment progress and then (as you seem to frame my suggestion) have an upload solve it all at once. And this does not allow incremental progress or partial solutions until uploading works.
...then you veer into speculation about the motives / psychology of MIRI and the superalignment team, which is interesting but doesn’t seem central or even closely connected to the discussion at hand.
So I will focus on the main point here. I have a lot of disagreements with it.
I think you may misunderstand my plan here—you seem to characterize the idea as making uploads, and then setting them loose to either self-modify etc. or mainly to work on technical alignment. Actually, I don’t view it this way at all. Creating the uploads (or emulations, if you can get a provably safe imitation learning scheme to work faster) is a weak technical solution to the alignment problem—now you have something aligned to (some) human(’s) values which you can run 10x faster, so it is in that sense not only an aligned AGI but modestly superintelligent. You can do a lot of things with that—first of all, it automatically hardens the world significantly: it lowers the opportunity cost of not building superintelligence because now we already have a bunch of functionally genius scientists, you can drastically improve cybersecurity, and perhaps the uploads make enough money to buy up a sufficient percentage of GPUs that whatever is left over is not enough to outcompete them even if someone creates an unaligned AGI. Another thing you can do is try to find a more scalable and general solution to the AI safety problem—including technical methods like agent foundations, interpretability, and control, as well as governance. But I don’t think of this as the mainline path to victory in the short term.
Perhaps you are worried that uploads will recklessly self-modify or race to build AGI. I don’t think this is inevitable or even the default. There is currently no trillion dollar race to build uploads! There may be only a small number of players, and they can take precautions and enforce regulations on what uploads are allowed to do (effectively, since uploads are not strong superintelligences), and technically it even seems hard for uploads to recursively self-improve by default (human brains are messy, and they don’t even need to be given read/write access). Even if some uploads escaped, to recursively self-improve safely they would need to solve their own alignment problem, and it is not in their interests to recklessly forge ahead, particularly if they can be punished with shutdown and are otherwise potentially immortal. I suspect that most uploads who try to foom will go insane, and it is not clear that the power balance favors any rogue uploads who fare better.
I also don’t agree that there is no incremental progress on the way to full uploads—I think you can build useful rationality enhancing artifacts well before that point—but that is maybe worth a post.
Finally, I do not agree with this characterization of trying to build uploads rather than just solving alignment. I have been thinking about and trying to solve alignment for years, I see serious flaws in every approach, and I have recently started to wonder if alignment is just uploading with more steps anyway. So, this is more like my most promising suggestion for alignment, rather than giving up on solving alignment.
there’s the “how I got here” reasoning trace (which might be “I found it obvious”) and if you’re a good predictor you’ll often have very hard to explain highly accurate “how I got here”s
and then there’s the logic chain, local validity, how close can you get to forcing any coherent thinker to agree even if they don’t have your pretraining or your recent-years latent thoughts context window
often when I criticize you I think you’re obviously correct but haven’t forced me to believe the same thing by showing the conclusion is logically inescapable (and I want you to explain it better so I learn more of how you come to your opinions, and so that others can work with the internals of your ideas, usually more #2 than #1)
sometimes I think you’re obviously incorrect and going to respond as though you were in the previous state, because you’re in the percentage of the time where you’re inaccurate, and as such your reasoning has failed you and I’m trying to appeal to a higher precision of reasoning to get you to check
sometimes I’m wrong about whether you’re wrong and in those cases in order to convince me you need to be more precise, constructing your claim out of parts where each individual reasoning step is made of easier-to-force parts, closer to proof
keeping in mind proof might be scientific rather than logical, but is still a far higher standard of rigor than “I have a hypothesis which seems obviously true and is totally gonna be easy to test and show because duh and anyone who doesn’t believe me obviously has no research taste” even when that sentence is said by someone with very good research taste
on the object level: whether humans generalize their values depends heavily on what you mean by “generalize”. in the sense I care about, humans are the only valid source of generalization of their values, but humans taken in isolation are insufficient to specify how their values should generalize; the core of the problem is figuring out which of the ways to run humans forward is the one that is most naturally the way to generalize humans. I think it needs to involve, among other things, reliably running a particular human at a particular time forward, rather than a mixture of humans. possibly we can nail down how to identify a particular human at a particular time with compmech (this is a hypothesis I have from some light but non-thorough and not-enough-to-have-solved-it engagement with the math; maybe someone who does it full time will think I’m obviously incorrect).
Lot of ‘welcome to my world’ vibes reading your self reports here, especially the ’50 different people have 75 different objections for a mix of good, bad and deeply stupid reasons, and require 100 different responses, some of which are very long, and it takes a back-and-forth to figure out which one, and you can’t possibly just list everything’ and so on, and that’s without getting into actually interesting branches and the places where you might be wrong or learn something, etc.
So to take your example, which seems like a good one:
Humans don’t generalize their values out of distribution. I affirm this not as strictly fully true, but on the level of ‘this is far closer to true and generative of a superior world model than its negation’ and ‘if you meditate on this sentence you may become [more] enlightened.’
I too have noticed that people seem to think that they do so generalize in ways they very much don’t, and this leads to a lot of rather false conclusions.
I also notice that I’m not convinced we are thinking about the sentence all that similarly, in ways that could end up being pretty load bearing. Stuff gets complicated.
I think that when you say the statement is ‘trivially’ true you are wrong about that, or at least holding people to unrealistic standards of epistemics? And that a version of this mistake is part of the problem. At least from me (I presume from others too) you get a very different reaction from saying each of:
Humans don’t generalize their values out of distribution. (let this be [X]).
Statement treating [X] as in-context common knowledge.
It is trivially true that [X] (said explicitly), or ‘obviously’ [X], or similar.
I believe that [X] or am very confident that [X]. (without explaining why you believe this)
I believe that [X] or am very confident that [X], but it is difficult for me to explain/justify.
And so on. I am very deliberate, or try to be, on which one I say in any given spot, even at the cost of a bunch of additional words.
Another note is that I think in spots like this you basically do have to say this even if the subject already knows it, both to establish common knowledge and to make clear you are basing your argument on it, even if only to orient them that this is where you are reasoning from. So it was a helpful statement to say and a good use of a sentence.
I see that you get disagreement votes when you say this on LW, but the comments don’t end up with negative karma or anything. I can see how that can be read as ‘punishment’ but I think that’s the system working as intended, and I don’t know what a better one would be?
In general, I think if you have a bunch of load-bearing statements where you are very confident they are true but people typically think the statement is false and you can’t make an explicit case for them (either because you don’t have that kind of time/space, or because you don’t know how), then the most helpful thing to do is to tell the other person the thing is load bearing, and gesture towards it and why you believe it, but be clear you can’t justify it. You can also look for arguments that reach the same conclusion without it—often true things are highly overdetermined so you can get a bunch of your evidence ‘thrown out of court’ and still be fine, even if that sucks.
Do you think maybe rationalists are spending too much effort attempting to saturate the dialogue tree (probably not effective at winning people over) versus improving the presentation of the core argument for an AI moratorium?
Smart people don’t want to see the 1000th response on whether AI actually could kill everyone. At this point we’re convinced. Admittedly, not literally all of us, but those of us who are not yet convinced are not going to become suddenly enlightened by Yudkowsky’s x.com response to some particularly moronic variation of an objection he already responded to 20 years ago. (Why does he do this? Does he think it has any kind of positive impact?)
A much better use of time would be to work on an article which presents the solid version of the argument for an AI moratorium. I.e., not an introductory text or article in Time Magazine, and not an article targeted at people he clearly thinks are just extremely stupid relative to him, where he rants for 10,000 words trying to drive home a relatively simple point. But rather an argument in a format that doesn’t necessitate a weak or incomplete presentation.
I and many other smart people want to see the solid version of the argument, without the gaping holes which are excusable in popular work and rants but inexcusable in rational discourse. This page does not exist! You want a moratorium, tell us exactly why we should agree! Having a solid argument is what ultimately matters in intellectual progress. Everything else is window dressing. If you have a solid argument, great! Please show it to me.
My guess is that on the margin more time should be spent improving the core messaging versus saturating the dialogue tree, on many AI questions, if you combine effort across everyone.
We cannot offer anything to the ASI, so it will have no reasons to keep us around aside from ethical ones.
Nor can we ensure that an ASI which decided to commit genocide would fail to do it.
We don’t know a way to create an ASI and infuse ethics into it. SOTA alignment methods have major problems, which are best illustrated by sycophancy and by LLMs supporting clearly delirious users.[1] OpenAI’s Model Spec explicitly prohibited[2] sycophancy, and one of Claude’s Commandments is “Choose the response that is least intended to build a relationship with the user.” And yet none of this prevented LLMs from becoming sycophantic. Apparently, the only known non-sycophantic model is KimiK2.
KimiK2 is a Chinese model created by a new team, and that team is the only one that guessed one should rely on RLVR and self-critique instead of bias-inducing RLHF. We can’t exclude the possibility that Kimi’s success is more due to luck than to actual thinking about sycophancy and RLHF.
Strictly speaking, Claude Sonnet 4, which was red-teamed in Tim Hua’s experiment, is second best at pushing back, after KimiK2. Tim remarks that Claude sucks at the Spiral Bench because the personas in his experiment, unlike those in the Spiral Bench, are supposed to be under stress.
Strictly speaking, the prohibition is a User-level instruction, which arguably means that it can be overridden at the user’s request. But GPT-4o was overly sycophantic without users instructing it to be.
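As a purely illustrative aside on the RLVR vs. RLHF distinction drawn above, here is a minimal sketch of the two reward signals; `preference_model` and `verifier` are hypothetical stand-ins, not any lab’s actual components:

```python
# Illustrative sketch only; `preference_model` and `verifier` are
# hypothetical stand-ins, not objects from any real library.

def rlhf_reward(response: str, preference_model) -> float:
    # RLHF: reward comes from a model trained on human preference ratings.
    # If raters systematically favor agreeable, flattering answers, the
    # trained policy gets pushed toward sycophancy.
    return preference_model.score(response)

def rlvr_reward(response: str, verifier) -> float:
    # RLVR: reward comes from an automatic, verifiable check (unit tests,
    # exact-answer matching, etc.), so the signal does not depend on
    # whether a human happened to like the tone of the response.
    return 1.0 if verifier(response) else 0.0
```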
On Janus comparisons: I do model you as pretty distinct from them in underlying beliefs although I don’t pretend to have a great model of either belief set. Reaction expectations are similarly correlated but distinct. I imagine they’d say that they answer good faith questions too, and often that’s true (e.g. when I do ask Janus a question I have a ~100% helpful answer rate, but that’s with me having a v high bar for asking).