I’m Tamsin Leake, co-founder and head of research at Orthogonal, doing agent foundations.
Some people who are very concerned about suffering might be considering building an unaligned AI that kills everyone, just to avoid the risk of takeover by an AI aligned to values which want some people to suffer.
Let this be me being on the record saying: I believe the probability of {alignment to values that strongly diswant suffering for all moral patients} is high enough, and the probability of {alignment to values that want some moral patients to suffer} is low enough, that this action is not worth it.
I think this applies to approximately anyone who would read this post, including heads of major labs, in case they happen to read this post and are pursuing the strategy of killing everyone to reduce S-risk.
See also: how acausal trade helps in 1, 2, but I think this even without acausal trade.
*sigh* I wish people realized how useless it is to have money when the singularity happens. Either we die, or we get a utopia in which it’s pretty unlikely that pre-singularity wealth matters. What you want to maximize is not your wealth but your utility function, and you sure as hell are gonna get more from LDT handshakes with aligned superintelligences in saved worlds if you don’t help OpenAI reduce the number of saved worlds.
I believe that ChatGPT was not released with the expectation that it would become as popular as it did.
Well, even if that’s true, causing such an outcome by accident should still count as evidence of vast irresponsibility imo.
I’m surprised at people who seem to be updating only now about OpenAI being very irresponsible, rather than updating when they created a giant public competitive market for chatbots (which contains plenty of labs that don’t care about alignment at all), thereby reducing how long everyone has to solve alignment. I still parse that move as devastating the commons in order to make a quick buck.
I made guesses about my values a while ago, here.
but that this would be bad if the users aren’t one of “us”—you know, the good alignment researchers who want to use AI to take over the universe, totally unlike those evil capabilities researchers who want to use AI to produce economically valuable goods and services.
Rather, “us” — the good alignment researchers who will be careful at all about the long-term effects of our actions, unlike capabilities researchers who are happy to accelerate race dynamics and increase p(doom) if they can make a quick profit out of it in the short term.
I am a utilitarian and agree with your comment.
The intent of the post was:
- to make people weigh whether to publish or not, because I think some people are not weighing this enough
- to give some arguments in favor of “you might be systematically overestimating the utility of publishing”, because I think some people are doing that
I agree people should take the utilitarianly optimal action, I just think they’re doing the utilitarian calculus wrong or not doing the calculus at all.
I think research that is mostly about outer alignment (what to point the AI to) rather than inner alignment (how to point the AI to it) tends to be good — quantilizers, corrigibility, QACI, decision theory, embedded agency, indirect normativity, infra bayesianism, things like that. Though I could see some of those backfiring the way RLHF did — in the hands of a very irresponsible org, even not very capabilities-related research can be used to accelerate timelines and increase race dynamics if the org doing it thinks it can get a quick buck out of it.
I don’t buy the argument that safety researchers have unusually good ideas/research compared to capability researchers at top labs
I don’t think this particularly needs to be true for my point to hold; they only need to have reasonably good ideas/research, not unusually good, for them to publish less to be a positive thing.
That said, if someone hasn’t thought at all about concepts like “differentially advancing safety” or “capabilities externalities,” then reading this post would probably be helpful, and I’d endorse thinking about those issues.
That’s a lot of what I intend to do with this post, yes. I think a lot of people do not think about the impact of publishing very much and just blurt-out/publish things as a default action, and I would like them to think about their actions more.
One straightforward alternative is to just not do that; I agree it’s not very satisfying but it should still be the action that’s pursued if it’s the one that has more utility.
I wish I had better alternatives, but I don’t. But the null action is an alternative.
Please stop publishing ideas/insights/research about AI
It certainly is possible! In more decision-theoretic terms, I’d describe this as “it sure would suck if agents in my reference class just optimized for their own happiness; it seems like the instrumental thing for agents in my reference class to do is maximize for everyone’s happiness”. Which is probably correct!
But as per my post, I’d describe this position as “not intrinsically altruistic” — you’re optimizing for everyone’s happiness because “it sure would suck if agents in my reference class didn’t do that”, not because you intrinsically value that everyone be happy, regardless of reasoning about agents and reference classes and veils of ignorance.
decision theory is no substitute for utility function
some people, upon learning about decision theories such as LDT and how it cooperates on problems such as the prisoner’s dilemma, end up believing the following:
my utility function is about what i want for just me; but i’m altruistic (/egalitarian/cosmopolitan/pro-fairness/etc) because decision theory says i should cooperate with other agents. decision-theoretic cooperation is the true name of altruism.
it’s possible that this is true for some people, but in general i expect that to be a mistaken analysis of their values.
decision theory cooperates with agents relative to how much power they have, and only when it’s instrumental.
in my opinion, real altruism (/egalitarianism/cosmopolitanism/fairness/etc) should be in the utility function which the decision theory is instrumental to. i actually intrinsically care about others; i don’t just care about others instrumentally because it helps me somehow.
some important ways in which my utility-function-altruism differs from decision-theoretic cooperation include:
i care about people weighed by moral patienthood, decision theory only cares about agents weighed by negotiation power. if an alien superintelligence is very powerful but isn’t a moral patient, then i will only cooperate with it instrumentally (for example because i care about the alien moral patients that it has been in contact with); if cooperating with it doesn’t help my utility function (which, again, includes altruism towards aliens) then i won’t cooperate with that alien superintelligence. corollarily, i will take actions that cause nice things to happen to people even if they’re very impoverished (and thus don’t have much LDT negotiation power) and it doesn’t help any other aspect of my utility function than just the fact that i value that they’re okay.
if i can switch to a better decision theory, or if fucking over some non-moral-patienty agents helps me somehow, then i’ll happily do that; i don’t have goal-content integrity about my decision theory. i do have goal-content integrity about my utility function: i don’t want to become someone who wants moral patients to unconsentingly-die or suffer, for example.
there seems to be a sense in which some decision theories are better than others, because they’re ultimately instrumental to one’s utility function. utility functions, however, don’t have an objective measure for how good they are. hence, moral anti-realism is true: there isn’t a Single Correct Utility Function.
decision theory is instrumental; the utility function is where the actual intrinsic/axiomatic/terminal goals/values/preferences are stored. usually, i also interpret “morality” and “ethics” as “terminal values”, since most of the stuff that those seem to care about looks like terminal values to me. for example, i will want fairness between moral patients intrinsically, not just because my decision theory says that that’s instrumental to me somehow.
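as a toy sketch of the contrast above (my own hypothetical example, nothing from any real decision theory library): a selfish agent with an LDT-flavored decision rule cooperates in a one-shot prisoner’s dilemma only when the opponent’s decision is logically correlated with its own — i.e. only when cooperation is instrumental — while an agent whose utility function terminally values the other’s payoff cooperates even against an uncorrelated, powerless opponent.

```python
# toy one-shot prisoner's dilemma: PAYOFFS[(my_action, their_action)]
# gives (my_payoff, their_payoff). all names here are hypothetical.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def selfish_utility(mine, theirs):
    """a utility function that only values one's own payoff."""
    return mine

def altruistic_utility(mine, theirs):
    """a utility function that terminally values the other's payoff too."""
    return mine + theirs

def ldt_choice(utility, opponent_mirrors_me):
    """LDT-flavored choice rule (simplified): if the opponent runs the
    same algorithm, my choice logically determines both actions; if not,
    their action is independent and i pick the action that is best in
    the worst case (for these payoffs, dominance and maximin agree)."""
    if opponent_mirrors_me:
        # my choice fixes both actions: compare (C,C) against (D,D).
        return max("CD", key=lambda a: utility(*PAYOFFS[(a, a)]))
    # opponent's action is independent of mine: worst-case comparison.
    return max("CD", key=lambda a: min(utility(*PAYOFFS[(a, b)]) for b in "CD"))

# selfish agent: cooperation only when it's instrumental (mirror case).
print(ldt_choice(selfish_utility, opponent_mirrors_me=True))   # cooperates
print(ldt_choice(selfish_utility, opponent_mirrors_me=False))  # defects

# utility-function altruist: cooperates even with no logical leverage.
print(ldt_choice(altruistic_utility, opponent_mirrors_me=False))  # cooperates
```

the point of the sketch is that the selfish agent’s “cooperation” evaporates exactly when the opponent has no negotiation power over it, whereas the altruistic utility function produces cooperation regardless — which is the sense in which the altruism lives in the utility function, not in the decision theory.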
I would feel better about this if there was something closer to (1) on which to discuss what is probably the most important topic in history (AI alignment). But noted.
I’m generally not a fan of increasing the amount of illegible selection effects.
On the privacy side, can lesswrong guarantee that, if I never click on Recommended, then recombee will never see an (even anonymized) trace of what I browse on lesswrong?
Here the thing that I’m calling evil is pursuing short-term profits at the cost of non-negligibly higher risk that everyone dies.
Regardless of how good their alignment plans are, the thing that makes OpenAI unambiguously evil is that they created a strongly marketed public product and, as a result, caused a lot of public excitement about AI, and thus lots of other AI capabilities organizations were created that are completely dismissive of safety.
There’s just no good reason to do that, except short-term greed at the cost of higher probability that everyone (including people at OpenAI) dies.
(No, “you need huge profits to solve alignment” isn’t a good excuse — we had nowhere near exhausted the alignment research that can be done without huge profits.)
There’s also the case of harmful warning shots: for example, if it turns out that, upon seeing an AI do a scary but impressive thing, enough people/orgs/states go “woah, AI is powerful, I should make one!” or “I guess we’re doomed anyways, might as well stop thinking about safety and just enjoy making profit with AI while we’re still alive”, to offset the positive effect. This is totally the kind of thing that could be the case in our civilization.
There could be a difference but only after a certain point in time, which you’re trying to predict / plan for.
Considering how long it took me to get that by this you mean “not dual-use”, I expect some others just won’t get it.