Should an “ask dumb questions about AGI safety” thread be recurring? Surely people will continue to come up with more questions in the years to come, and the same dynamics outlined in the OP will repeat. Perhaps this post could continue to be the go-to page, but it would become enormous. On the other hand, recurring posts would somewhat lose the FAQ function. Perhaps both recurring posts and a FAQ post?
Why not shoot for something less ambitious?
I’ll give myself a provisional answer. I’m not sure if it satisfies me, but it’s enough to make me pause: Anything short of CEV might leave open an unacceptably high chance of fates worse than death.
One is thinking about how to build aligned intelligence in a machine; the other is thinking about how to build aligned intelligence in humans and groups of humans.
Is this true, though? Teaching rationality improves people’s capabilities but shouldn’t necessarily align them. People are not AIs, but their morality still doesn’t need to converge under reflection.
And even if the argument is “people are already aligned with people”, you are still working on capabilities when dealing with people and on alignment when dealing with AIs.
To me, teaching rationality looks more similar to AI capabilities research than to AI alignment research.
Ah, I see your point now, and it makes sense. If I had to summarize it (and reword it in a way that appeals to my intuition), I’d say that the choice of seeking the truth is not just about “this helps me,” but about “this is what I want/ought to do/choose”. Not just about capabilities. I don’t think I disagree at this point, although perhaps I should think about it more.
I suspected that my question would be met with something at least a bit removed, inference-wise, from where I was starting: my model seemed like the most natural one, so I expected someone who routinely thinks about this topic to have updated away from it rather than never having thought about it.
Regarding the last paragraph: I already believed your line “increasing a person’s ability to see and reason and care (vs rationalizing and blaming-to-distract-themselves and so on) probably helps with ethical conduct.” It didn’t seem to bear on the argument in this case because it looks like you are getting alignment for free by improving capabilities (if you reason with my previous model; otherwise, it looks like your truth-alignment efforts somehow spill over to other values, which is still getting something for free, due to how humans are built, I’d guess).
Also… now that I think about it, what Harry was doing with Draco in HPMOR looks a lot like aligning rather than improving capabilities, and there were good spillover effects (which were almost the whole point in that case, perhaps).
This looks like something that would also be useful for alignment orgs, if they want to organize their research in silos, as Yudkowsky often suggests (assuming they haven’t already implemented systems like this one).
Can someone explain to me why Pasha’s posts are downvoted so much? I don’t think they are great, but this level of negative karma seems disproportionate to me.
Why is research into decision theories relevant to alignment?
Thanks for the answer. It clarifies a little bit, but I still feel like I don’t fully grasp its relevance to alignment. I have the impression that there’s more to the story than just that?
I publish posts like this one to clarify my doubts about alignment. I don’t pay attention to whether I’m beating a dead horse or if there’s previous literature about my questions or ideas. Do you think this is an OK practice? One pro is that people like me learn faster, and one con is that it may pollute the site with lower-quality posts.
I use Eliezer Yudkowsky in my example because it makes the most sense. Don’t read anything else into it, please.
The last Twitter reply links to a talk from MIRI which I haven’t watched. I wouldn’t be surprised if MIRI also used this metaphor in the past, but I can’t recall examples off the top of my head right now.
Do you mean that no one will actually create exactly a paperclip maximizer, or that no one will create any agent of that kind, i.e. with goals such as “collect stamps” or “generate images”? Because I think Eliezer meant to object to that class of examples, rather than only that specific one, but I’m not sure.
Yes, this makes a lot of sense, thank you.
I agree with you here, although something like “predict the next token” seems more and more likely. That said, I’m not sure whether this is in the same class of goals as paperclip maximizing in this context, or whether the kind of failure it could lead to would be similar.
For some reason I don’t get e-mail notifications when someone replies to my posts or comments. My e-mail is verified and I’ve set all notifications to “immediately”. Here’s what my e-mail settings look like:
No, I mean “humans continue to evolve genetically, and they never start self-modifying in a way that makes evolution impossible (e.g., by becoming emulations).”
I am trying to figure out what the relation is between “alignment with evolution” and “short-term thinking”. Like, imagine that some people get hit by magical space rays which make them fully “aligned with evolution”. What exactly would such people do?
I think they would become consequentialists smart enough that they could actually act to maximize inclusive genetic fitness. I think Thou Art Godshatter is convincing.
But what if the art or the philosophy makes it easier to get laid? So maybe in that case they would do the art/philosophy but feel no intrinsic pleasure from doing it; it would all be purely instrumental, and they would be willing to throw it all away if, on second thought, they found out that it is actually not maximizing reproduction?
Yeah that’s what I would expect.
How would they even figure out what is the reproduction-optimal thing to do? Would they spend some time trying to figure out the world? (The time that could otherwise be spent trying to get laid?) Or perhaps, as a result of sufficiently long evolution, they would already do the optimal thing instinctively? (Because those who had the right instincts and followed them, outcompeted those who spent too much time thinking?)
I doubt that being governed by instincts can outperform a sufficiently smart agent reasoning from scratch, given a sufficiently complicated environment. Instincts are just heuristics, after all...
But would that mean that the environment is fixed? Especially if the most important part of the environment is other people? Maybe humanity would get locked into an equilibrium where the optimal strategy is found, and everyone who tries doing something else is outcompeted; afterwards, those who do the optimal strategy more instinctively would outcompete those who need to figure it out. What would such an equilibrium look like?
Ohhh interesting, I have no idea… it seems plausible that it could happen though!
I’m going to re-ask all my questions that I don’t think have received a satisfactory answer. Some of them are probably basic, some other maybe less so:
As a failure mode of specification gaming, agents might modify their own goals.
As a convergent instrumental goal, agents want to prevent their goals from being modified.
I think I know how to resolve this apparent contradiction, but I’d like to see other people’s opinions about it.
Why is CEV so difficult? And if CEV is impossible to learn first try, why not shoot for something less ambitious? Value is fragile, OK, but aren’t there easier utopias?
Many humans would be able to distinguish utopia from dystopia if they saw them, and humanity’s only advantage over an AI is that the brain has “evolution presets”.
Humans are relatively dumb, so why can’t even a relatively dumb AI learn the same ability to distinguish utopias from dystopias?
To anyone reading: don’t interpret these questions as disagreement. If someone doesn’t, for example, understand a mathematical proof, they might express disagreement with the proof while knowing full well that they haven’t discovered a mistake in it and that they are simply confused.