Is there currently any place for possibly stupid or naive questions about alignment? I don’t wish to bother people with questions that have probably been addressed, but I don’t always know where to look for existing approaches to a question I have.
Vadim Fomin
What is the connection between the concepts of intelligence and optimization?
I see that optimization implies intelligence (that optimizing a sufficiently hard task sufficiently well requires sufficient intelligence). But it feels like the case for existential risk from superintelligence depends on the idea that intelligence is optimization, or implies optimization, or something like that. (If I remember correctly, sometimes people suggest creating “non-agentic AI”, or “AI with no goals/utility”, and EY says that they are trying to invent non-wet water or something like that?)
It makes sense if we describe intelligence as a general problem-solving ability. But intuitively, intelligence is also about making good models of the world, which sounds like it could be done in a non-agentic / non-optimizing way. One example that throws me off is Solomonoff induction—it feels like a superintelligence, and indeed contains good models of the world, but doesn’t seem to be pushing toward any specific state of the world.
I know there’s the concept of AIXI, basically an agent armed with Solomonoff induction as its epistemology, but it feels like agency is added separately. Like, there’s the intelligence part (Solomonoff induction) and the agency part, and they are clearly different, rather than agency automatically popping out because the system is superintelligent.
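To make the separation I have in mind concrete (reproducing the standard definitions roughly from memory, so the notation may not match Hutter’s exactly): Solomonoff induction only assigns probabilities to observation sequences,

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)},$$

which is pure prediction and never selects anything. AIXI then bolts an expectimax over actions on top of that same prior:

$$a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_t + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.$$

The inner sum is the Solomonoff-style prediction; the alternating maxes over actions and the reward term are the agency, and they look like a separate layer rather than something that falls out of the prediction by itself.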
Hey there,
I was showing this post to a friend who’s into OpenBSD. He felt that it was not a good description, and wanted me to post his comment. I’m curious what you guys think about this specific case and what it does to the point of the post as a whole. Here’s his comment: