Thanks for that. However, my definition of “intelligence” would be “the ability to find solutions for complex decision problems”. It’s unclear whether the ability of slime molds to find the shortest path through a maze or organize in seemingly “intelligent” ways has anything to do with intelligence, although the underlying principles may be similar.
We don’t want to argue over definitions, so I can change mine, at least for this discussion. However, I want to point out that, by your own definition, slime molds are very much intelligent: you can take any instance of an NP-complete problem, reduce it to an instance of the problem of finding the most efficient network of roads connecting corn-flake dots, and the mold will probably approximately solve it (a sketch of the mechanism is below). Would you say your pocket calculator doesn’t compute because it has no idea it’s computing?
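To make the mechanism concrete, here is a minimal sketch of Physarum-style flow-reinforcement dynamics, in the spirit of the Tero–Kobayashi–Nakagaki model of slime-mold path finding. The four-node graph, step count, and reinforcement rate are illustrative assumptions, not anything from this thread: flow is routed through a network of tubes, each tube’s conductivity is nudged toward the flow it carries, and the shortest route ends up carrying everything.

```python
import numpy as np

# Hypothetical toy graph: nodes 0..3, edges as (u, v, length).
# Two routes from node 0 to node 3: a short one (0-1-3, total length 2)
# and a long one (0-2-3, total length 4).
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 2.0), (2, 3, 2.0)]
n, source, sink = 4, 0, 3

D = np.ones(len(edges))  # tube conductivities, all equal at the start
for _ in range(100):
    # Weighted graph Laplacian built from the current conductivities.
    A = np.zeros((n, n))
    for k, (u, v, L) in enumerate(edges):
        g = D[k] / L
        A[u, u] += g
        A[v, v] += g
        A[u, v] -= g
        A[v, u] -= g
    b = np.zeros(n)
    b[source], b[sink] = 1.0, -1.0  # unit flow in at source, out at sink
    # Ground the sink's pressure at 0 so the linear system has a unique solution.
    keep = np.arange(n) != sink
    p = np.zeros(n)
    p[keep] = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
    # Flow through each tube, then reinforce conductivity toward |flow|.
    Q = np.array([D[k] / L * (p[u] - p[v]) for k, (u, v, L) in enumerate(edges)])
    D += 0.5 * (np.abs(Q) - D)

for (u, v, L), d in zip(edges, D):
    print(f"edge {u}-{v} (length {L}): conductivity {d:.3f}")
```

On this toy graph the two edges of the length-2 route converge to conductivity ≈ 1 while the length-4 route decays toward 0. No part of the system “knows” it is solving a shortest-path problem, which is the pocket-calculator point in miniature.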
I haven’t read the article you linked in full, but at first glance, it seems to refer to consciousness, not intelligence.
That depends on you. Say I ask a person with a split brain whether they’d vote blue or red. Is that intelligence, or is that consciousness? In any case, the point is: sometimes I can interrogate the left or right hemisphere specifically, and sometimes they disagree. So we don’t have a unique structure that can act as a Cartesian theater, which means our perception that our thoughts come from a single agent is, well, a perception. Does that invalidate any paper that starts from this single-mind assumption? Not necessarily: models are always wrong, and sometimes useful despite being wrong. But each time I read one, that assumption seems to break something important in the analysis.
Maybe that is a key to understanding the difference in thinking between me, Melanie Mitchell, and possibly you: if Mitchell assumes that, for AI to present an x-risk, it has to be conscious in the way we humans are, that would explain her low estimate for achieving this anytime soon. I, however, don’t believe that.
Maybe for Mitchell; I don’t know her personally. But no, it’s the opposite for me: I suspect building a conscious AI might be the easiest way to keep it as interpretable as a human mind, and I suspect most agents with random values would be constantly crashing, like LLMs you don’t keep in check. As for x-risks from AI, I fear civil war driven by the polarization of minds much more.
To become uncontrollable and develop instrumental goals, an advanced AI would probably need what Joseph Carlsmith calls “strategic awareness”—a world model that includes the AI itself as a part of its plan to achieve its goals.
That’s one of the things I tend to perceive as magical thinking, even while knowing the poster would likely say that perception is flawed. Let’s discuss that @magic.
@one reason (not from Mitchell) for questioning the validity of many works on x-risk from AI?