Whaliezer Seacowsky, founder of the Marine Intelligence Research Institute, is giving a lecture on the dangers of AI (Ape Intelligence).
“Apes are becoming more intelligent at a faster rate than we are. At this pace, within a very short timeframe they will develop greater-than-whale intelligence. This will almost certainly have terrible consequences for all other life on the planet, including us.”
Codney Brooks, a skeptic of AI x-risk, scoffs: “Oh come now. Predictions of risk from AI are vastly overblown. *Captain-Ahab, or, The Human* is a science fiction novel! We have no reason to expect smarter-than-whale AI, if such a thing is even possible, to hate whalekind. And they are clearly nowhere near to developing generalized capabilities that could rival ours—their attempts at imitating our language are pathetic, and the deepest an ape has ever dived is a two-digit number of meters! We could simply dive a kilometer under the surface and they’d have no way of affecting us. Not to mention that they’re largely confined to land!”
Whaliezer replies: “The AI doesn’t need to hate us in order to be dangerous to us. We are, after all, made of blubber that they can use for other purposes. Simple goals like obtaining calories, creating light, or transporting themselves from one bit of dry land to another across the ocean could cause inconceivable harm—even if harming us is no part of their goals, simply as a side effect!”
One audience member turns to another. “Creating light? What, we’re afraid they’re going to evolve a phosphorescent organ and that’s going to be dangerous somehow? I don’t know, the danger of digital intelligences seems really overblown. I think we could gain a lot from cooperating with them to hunt fish. I say we keep giving them nootropics, and if this does end up becoming dangerous at some point in the future, we deal with the problem then.”
There are four key differences between this parable and the current AI situation that I think make this perspective pretty outdated:
1. AIs are made out of ML, so we have very fine-grained control over how we train them and modify them for deployment, unlike animals, which have unpredictable biological drives and long feedback loops.
2. By now, AIs are obviously developing generalized capabilities. Rather than arguments over whether AIs will ever be superintelligent, the bulk of the discourse is over whether they will supercharge economic growth or cause massive job loss, and how quickly.
3. There are at least 10 companies that could build superintelligence within roughly 10 years, and their CEOs are all high on motivated reasoning, so stopping is infeasible.
4. Current evidence points to takeoff being continuous and merely very fast—even automating AI R&D won’t cause the hockey-stick graph that human civilization had.
Re continuous takeoff: you could argue that human takeoff was itself continuous, or only mildly discontinuous, just very fast. And at any rate, AI takeoff could well be discontinuous relative to your OODA loop and the variables you were tracking. So unfortunately I think the continuity of the takeoff is less relevant than people thought (it does matter for alignment, but not for the governance of AI):
https://www.lesswrong.com/posts/cHJxSJ4jBmBRGtbaE/continuity-assumptions#DHvagFyKb9hiwJKRC
https://www.lesswrong.com/posts/cHJxSJ4jBmBRGtbaE/continuity-assumptions#tXKnoooh6h8Cj8Tpx
Agree that AI takeoff could likely be faster than our OODA loop.
re: 1) I don’t think we do have fine-grained control over the outcomes of training LLMs and other ML systems, which is what really matters. See the recent examples of emergent self-preservation behavior.
re: 2) I’m saying that I think those arguments are distractions from the much more important question of x-risk. But sure, this metaphor doesn’t address economic impact beyond “I think we could gain a lot from cooperating with them to hunt fish.”
re: 3) I’m not sure I see the relevance. The unnamed audience member who says “I say we keep giving them nootropics” is meant to represent the AI researchers who aren’t actively involving themselves in the x-risk debate and who keep making progress on AI capabilities while the arguers talk to each other.
re: 4) It sounds like you’re comparing something like a log graph of human capability to a linear graph of AI capability. That is, I don’t think that AI will take tens of thousands of years to develop the way human civilization did. My 50% confidence interval on when the Singularity will happen is 2026-2031, and my 95% interval only extends to maybe 2100. I expect there to be more progress in AI development in 2025-2026 than in 1980-2020.
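Purely to illustrate the growth-rate claim in that last point (a toy model of my own, not anything from the exchange above): if you treat “AI capability” as a bare exponential with some doubling time, the “more progress in 2025-2026 than in 1980-2020” comparison reduces to how short that doubling time is. A minimal sketch, where the exponential form, the 1980 normalization, and the candidate doubling times are all my assumptions:

```python
# Toy model (assumption, not from the discussion above): capability is a pure
# exponential, normalized to 1.0 in 1980 and doubling every `doubling_time` years.

def capability(year, doubling_time, base_year=1980):
    return 2 ** ((year - base_year) / doubling_time)

def progress(start, end, doubling_time):
    """Capability gained between `start` and `end` under the toy exponential."""
    return capability(end, doubling_time) - capability(start, doubling_time)

for T in (1, 2, 3, 4):  # hypothetical doubling times, in years
    recent = progress(2025, 2026, T)
    historical = progress(1980, 2020, T)
    print(f"T={T}yr: 2025-2026 gain {recent:.3g} vs 1980-2020 gain {historical:.3g}"
          f" -> recent is larger: {recent > historical}")
```

In this toy model, doubling times of one or two years make the single 2025-2026 increment exceed the entire 1980-2020 increment, while three or four years do not; that is the log-versus-linear point reduced to a single comparison.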