I read this as effectively saying that paperclip maximizers/Mickey Mouse maximizers would not permanently populate the universe because self-copiers would be better at maximizing their goals. Which makes sense: the paperclips Clippy produces don’t produce more paperclips, but the copies the self-copier creates do copy themselves. So it’s quite possibly the difference between polynomial and exponential growth.
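To make the growth gap concrete, here is a toy calculation (purely illustrative; the rates, step count, and the Python framing are my own assumptions, not anything from the original comment): a fixed population of paperclip makers accumulates paperclips linearly, while self-copiers that duplicate each step grow exponentially.

```python
# Toy comparison with made-up numbers: inert products vs. self-copying ones.

STEPS = 20
CLIPS_PER_MAKER = 100   # assumed paperclips produced per maker per step

makers = 1              # Clippy alone; its paperclips never make more paperclips
clips = 0
copiers = 1             # a self-copier whose copies also copy themselves

for _ in range(STEPS):
    clips += makers * CLIPS_PER_MAKER   # linear growth: +100 per step
    copiers *= 2                        # exponential growth: doubles per step

print(f"after {STEPS} steps: {clips} paperclips vs {copiers} copiers")
# -> after 20 steps: 2000 paperclips vs 1048576 copiers
```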
So Clippy probably is unrealistic. Not that reproduction-maximizing AIs are any better for humanity.
There is nothing stopping a paperclip maximizer from simply behaving like a self-copier, if that works better. And then once it “wins,” it can make the paperclips.
So I think the whole notion makes very little sense.
Paperclip maximization doesn’t seem like a stable goal, though I could be wrong about that. Let’s say Clippy reproduces to create a bunch of Clippys trying to maximize total paperclips (let’s call this collective ClippyBorg). If one of ClippyBorg’s subClippys had some mutation that changed its goal set to one better suited for reproduction, it would outcompete the other Clippys. Now ClippyBorg could destroy cancerClippy, but whether it would succeed every time is an open question.
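A crude simulation (entirely hypothetical parameters, sketched only to illustrate the dynamic) shows how quickly a reproduction-only mutant can swamp the paperclip-focused population, even starting from a single individual:

```python
# Crude illustration with made-up growth rates: subClippys that split effort
# between paperclips and copying vs. a cancerClippy that only copies itself.

clippys = 1_000_000    # ClippyBorg members still devoting effort to paperclips
mutants = 1            # cancerClippy lineage: all effort goes to reproduction

CLIPPY_GROWTH = 1.5    # assumed per-step replication factor for subClippys
MUTANT_GROWTH = 2.0    # higher, since no effort is diverted to paperclips

steps = 0
while mutants < clippys:
    clippys *= CLIPPY_GROWTH
    mutants *= MUTANT_GROWTH
    steps += 1

print(f"mutant lineage outnumbers ClippyBorg after {steps} steps")
# with these made-up rates, the loop exits after 49 steps
```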
One additional confounding factor: if ClippyBorg’s subClippys are identical, they will not occupy every available niche optimally and could well be outcompeted by dumber but more adaptable agents (much as humans don’t completely dominate bacteria, despite vastly greater intelligence, because bacteria are far more adaptable).
A self-copying Clippy would have the handicap of having to retain its desire to maximize paperclips, something other self-copiers wouldn’t have to do. I think the notion of Clippys not dominating does make sense, even if it’s not necessarily right. (My personal intuition is that whichever replicating optimizer with a stable goal set begins expansion first will dominate.)
A paperclip maximizer can create self-reproducing paperclip makers.
It’s quite imaginable that somewhere in the universe there are organisms which either resemble paperclips (maybe an intelligent gastropod with a paperclip-shaped shell) or which have a fundamental use for paperclip-like artefacts (they lay their eggs in a hardened tunnel dug in a paperclip shape). So while it is outlandish to imagine that the first AGI made by human beings will end up fetishizing an object which in our context is a useful but minor artefact, what we would call a “paperclip maximizer” might have a much higher probability of arising from that species, as a degenerated expression of some of its basic impulses.
The real question is, how likely is that, or indeed, how likely is any scenario in which superintelligence is employed to convert as much of the universe as possible to “X”—remembering that “interstellar civilizations populated by beings experiencing growth, choice, and joy” is also a possible value of X.
It would seem that universe-converting X-maximizers are a somewhat likely, but not an inevitable, outcome of a naturally intelligent species experiencing a technological singularity. But we don’t know how likely that is, and we don’t know what possible Xs are likely.