Typo corrected, thanks for that.
I agree, it’s more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.
In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal. I’m just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety—both of which have only a small-ish chance of occurring. With that in mind, do you have an alternative suggestion for the title?
> In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal.
Yeah, I don’t actually disagree. It’s just that, if someone asks “how could an AI actually be dangerous? It’s just on a computer” and I respond with “here, look at this cool story someone wrote which answers that question”, they might go “Aha, you think it will be developed on a laptop. This is clearly nonsense, therefore I now dismiss your case entirely”. I think you wanna bend over backwards to not make misleading statements if you argue for the dangers-from-AI-is-a-real-thing side.
You’re of course correct that any scenario with this level of detail is necessarily extremely unlikely, but I think that will be more obvious for other details like how exactly the AI reasons than it is for the above. I don’t see anyone going “aha, the AI reasoned that X→Y→Z which is clearly implausible because it’s specific, therefore I won’t take this seriously”.
If I had written this, I would add a disclaimer rather than change the title. The disclaimer could also explain that “paperclips” is a stand-in for any utility function that maximizes just some particular physical thing.
That’s a good point, I’ll write up a brief explanation/disclaimer and put it in as a footnote.
There are some additional it’s/its mistakes in your text, e.g. here:
> I run a denial of service attack on it’s server, cutting it off from the web before it can get it’s copies running.