I’m more optimistic about survival than I necessarily am about good behavior on the part of the first AGIs (and I still hate the word “alignment”).
Intelligence is not necessarily all that powerful. There are limits on what you can achieve within any available space of action, no matter how smart you are.
Smart adversaries are indeed very dangerous, but people talk as though a “superintelligence” could destroy or remake the world instantly, basically just by wanting to. A lot of what I read here comes off more like hysteria than like a sound model of a threat.
… and the limitations are even more important if you have multiple goals and have to consider costs. The closer you get to totally “optimizing” any one goal, the more each further gain tends to cost in terms of your other goals. To some degree, that even includes just piling up general capabilities or resources if you don’t know how you’re going to use them.
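To make that concrete, here’s a toy two-goal model (my own illustration, made up for this comment, nothing more): scores on goals A and B are constrained to a circular frontier, so each extra point of A costs more and more of B as A approaches its maximum.

```python
import math

# Toy Pareto frontier: feasible scores satisfy a^2 + b^2 <= 1, so the
# best available b for a given a is sqrt(1 - a^2). Numbers are made up.
def best_b(a: float) -> float:
    return math.sqrt(1.0 - a * a)

# Marginal cost of pushing goal A, measured in units of goal B:
# |db/da| = a / sqrt(1 - a^2), which blows up as a -> 1.
for a in (0.10, 0.50, 0.90, 0.99, 0.999):
    cost = a / math.sqrt(1.0 - a * a)
    print(f"A = {a:5.3f}   best B = {best_b(a):.3f}   B lost per unit of A gained = {cost:.1f}")
```

The particular frontier doesn’t matter; any concave trade-off behaves this way. The last little bit of “optimizing” one goal eats disproportionately into everything else you care about.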
Computing power is limited, and the cost of a lot of things doesn’t even seem to scale linearly with problem size, let alone sublinearly.
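For illustration, take the extreme case, brute-force search, with some made-up round numbers: every added bit of problem size doubles the cost, so even a millionfold hardware increase barely moves what’s feasible.

```python
# Back-of-envelope, with assumed round numbers: exhaustive search over n
# binary choices costs 2**n evaluations.
evals_per_second = 1e9        # one machine, assumed
seconds_per_year = 3.15e7

def largest_feasible_n(total_evals: float) -> int:
    """Biggest n such that 2**n evaluations fit in the budget."""
    n = 0
    while 2.0 ** (n + 1) <= total_evals:
        n += 1
    return n

print(largest_feasible_n(evals_per_second * seconds_per_year))        # 54
print(largest_feasible_n(evals_per_second * seconds_per_year * 1e6))  # 74
# A millionfold increase in hardware buys about 20 more bits of problem size.
```

Plenty of real problems are “only” polynomial, but anything much worse than linear has the same flavor: hardware growth translates into much less capability growth than people intuit.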
The most important consequence: you can’t necessarily get all that smart, especially all that fast, because there just aren’t that many transistors or that much electricity or even that many distinguishable quantum states available.
The extreme: whenever I hear people talking about AIs, instantiated in the physical universe in which we exist, running huge numbers of faithful simulations of the thoughts and behaviors of humans or other AIs, in realistic environments no less, I wonder what they’ve been smoking. It’s just not gonna happen. But, again, that sort of thing gets a lot of uncritical acceptance around here.
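Some loose arithmetic on that one. Every number here is an order-of-magnitude guess I’m assuming for illustration, not a measured fact: brain-emulation estimates people throw around run from roughly 1e15 to 1e18 FLOP/s per human, and the world’s total computing capacity is often guessed to be somewhere around 1e21 FLOP/s.

```python
# Rough feasibility check on "huge numbers of faithful human simulations".
# Every figure below is an assumed order-of-magnitude guess.
world_compute_flops = 1e21     # total global compute, assumed
env_overhead = 10              # multiplier for a realistic environment, assumed

for brain_flops in (1e15, 1e18):   # assumed per-human emulation cost range
    per_human = brain_flops * env_overhead
    print(f"at {brain_flops:.0e} FLOP/s per brain: "
          f"{world_compute_flops / per_human:.0e} real-time humans max")
```

So even an AI that somehow commandeered every computer on Earth gets, on these guesses, somewhere between a hundred and a hundred thousand faithful real-time human sims, not “huge numbers” of them in realistic environments.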
> Smart adversaries are indeed very dangerous, but people talk as though a “superintelligence” could destroy or remake the world instantly, basically just by wanting to. A lot of what I read here comes off more like hysteria than like a sound model of a threat.
I think it’s pretty easy to argue that internet access ought to be sufficient, though it won’t literally be instant.
I agree that unrestricted Internet access is Bad(TM). Given the Internet, a completely unbounded intelligence could very probably cause massive havoc, essentially at an x-risk level, damned fast… but it’s not a certainty, and I think “damned fast” is in the range of months to years even if your intelligence is truly unbounded. You have to work through tools that can only go so fast, and stealth will slow you down even more (while still necessarily being imperfect).
… but a lot of the talk on here is in the vein of “if it gets to say one sentence to one randomly selected person, it can destroy the world”. Even if it also has limited knowledge of the outside world. If people don’t actually believe that, it’s still sometimes seen as a necessary conservative assumption. That’s getting pretty far out there. While “conservative” in one sense, that strong an assumption could keep you from applying safety measures that would actually be effective, so it can be “anti-conservative” in other senses. Admittedly the extreme view doesn’t seem to be so common among the people actually trying to figure out how to build stuff, but it still colors everybody’s thoughts.
Also, my points interact with one another. If you are a real superintelligence, with effectively instantaneous logical omniscience, Internet access is very probably enough (and I still claim being able to say one sentence is very probably not enough). But if you just have “a really high IQ”, Internet access may not be enough, and being able to say one sentence is definitely not enough. And even trying to figure out how to use what you have can be a problem, if you start sucking down expensive computing resources without producing visible results.
If those limitations give people a softer takeoff, more time to figure things out, and a chance to get some experience with AGI without the first bug destroying them, it seems like they have a better chance to survive.
Also, I’d like to defend myself by pointing out that I said “more optimistic”, not “optimistic in an absolute sense”. I resist putting numbers on these things, but I’m not sure I’d say survival was better than fifty-fifty, even considering the limitations I mentioned. Some days I’d go far worse; on fewer days I’d go a bit better.