Another big difference is that if there’s no intelligence explosion, we’re probably not talking about a singleton. If someone manages to create an AI that’s, say, roughly human-level intelligence (probably stronger in some areas and weaker in others, but human-ish on average) and progress slows or stalls after that, then the most likely scenario is that many of those human-level AIs would be created and sold for different purposes all over the world. We would probably be dealing with a complex world in which many different AIs and humans interact with each other. That could create its own risks, but they would probably have to be handled in a different way.
Good point. This seems like an important oversight on my part, so I added a note about it.
Thanks.
One more point you might want to mention is that in a world with AI but no intelligence explosion, where AIs are not able to rapidly develop better AIs, augmented human intelligence through various transhuman technologies and brain/computer interfaces could be a much more important factor. That kind of technology could allow humans to “keep up with” AIs, at least for a time, and it’s possible that humans and AIs working together on tasks could remain competitive with pure AIs for a significant period.