Fair point. We might get an extra century, and in the meantime it may turn out that we can somehow deal with the problem, for example by having a competent and benevolent world government that can actually prevent the development of superhuman AIs (perhaps by using millions of exactly-human-level AIs who keep each other in check and together endlessly scan all computers on the planet).
I mean, a superhuman AI is definitely going to be a problem of some kind, at least economically and politically. But in the best case, we may be able to deal with it, either because we somehow become more competent quickly, or because we have enough time to become more competent gradually.
Maybe even this is needlessly pessimistic, but if so, I don't see how.