Great post! I think this captures a lot of why I’m not ultradoomy (only, er, 45%-ish doomy, at the moment), especially A and B. I think it’s at least possible that our reality is on easymode, where muddling could conceivably put an AI into close enough territory to not trigger an oops.
I’d be even less doomy if I agreed with the counterarguments in C. Unfortunately, I can’t shake the suspicion that superintelligence is the kind of ridiculously powerful lever that would magnify small oopses into the largest possible oopses.
Hypothetically, if we took a clever human’s general capacity for problem solving, stripped it of limitations like getting bored or tired, got rid of its pesky intuitions around ethics, and sped it up by a factor of 1,000… I’d be very worried about what it would be able to do. Even without greater capacity for insight or an enhanced working memory, simply thinking really fast would be a broken superpower.
Such an entity might not be able to recreate the technology of modern civilization within 30 years if it started from scratch (in both resources and knowledge) in the stone age, primarily because of the need for physical interaction with the world. But starting from anything like modern civilization? That would get weird fast.
In other words, it seems like the intelligence range of humans, or even the range across animals and humans, is small compared to what is artificially possible even if we only consider speed. And it seems very likely at this point that a well-built artificial mind could have higher quality insights, too; MuZero certainly seems to, within its domain. I don’t find much comfort in the fact that observable intelligence differences don’t always result in domination.
Agreed that superhuman intelligence seems like the kind of thing that could be a very powerful lever. What gets me is that we don’t seem to know how orthogonal or non-orthogonal intelligence and empathy are to one another.[1] If we were capable of creating a superhumanly intelligent AI and were able to give it superhuman empathy, I might be inclined to cede a large amount of power and control to that system (or set of systems, whatever). But a sociopathic superhuman intelligence? I would definitely not cede power to that.
The question for me then becomes: how confident are we that we are not creating dangerously sociopathic AI?
If I were to take a stab, I would say they are almost entirely orthogonal, since we have perfectly intelligent yet sociopathic humans walking around today who lack any sort of empathy. Giving any of these people superhuman ability and control would seem like an obviously terrible idea to me.