the advent of brain emulation can be delayed by global regulation of chip fabs
I think it might be a hard sell to convince governments to intentionally retard their own technological progress. Any country that willingly does this puts itself at a competitive disadvantage, both economically and militarily.
Nukes are probably an easier sell because they are specific to war—there’s no other good use for them.
I think this might be more like Eliezer’s “let it out of the box” experiments: the prospect of using the technology is too appealing for people to keep it restrained.
Another problem is that AI risk is abstract. Nuclear weapons are a very tangible problem: they go boom, people die. Pretty much everyone can understand that.
With AI, the problems aren’t so easy to understand. First of all, people may not even believe AI is possible, let alone believe it is a risk. Secondly, people regard IT workers practically the way they’d regard a real-life wizard. I am called a genius at work for doing trivial tasks and thanked up and down for accomplishing small things that took five minutes, simply because others don’t know how to do them. At the same time, it is assumed that no matter what type of IT problem I am given, I will be able to solve it; they assume a web developer can fix their computer, for instance. I can fix some problems, but I’m no computer tech.
I wonder whether people understand the risks of AI well enough to realize that the IT people won’t be able to fix them.
And then there’s optimism bias. I can’t think of a potentially useful technology we’ve passed up because it was dangerous. Can you think of an example where that has actually happened? Or where a large number of people understood an abstract problem, believed in its feasibility, and took appropriate measures to counteract it?
I’ll be thinking about this now...
Yes, I’ve pointed out most of those as reasons effective regulation would not be done (especially in China).
Oh, sorry about that! Once the main point dawned on me, I just kind of skimmed the rest, and the subtitle “The China question” did not trigger a blip on my “you must read this before posting that idea” radar.
What did you think of my ideas for slowing Moore’s law?
Patents are a completely unworkable idea.
Convincing programmers might work if only a small number of programmers or AI researchers are the ones making actual progress. Herding programmers is like herding cats, so this works only in proportion to how few key coders there are; if you need to convince more than, say, 100,000, I don’t think it would work.
The “PR nightmare” idea seems to amount to the same thing.
Winning the race is a reasonable idea, but I’m not sure the dynamic actually works that way: someone who wants to produce and sell a proprietary AI might be discouraged by an open-source competitor, but a FLOSS AI would just be catnip to anyone who wants to throw it on a supercomputer and make $$$.
I wish this were on the idea comment rather than over here. Discussing it here would pull people into the conversation on this thread while the comment we’re talking about sits on a different one, so for the sake of organization I’ve moved my response about the feasibility of convincing programmers to refuse risky AI jobs to the other thread, where my comment is.