Is independent AI research likely to continue to be legal?
At this point, very few people take the risks seriously, but that may not continue forever.
This doesn’t mean that it would be a good idea for the government to decide who may do AI research and with what precautions, just that it’s a possibility.
If there’s a plausible risk, is there anything specific SIAI and/or LessWrongers should be doing now, or is building general capacity by working to increase ability to argue and to live well (both the anti-akrasia work and luminosity) the best path?
Outlawing AI research was successful in Dune, but unsuccessful in Mass Effect. But I’ve never seen AI research fictionally outlawed until it had done actual harm, and I’ve seen no reason to expect a different outcome in reality. It seems a very unlikely candidate for the type of moral panic that tends to get unusual things outlawed.
Fictional evidence should be avoided. Also, this subject seems very prime for a moral panic, i.e., “these guys are making Terminator”.
How would it be stopped if it were illegal? Unless information technology suddenly goes away, enforcement seems impossible.
NancyLebovitz wasn’t suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does—preparation would be warranted if such legislation were likely. I don’t think it’s likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI’s primary goals.
Bingo. That’s exactly what I was concerned about.
You’re probably right that there’s no practical thing to be done now. I’m sure you’d know very quickly if restrictions on independent AI research were being considered.
The more I think about it, the more I think a specialized self-optimizing AI (or several such, competing with each other) could do real damage to the financial markets, but I don’t know if there are precautions for that one.
I’ve been thinking about that, and I believe you’re right that laws typically don’t get passed against hypothetical harms, and also that AI research isn’t the kind of thing that’s enough fun to think about to set off a moral panic.
However, I’m not sure we can rule out real harm of the kind society can recover from, but which could still prompt legislation.
I’m basing the possibility on two premises: a lot of people thinking about AI aren’t as concerned about the risks as SIAI is, and computer programs are frequently released once they work only somewhat.
Suppose a self-improving AI breaks the financial markets. There might just be efforts to protect the markets, or AI research might become an issue in itself.
Witchcraft? Labeling of GM food?
Those are legitimate examples. I think overreaction to rare events (like the difficulties added to travel and the damage to the rights of suspects after 9/11) is more common, but I can’t prove it.
Some kinds of GM food cause different allergic reactions than their ancestral cultivars. I think you can justifiably care to a similar extent as you care about the difference between a Gala apple and a Golden Delicious apple.
Edit: Granted, most of the reaction is very much overblown.
I’m pretty sure Eliezer commented publicly on this and I think his answer was that it doesn’t make sense to outlaw AI research.
10 free karma to whoever can find the right link.
The question was, “Is independent AI research likely to continue to be legal?”. What Eliezer considers a reasonable policy isn’t necessarily related to what government considers a reasonable policy. Though I think the answer to both questions is the same, for unrelated reasons.
AI as a Positive and Negative Factor in Global Risk (section 10) discusses this. More obsoletely, so do CFAI (section 4) and several SL4 posts (e.g. this thread from 2003).