One more reason why I think a Faustian singleton is the most likely final outcome, even if FAI succeeds. Unlike material or social desires, curiosity can scale endlessly, to the point where humans become willing to suspend their individuality for the sake of computational efficiency.
I’m not bothered by my scope insensitivity.
“SIAI is tackling the world’s most important task—the task of shaping the Singularity. The task of averting human extinction.”
I’d like to see a defense of this claim: that SIAI can actually have justified confidence in exerting a positive influence on the future, and that this outweighs any alternative present good that could be done with the resources it is using.
Let me continue to play Devil’s Advocate for a second, then. There are many reasons why attempting to influence the far future might not be the most important task in the world.
The one I’ve already mentioned, indirectly, is the idea that it becomes super-exponentially futile to predict the consequences of your actions the farther into the future you go. For instance, SIAI might raise awareness of AI to the extent that regulations are passed and no early AI accidents happen; however, this breeds complacency that allows a large AI accident to happen later. Whereas if SIAI had never existed and an early AI Chernobyl had occurred, it would have prompted governments to take effective measures to regulate AI.
Another viewpoint is the bleak but by no means indefensible idea that it is impossible to prevent all existential disasters: the human race, or at least our values, will inevitably be reduced to inconsequence one way or another, and the only thing we can do is simply to reduce the amount of suffering in the world right now.
The fact is that we simply don’t know enough to say anything about the non-near future with any confidence. That’s no reason to give up, of course; in fact, our lack of understanding makes it more valuable to try to improve our understanding of the future, as SIAI is doing. So maybe make that your official stated goal: simply to understand whether there is even a possibility of influencing the future. That is a noble and defensible goal by itself. But even then, it is arguably not the most important thing in the world.
Neither you nor I have enough confidence to assume or dismiss notions like: “There won’t be any non-catastrophic AI disasters which are big enough to get attention; if any non-trivial AI accident occurs, it will be catastrophic.”
Indeed, the truth of the matter is that I would be interested in contributing to SIAI, but at the moment I am still not convinced that it would be a good use of my resources. My other objections still haven’t been satisfied, but here’s another argument. As usual, I don’t personally commit to what I claim, since I don’t have enough knowledge to discuss anything in this area with certainty.
The main thing this community seems to lack when discussing the Singularity is political savvy. The primary forces that shape history are, and quite likely always will be, economic and political motives rather than technology. Technology and innovation are expensive, and innovators require financial and social motivation to create. This applies superlinearly to projects so large as to require collaboration.
General AI is exactly that sort of project. There is no magic mathematical insight that will let us write a hundred-line program capable of improving itself in any reasonable amount of time. I’m sure Eliezer is aware of the literature on optimization processes, but the no-free-lunch principle and the practical randomness of innovation mean that an AI seeking to self-improve can only do so with an (optimized) random search. Humans essentially do the same thing, except we have knowledge and certain built-in processes that help us constrain the search space (though this also makes us miss certain obvious innovations). To make GAI a real threat, you have to give it enough knowledge to understand the basics of human behavior, or enough knowledge to learn more on its own from human-created resources. That is highly specific information which would take a fully general learning agent a lot of cycles to infer unless it were fed the information in a machine-friendly form.
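As a rough illustration of what I mean by an “(optimized) random search,” and of how much built-in knowledge constrains the search space, here is a toy sketch. The objective function and the “informed” sampling bounds are entirely made up for illustration; nothing here is a claim about how a real agent would be built.

```python
import random

def quality(candidate):
    # Stand-in for evaluating a candidate self-modification or design;
    # a real evaluation would be enormously more expensive than this.
    return -sum((x - 3.0) ** 2 for x in candidate)

def optimized_random_search(sample, iters=5000):
    # "Optimized" random search: sample candidates blindly, but keep
    # the best one seen so far.
    best = sample()
    best_q = quality(best)
    for _ in range(iters):
        c = sample()
        q = quality(c)
        if q > best_q:
            best, best_q = c, q
    return best_q

# A searcher with no prior knowledge samples from a huge space; one with
# built-in knowledge samples only from a region known to contain good answers.
blind = lambda: [random.uniform(-1000, 1000) for _ in range(5)]
informed = lambda: [random.uniform(0.0, 6.0) for _ in range(5)]

print("blind search   :", round(optimized_random_search(blind), 2))
print("informed search:", round(optimized_random_search(informed), 2))
```

The same number of evaluations gets the informed searcher far closer to the optimum, which is the point about knowledge constraining the search space.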
Now we will discuss the political and economic aspects of GAI. Support for general artificial intelligence is a political impossibility, because general AI, by definition, is a threat to the jobs of voters. By the time GAI becomes remotely viable, a candidate supporting a ban on GAI will have nearly universal support. It is impossible even to defend GAI on the grounds that the research it produces could save lives, because no medical researcher will welcome a technology that does their job for them. The same applies to any professional. There is a worry on this site that people underestimate GAI, but it is far more likely that GAI, or anything remotely like it, is vastly overestimated as a threat.
The economic aspects are similar. GAI is vastly more costly to develop (for reasons I’ve outlined), and doesn’t provide many advantages over expert systems. Besides, no company is going to produce a self-improving tool in the first place, because nobody, in theory, would ever have to buy an upgraded version.
These political and economic forces are a powerful retardant against the possibility of a general AI catastrophe, and have more heft than any focused organization like SIAI could ever have. Yet much like Nader spoiling Al Gore’s vote, the minor influence of SIAI might actually weaken rather than reinforce these protective forces. By claiming to have the tools in place to implement the strategically named ‘friendly AI’, SIAI might in fact assuage public worries about AI. Even if the organization itself does not take actions to do so, GAI advocates will be able to exaggerate the safety of friendly AI and point out in press releases that ‘experts have already developed Friendly AI guidelines’. And by developing the framework to teach machines about human behavior, SIAI lowers the cost for any enterprise that, for some reason, is interested in developing GAI.
At this point, I conclude my hypothetical argument. But I have realized that it is now my true position that SIAI should adopt the clear position that, if tenable, NO general AI is preferable to friendly AI. (Back to no-accountability mode: it may be that general AI will eventually come, but by the time it becomes an eventuality, the human race will be vastly more prepared than it is now to deal with such an agent on an equal footing.)
Why do you think expert systems cannot handle anything cross-disciplinary? I would even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance. If you’re talking about revolutionary, paradigm-shifting ideas, we are probably already saturated with such ideas. The main bottleneck inhibiting paradigm shifts is not the ideas but the infrastructure and economic need for the paradigm shift. A company that can produce a 10% better product can already take over the market; a 200% better product is overkill, especially if there are substantial costs in overhauling the production line.
The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.
*save for mitigating an even larger risk of annihilation, of course
“You might want to go back to basics and think about how politics, public opinion and the media operate, for example that they had little opinion on the hugely important probabilistic revolution in AI over the last 15 years, but spilled loads of ink over stem cells.”
And why is that?
EDIT: I’ve realized that some misinterpretation of my arguments has been due to disagreements in terminology. I define “expert systems” as systems designed to address a specific class of well-defined problems, capable of logical reasoning and probabilistic inference given a set of “axiom-like” rules, and updating their knowledge database with specific kinds of information.
AGI I define specifically as AI which has human or extra-human level capabilities, or the potential to reach those capabilities.
Now my response to the above:
“Expert AI systems are already used in hospitals, and will surely be used more and more as the technology progresses. There isn’t a single point where AI is suddenly better than humans at all aspects of a field. Current AIs are already better than doctors in some areas, but worse in many others. As the range of AI expertise increases doctors will shift more towards managerial roles, understanding the strengths and weakness of the myriad expert systems, refereeing between them and knowing when to overrule them.”
I agree with all of these.
“By the time true AGI arrives narrow AI will probably be pervasive enough that the line between the two will be too fuzzy to allow for a naive ban on AGI.”
To me it seems the greatest enabler of AI catastrophe is ignorance. But by the time narrow AI becomes pervasive, it’s also likely that people will possess much more of the technical understanding needed to comprehend the threat that AGI poses.
“Moreover, I highly doubt people are going to vote to save jobs (especially jobs of the affluent) at the expense of human life.”
You are being too idealistic here.
“First, the existential threat [of AGI] may be low.”
Let me trace back the argument tree for a second. I originally asked for a defense of the claim that “SIAI is tackling the world’s most important task.” Michael Porter responded, “The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way?” So NOW in this argument tree, we’re assuming that unfriendly AI IS an existential threat, enough that preventing it is the “world’s most important task.”
Now in this branch of the argument, I assumed (but did not state) the following: If unfriendly AI is an existential threat, friendly AI is an existential threat, as long as there is some chance of it being modified into unfriendly AI. Furthermore, I assert that it’s a naive notion that any organization could protect friendly AI from being subverted.
“The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand”
I don’t think The Terminator was hard to understand. The second you get some credible people saying that AI is a threat, the media reaction is going to be excessive, as it always is.
“If a program can take an understanding of those subjects and design a better computer chip, I don’t think it’s just an “expert system” anymore. I would think it would take an AI to do that. That’s an AI complete problem.”
What I had in mind was some sort of combinatorial approach to designing chips, i.e. take these materials and randomly generate a design, test it, and then start altering the search space based on the results. I didn’t mean “understanding” in the human sense of the word, sorry.
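Concretely, I was picturing something like the following generate-test-narrow loop. The “design” is just a vector of parameters and test_design is a placeholder, so treat this as a sketch of the idea rather than anything resembling a real chip-design method:

```python
import random

def test_design(design):
    # Placeholder for actually simulating or physically testing a design.
    return -sum((x - 0.7) ** 2 for x in design)

def design_search(dims=8, rounds=30, samples_per_round=50):
    # Randomly generate designs in a wide region, test them, then
    # repeatedly re-center and shrink the region around the best result.
    center = [0.0] * dims
    width = 10.0
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        for _ in range(samples_per_round):
            design = [c + random.uniform(-width, width) for c in center]
            score = test_design(design)
            if score > best_score:
                best, best_score = design, score
        center = best   # alter the search space based on the results
        width *= 0.7    # shrink the region we sample from
    return best, best_score

best, score = design_search()
print("best score found:", round(score, 4))
```

Nothing in that loop requires human-style understanding of physics or manufacturing; it only requires that candidate designs can be generated and tested.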
“I’d love to hear some of these revolutionary ideas that we’re saturated with. I think we have some insights, but these insights need to be fleshed out and implemented, and figuring out how to do that is the paradigm shift that needs to occur”
Example: many aspects of the legal and political systems could be reformed, and it’s not difficult to come up with ideas on how they could be reformed. The benefit is simply insufficient to justify spending much of the limited resources we have on solving those problems.
“Wait a minute. If I could press a button now with a 10% chance of destroying humanity and a 90% chance of solving the world’s problems, I’d do it. ”
So you think there’s a >10% chance that the world’s problems are going to destroy humanity in the near future?
Our current conception of AGI is based on a biased comparison of hypothetical AGI capabilities with our relatively unenhanced capabilities. By the time AGI is viable, a typical professional with expert systems will be able to vastly outperform current professionals with our current tools.
How are you going to protect the source code before you run it?
Expert systems would be faster still. For AGI to be justified in this case, you would need a task that required both speed and creativity.
Yes, but it would have to take the resources from humans first.
Let me posit that FAI may be much less capable than unfriendly AI. The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain. Therefore, a low-grade FAI might be quite vulnerable to human antagonists, while its unrestricted version could be orders of magnitude more dangerous. In short, FAI could be low-reward, high-risk.
I know that the FAI argument is that the only way to prevent disaster is to make the agent “want” to not modify itself. But I’m arguing that for an agent to even be dangerous, it has to “want” to modify itself. There is no plausible scenario where an agent solving a specific problem decides that the most efficient path to the solution involves upgrading its own capabilities. It’s certainly not going to stumble upon a self-improvement randomly.
Thanks; I was mistaken. Would you say, then, that mainstream scientists are similarly irrational? (The main comparison I have in mind throughout this section, by the way, is global warming.)
I’d like to suggest another type of rationality test for this site. The top contributors should randomly make posts that are flat-out wrong to see how they are received; and they should also randomly make legitimate posts under different names.