I really don’t want to spoil it for you people and will now try to cease active participation here on LW. I do not mind if you ignore this and my other reply to your comment above, so don’t waste any more of your time; I would only feel obliged to reply again. Thanks for your time and sorry for any inconvenience.
Regarding my personal assessment of AI-associated risk, I start from a position of ignorance. I ask myself: what reasons are there to believe that some sort of AI can rapidly self-improve to the point of superhuman intelligence, and how likely is that outcome? What other risks from AI are there, and how does their combined probability compare to other existential risks? There are many further questions to be asked here as well. For example, even if the combined probability of all risks posed by AI does outweigh any other existential risk, are those problems sufficiently similar to be tackled by a single organisation like the SIAI, or are they disjoint, discrete problems that should not be added up to calculate the overall risk posed by AI?
Among other things, this short story was supposed to show that it is possible to argue against risks from AI by definition. I have to conclude that the same could be done in favor of risks from AI: you could simply argue that AGI is by definition capable of recursive self-improvement, that any intelligent agent naturally follows this pathway, and that the most likely failure mode is failing to implement scope boundaries that would make it halt before consuming the universe. Both ideas sound reasonable but might be based on completely made-up arguments. Their superficial appeal might be mostly a result of their vagueness.
If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices in order to destroy humanity, then that is indeed an existential risk. I counter that I dispute some of the premises and the likelihood of some of the subsequent scenarios. So to make me update on the original idea, you would have to support your underlying premises rather than arguing within existing frameworks that impose several presuppositions on me. Of course the same goes for arguing against the risks from AI. But what is the starting position here: why would one naturally believe that AI does or does not pose an existential risk?
Right now I am able to see many good arguments for both positions. But since most people here already take the position that AI does pose an existential risk, I thought I would get the most feedback by taking the position that it might not.
I could just accept the good reasons to believe that AI does pose an existential risk, simply to be on the safe side. But here I have to be careful not to neglect other risks that might be more dangerous. If I want to support the mitigation of existential risks by contributing to one charity, I’ll have to take some effort to figure out which one that might be. To come up with such an estimate, I believe I have to weigh all the arguments for and against risks from AI. You might currently believe that the SIAI is the one charity with the highest value per donation. I have to update on that, because you and other people here seem to be smart fellows. Yet there are other smart people who do not share that opinion. Take for example the Global Catastrophic Risk Survey: it seems to suggest that molecular nanotech weapons are no less of a risk than AI. Should I maybe donate to the Foresight Institute then? Sure, you say that AI is not only a risk but would also help to solve all other problems. Yet the SIAI might cause research on AI to slow down. Further, there might be other charities working more effectively on some sub-goal that would enable AI, for example molecular nanotechnology, which again might speak in favor of donating to the Foresight Institute. That is of course just an example; I want to highlight that this problem is not a clear-cut issue for me at the moment.
Following the above, I want to highlight a few other points I tried to convey with the short story:
Intelligence might not be effectively applicable to itself.
The development of artificial intelligence might be gradual, slow enough for us to keep pace, learn from low-impact failures and adapt along the way.
We might not be able to capture intelligence by a discrete algorithm.
General superhuman intelligence might not be possible, and all modular approaches could be adopted by humans as expert systems, outweighing any benefits of brute-force approaches.
To improve your intelligence you need intelligence and resources, and therefore you will have to acquire those resources and improve your intelligence using only what you already have.
Researchers capable of creating AGI are not likely to fail at limiting its scope.
I’m not sure how I should respond to this, because I’m not sure of what the main points were. I second the request for a shorter version.