No, the world must be saved by mathematicians, computer scientists, and philosophers. This is because the creation of machine superintelligence this century will determine the future of our planet...
You sound awfully certain of that, especially considering that, as you say later, the problems are poorly defined, the nature of the problem space is unclear, and the solutions are unknown.
If I were a brilliant scientist, engineer, or mathematician (which I’m not, sadly), why should I invest my efforts in AI research when I could be working on more immediate and well-defined goals? There are quite a few of them, including but not limited to:
Prevention of, or compensation for, anthropogenic global climate change
Avoiding economic collapse
Developing a way to generate energy cheaply and sustainably
Reducing and eliminating famine and poverty in all nations
True, developing a quasi-godlike friendly AI would probably solve all of these problems in one hit, but that might be a bit of a long shot, whereas these problems and many others need to be solved today.
Well, I’m unlikely to solve those problems today regardless. Either way, we’re talking about estimated value calculations about the future made under uncertainty.
Fair enough, but all of the examples I listed are reasonably well-defined problems with reasonably well-outlined problem spaces, whose solutions appear to be, if not within reach, then at least feasible given our current level of technology. If you contrast this with the nebulous problem of FAI as lukeprog outlined it, would you not conclude that the probability of solving these less ambitious problems is much higher? If so, then the increased probability could compensate for the relatively lower utility (even though, in absolute terms, nothing beats having your own Friendly pocket genie).
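The tradeoff being described here is just an expected-value comparison, probability times payoff. As a toy sketch, with every number invented purely for illustration, the verdict flips depending entirely on which odds and payoffs you assume:

```python
# Toy expected-value comparison; all numbers are invented for illustration,
# not actual estimates of anything.
def ev(p, u):
    """Expected value of a project with success probability p and payoff u."""
    return p * u

# A well-defined problem: good odds, modest payoff.
modest = ev(0.5, 10.0)       # 5.0
# An FAI-style long shot: same tiny odds, but the assumed payoff decides it.
longshot_a = ev(1e-6, 1e5)   # 0.1  -> the modest project wins
longshot_b = ev(1e-6, 1e9)   # 1000 -> the long shot wins

print(modest, longshot_a, longshot_b)
```

Which is to say, the "increased probability compensates for lower utility" claim holds or fails depending on numbers nobody actually has.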
Honestly, the error bars on all of these expected-value calculations are so wide for me that they pretty much overlap. Especially when I consider that building a run-of-the-mill, marginally-superhuman, non-quasi-godlike AI significantly changes my expected value of all kinds of research projects, that cheap plentiful energy changes my expected value of AI projects, and so on; half of them include one another as factors anyway.
So, really? I haven’t a clue.
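The overlapping-error-bars point can be made concrete with simple interval arithmetic; the ranges below are invented purely to illustrate the shape of the problem:

```python
# Interval-arithmetic sketch of the "overlapping error bars" point.
# All ranges are invented for illustration; endpoints are positive.

def ev_interval(p_range, u_range):
    """Bounds on expected value p * u when each factor is only known
    to lie somewhere within a range."""
    (p_lo, p_hi), (u_lo, u_hi) = p_range, u_range
    return (p_lo * u_lo, p_hi * u_hi)

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# A well-defined project: both factors pinned down fairly tightly.
modest = ev_interval((0.3, 0.7), (5.0, 20.0))      # ~(1.5, 14.0)
# A long shot whose probability and payoff each span many orders
# of magnitude.
longshot = ev_interval((1e-9, 1e-2), (1e6, 1e12))  # ~(0.001, 1e10)

# The intervals overlap, so neither point estimate settles the question.
print(modest, longshot, overlaps(modest, longshot))
```

Once the uncertainty intervals contain each other like this, comparing point estimates tells you almost nothing, which is the "I haven't a clue" position.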
Fair enough; I guess my error bars are just a lot narrower than yours. It’s possible I’m being too optimistic about them.
Sorry, it's been a while since everyone stopped responding to this comment, but these goals wouldn't even begin to cover the number of problems that would be solved if our rough estimates of the capabilities of FAI are correct. You could easily add another ten issues to this list and still be nowhere near a truly just world, not to mention that each goal you add makes solving the whole set less likely, given the amount of social resistance you would encounter. And suppose humans truly are incapable of solving some of these issues under present conditions; this is not at all unlikely, and an AI would have a much better shot at finding solutions. The added delay and greater risk may make pursuing FAI less rewarding than any one, or even three, of these problems, but considering the sheer number of problems human beings face that could be solved through the Singularity if all goes well, I believe it is far more worthwhile than any of these issues.