Regarding D), it depends on why the risks are getting varying amounts of attention. Existential risks mainly get derivative attention as a result of more likely/near-term/electorally-salient/commonsense-morality-salient lesser forms. For instance, engineered diseases get countermeasure research because of the threat of non-extinction-level pathogens causing substantial casualties, not the less likely and more distant scenario of a species-killer. Anti-nuclear measures are driven more by the expected casualties from nuclear war than by the chance of a surprisingly powerful nuclear winter, etc. Climate change prevention is mostly justified in non-existential-risk terms, and benefits from a single clear observable mechanism already in progress that fits many existing schemas for environmentalism and dealing with pollutants.
The beginnings of a similar derivative effort are visible in the emerging “machine ethics” area, which has been energized by the development of Predator drones and the like, although it’s noteworthy how little was done on AI risk in the early, heady days of AI, when researchers were relatively confident of near-term success.
Regarding A), I’ll have more to say at another time. For now, here are three quick-to-explain points that are fairly important in leading me to concentrate a good chunk of probability mass in the next one to ten decades:
1) If we’re talking about 2100, the time between now and then is half again longer than the history of AI so far.
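A quick sanity check on that arithmetic (the dates below are assumptions on my part, not from the text: the 1956 Dartmouth workshop as AI’s starting point, and a vantage point of roughly 2011):

```python
# Rough arithmetic behind "half again longer than the history of AI so far".
# Assumed dates: AI's start at the 1956 Dartmouth workshop, and a writing
# date of about 2011; neither date is stated in the text above.
ai_start, now, horizon = 1956, 2011, 2100
history = now - ai_start    # ~55 years of AI so far
remaining = horizon - now   # ~89 years until 2100
print(remaining / history)  # ~1.6, i.e. roughly half again as long
```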
2) Theoretical progress is hard to predict, but progress in computing hardware has been quite predictable. While cheap hardware isn’t an overwhelming aid in AI development (slow sequential theory advances that can’t be much accelerated by throwing more people at them may remain a core bottleneck for a long time), it does have some benefits:
a) Some algorithms scale well with hardware performance, e.g. in vision and computer chess (see the sketch after this list).
b) Cheap hardware incentivizes people to try to come up with hardware-hungry algorithms.
c) Abundant computing makes it easy for computer scientists to perform numerous experiments and test many parameter values for their algorithms.
d) Cheap computing, by enhancing the performance and utility of software, drives the expansion of the technology industry, which is accompanied by large increases in the number of corporate and academic researchers.
e) Products dependent on hardware advances (e.g. robots, the internet, etc.) can produce large datasets and useful testing grounds for AI and machine learning.
All told, these effects of hardware growth give us reason to concentrate more of our probability mass for AI development in the later stages of Moore’s Law (and not too long after its end).
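To illustrate point (a), here is a minimal sketch using computer chess; the branching factor of 35 and the node budgets are illustrative assumptions, not figures from the text:

```python
# Illustrative sketch of how raw compute converts into search depth for a
# brute-force game-tree search (the mechanism behind hardware-driven gains
# in computer chess). Visiting N nodes with branching factor b reaches
# depth ~ log_b(N), so a fixed hardware multiplier buys a fixed number of
# extra plies regardless of where you start.
import math

def reachable_depth(node_budget: float, branching_factor: float) -> float:
    """Depth d such that branching_factor ** d == node_budget."""
    return math.log(node_budget) / math.log(branching_factor)

b = 35  # rough branching factor often quoted for chess (an assumption here)
for budget in (1e6, 1e9, 1e12):
    print(f"{budget:.0e} nodes -> ~{reachable_depth(budget, b):.1f} plies")
```

On these assumed numbers, each thousandfold increase in the node budget adds about two plies of search, one concrete way in which faster hardware has translated fairly directly into playing strength.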
3) Neuroimaging advance has been quite impressive. Kurzweil is more optimistic on timelines than most neuroscientists, but there is wide agreement that neuroimaging tools will improve in various respects by yet more orders of magnitude, and shed at least some substantial light on how the brain works. To the extent those tools prove useful, that should lead us to concentrate probability mass in the period reasonably soon after they are developed and put to use.
Thanks Carl, I’m glad to finally be getting some engagement concerning (A). I will think about these things.