I have to admit that no particular examples come to mind, but usually in the comment threads on topics such as optimal giving, and in occasional posts arguing against the probability of the singularity. I certainly have seen some, but can’t remember where exactly, so any search you do will probably be as effective as my own. To present you with a few possible arguments (which I believe to varying degrees of certainty):
-A lot of the arguments for becoming committed to donating to FAI are based on “even if there’s a low probability of it happening, the expected gains are incredibly huge”. I’m wary of this argument because I think it can be applied anywhere. For instance, even now, and certainly 40 years ago, one could make a credible argument that there’s a not-insignificant chance of a nuclear war eradicating human life from the planet. By the same logic, we should contribute all our money to organisations devoted to stopping nuclear war.
-This leads directly to another argument: how effective do we expect the SI to be? Is friendly AI possible? Is the SI going to be the one to find it? If the SI creates friendliness, will it be implemented? If I had devoted all my money to the CND, I would not have had a significant impact on the proliferation of nuclear weaponry.
-A lot of the claims based on a singularity assume that intelligence can solve all problems. But there may be hard limits to the universe. If the speed of light is the limit, then we are trapped with finite resources, and maybe there is no way for us to use them much more efficiently than we can now. Maybe cold fusion isn’t possible, and maybe nanotechnology can’t get much more sophisticated.
-Futurism is often inaccurate. The jokes about “where’s my hover car” are relevant: progress over the last 200 years has rocketed in some spheres but slowed in others. For instance, medical advances have been slowing recently. They might jump forwards again, but maybe not. Predictions about which bits of science will advance on a given time scale are unlikely to be accurate.
-Intelligence might have a hard limit, or its returns might decay exponentially. It could be argued that we might be able to wire up millions of humanlike intelligences in a computer array, but that might hit physical limits.
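The worry in the first point above can be made concrete with a toy expected-value calculation. This is only an illustrative sketch; the probabilities and payoffs below are made-up numbers, not estimates anyone has actually defended:

```python
# Toy illustration of the "tiny probability, huge payoff" structure.
# All numbers here are invented for illustration only.

def expected_value(probability, payoff):
    """Expected value of a single outcome: probability times payoff."""
    return probability * payoff

# A one-in-a-million shot at an astronomically large payoff...
ev_longshot = expected_value(1e-6, 1e12)

# ...versus a certain but modest payoff.
ev_sure_thing = expected_value(1.0, 1e3)

print(ev_longshot, ev_sure_thing)  # 1000000.0 1000.0
```

The longshot wins by three orders of magnitude, which is exactly the concern: any cause can inflate its claimed payoff until it dominates the comparison, no matter how small its probability is, so the argument form alone can't distinguish FAI donations from any other speculative cause.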