I wish you well, but be wary. I would guess that many of us on this site had dreams of saving the world when we were younger, and there is no doubt that FAI appeals to that emotion. If SI's claims are true, then donating to them means you contributed to saving the world. Be wary of the emotions associated with that impulse. It's very easy for the brain to pick out one train of thought and ignore all others; those doubts you admit to may not be entirely unreasonable. Before making drastic changes to your lifestyle, give it a while. Listen to skeptical voices. Read the best arguments for why donating to SI may not be a good idea (there are some on this very site).
If, after some time to think, you are convinced that helping SI is all you want to do with your life, then, as Villiam suggests, do something you love to promote it. Donate what you can spare to SI, and keep doing what makes you happy, because I doubt you will be more productive doing something that makes you miserable. So make those rational board games, but make some populist ones too: while the former may convert people, the latter might generate more income, letting you pay someone else to do the converting.
Yes, I probably need a healthy dose of counter-arguments. Can you link any? (I’ll do my own search too.)
I have to admit that no particular examples come to mind, but they usually appear in the comment threads on topics such as optimal giving, and in occasional posts arguing against the probability of the singularity. I have certainly seen some, but can't remember where exactly, so any search you do will probably be as effective as my own. To present you with a few possible arguments (which I believe with varying degrees of certainty):

-A lot of the arguments for becoming committed to donating to FAI are based on "even if there's a low probability of it happening, the expected gains are incredibly huge". I'm wary of this argument because I think it can be applied almost anywhere. For instance, even now, and certainly 40 years ago, one could make a credible argument that there's a not-insignificant chance of a nuclear war eradicating human life from the planet; by the same logic, we should contribute all our money to organisations devoted to stopping nuclear war. (The toy numbers sketched after this list show how the argument generalises.)

-This leads directly to another argument: how effective do we expect SI to be? Is friendly AI possible? Are SI going to be the ones to find it? If SI create friendliness, will it be implemented? If I had devoted all my money to the CND, I would not have had a significant impact on the proliferation of nuclear weaponry.

-A lot of the claims based on a singularity assume that intelligence can solve all problems. But there may be hard limits to the universe. If the speed of light is the limit, then we are trapped with finite resources, and maybe there is no way for us to use them much more efficiently than we can now. Maybe cold fusion isn't possible; maybe nanotechnology can't get much more sophisticated.

-Futurism is often inaccurate. The jokes about "where's my hover car?" are relevant: progress over the last 200 years has rocketed in some spheres but slowed in others. For instance, medical advances have been slowing recently. They might jump forwards again, but maybe not. Predictions about which bits of science will advance on a given time scale are unlikely to be accurate.

-Intelligence might have a hard limit, or its returns might decay exponentially. It could be argued that we might be able to wire up millions of human-like intelligences in a computer array, but even that might hit physical limits.
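To make the first bullet concrete, here is a toy expected-value calculation. Every number in it is invented purely for illustration (none comes from SI or anyone else); the point is only that multiplying any nonzero extinction-risk probability by an astronomical payoff yields an enormous expected value, so the argument by itself cannot distinguish between causes:

```python
# Toy sketch: "low probability, huge payoff" expected-value reasoning.
# All probabilities and the payoff below are invented for illustration.
causes = {
    "FAI research":        1e-6,  # assumed chance a donation averts extinction
    "nuclear disarmament": 1e-6,  # equally made up
    "asteroid deflection": 1e-7,  # also made up
}
VALUE_OF_SURVIVAL = 1e15  # any astronomically large value for humanity's future

for name, p in causes.items():
    # Expected value = probability of success * value of the outcome.
    print(f"{name}: expected value = {p * VALUE_OF_SURVIVAL:,.0f}")

# Every cause comes out with a huge expected value, so this style of
# argument endorses all of them at once rather than singling out FAI.
```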
This doesn’t sound easy to do a keyword search for; did you have anything in mind you could link us to?
Edit: Sorry, I see this has already been asked.