And why would giving away money to charities be a good idea? The return on investment of almost all of them is extremely close to zero, and most people are horrible at identifying the exceptions.
For every effective cause, say polio eradication, Wikileaks, or whatever GiveWell considers good, there are thousands of charities that essentially waste your money. Especially SIAI.
You’re trying to use methods of rationality to come up with the best way to appeal to emotions.
I don’t know if this is a taboo subject or what, but I’m curious. What makes you include SIAI in this category? (If you’d rather not discuss it on LessWrong, you can e-mail me at mainline dot express at gmail.)
Donating to SIAI is pure display of tribal affiliation, and those are a zero-sum game. They have nothing to show for it, and there’s not even any real reason to think this reduces rather than increases existential risk.
If you really care about reducing existential risk, seed vaults and asteroid tracking are two obvious programs that both definitely work at decreasing the risk, and don’t cost much.
SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
The field of AI has been littered with (metaphorical) corpses since the 1960s. If an AI researcher tells you any theory, you have a very, very strong prior for believing it is false—especially if it concerns “general” intelligence or “human-level” intelligence. So, Eliezer is probably wrong just like everyone else. That’s not a particular criticism of him; it still puts him in august company.
So my particular position is that I’m not giving to SIAI until I’m worth enough financially that I can ask a few hours of Eliezer’s time, and get a better idea of whether the theories are correct.
What I don’t like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like—I’ve had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is playing fast and loose with the facts. And at least some of the SIAI people don’t do that at ALL. You have to admire the honesty, even if you’re skeptical (as I am) that research can succeed in such isolation from mainstream science. Eliezer is a good person. This is an honest and thoughtful attempt to do what he says he wants to do—I am very, very confident of that.
Offer these people the respect (or charity, if you will) of judging their ideas on the merits—or, if you don’t have time to look into the ideas, mark that as ignorance on your part. You seem to be saying “They must be wrong because they’re weird.” The thing is, they’re working in a field where even the experts are a little weird, and where even the mainstream academics have been wrong about a lot. You’ve got to revise your “Don’t believe weirdos” prediction down a little bit. The more I learn about the world, the more I realize that the non-weirdos don’t have it all sewn up.
So my particular position is that I’m not giving to SIAI until I’m worth enough financially that I can ask a few hours of Eliezer’s time, and get a better idea of whether the theories are correct.
I don’t think this matches up with your rejection. Even if you were an expert in the fields Eliezer is working in, it sounds like that wouldn’t give you the ability to give any of his ideas a positive seal of approval, since many people have worked on ideas for a long time without seeing what was wrong with them. It also seems like a few hours to hash out disagreements is a very low estimate. How long do you think Eliezer and Robin Hanson have spent debating their theories while coming no closer to resolution?
The scenario you paint, that you get rich enough for Eliezer to wager a few hours of his time on reassuring you, does not sound like one designed to determine the correctness of the theories, as opposed to giving you as much emotional satisfaction as possible.
I should make clear that I do not mean to condemn, but rather to provoke introspection; it is not clear to me that there is a reason to support SIAI or other charities beyond emotional satisfaction, and so it may be wise to pursue opportunities like this without being explicit that that is the compensation you expect from charities.
Clearly a few hours wouldn’t be enough for me to get a level of knowledge comparable to experts. It could definitely move my probability estimate a lot.
SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
So my particular position is that I’m not giving to SIAI until I’m worth enough financially that I can ask a few hours of Eliezer’s time, and get a better idea of whether the theories are correct.
There are really three separate things SIAI is working on in the AI area: one is decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves the original goals. Another is deciding what those goals are (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer’s time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).
Whether (or more likely, how much) these two successes contribute to x-risk largely depends on the context, which is the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I’d prefer to get this data independently.
ETA. Personally I’ve given some money to SI, but it’s largely based on previous successes and not on a clear agenda of future direction. I’m ok with this, but it’s possibly sub-optimal for getting others to contribute (or getting me to contribute more).
SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
This strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and unlikely to come around to more accurate views, yet still carry high utilitarian expected value on the possibility that they do.
I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
The main claim that needs to be evaluated is “AI is an existential risk,” and the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I’m not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn’t making much progress. Pretty low priority.
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Donating to SIAI is pure display of tribal affiliation
That just isn’t true. It is partially a display of tribal affiliation.
They have nothing to show for it, and there’s not even any real reason to think this reduces rather than increasing existential risk.
Even if the SIAI outright increased existential risk that would not mean donations were purely displays of affiliation. It would mean that all those who donated partially for practical instrumental reasons were mistaken and making a poor choice. It would not make their act any more purely an affiliation symbol.
If I were to donate (more) to the SIAI it would be a mix of:
Tribal affiliation.
Reciprocation. (They gave me a free bootcamp and airplane ticket.)
Actually not having a better idea of a way to not die.
EDIT: Downvoting this post sort of confirms my point that it’s all about signaling tribal affiliations.
If people downvoting you is evidence that you are right then would people upvoting you have been evidence that you were wrong? Or does this kind of ‘confirmation’ not get conserved the way that evidence does?
And the evidence that donating to SIAI does anything other than signal affiliation is...?
… not required to refute your claim. It’s a goal post shift. In fact I explicitly allowed for the SIAI being utterly useless or worse than useless in the comment to which you replied. The claim I rejected is this:
Donating to SIAI is pure display of tribal affiliation
For that to be true it would require that there is nobody who believes that the SIAI does something useful and whose donating behaviour is best modelled as at least somewhat influenced by the desire to achieve the overt goal.
You also require that there are no other causal influences behind the decision including forms of signalling other than tribal affiliation. I have already mentioned “reciprocation” as a non “tribal affiliation” motivating influence. Even if I decided that the SIAI were completely unworthy of my affiliation I would find it difficult to suppress the instinct to pay back at least some of what they gave me.
The SIAI has received anonymous donations. (The relevance should be obvious.)
Beliefs based on little evidence that people outside the tribe find extremely weird are one of the main forms of signaling tribal affiliation. Taking the Jesus story seriously is how people signal belonging to one of the Christian tribes, and taking the unfriendly AI story seriously is how people signal belonging to the lesswrong tribe.
No goal posts are being shifted here. Donating to SIAI because one believes lesswrong tribal stories is signaling that you have these tribal-marker beliefs, and still counts as pure 100% tribal affiliation signaling.
My reference here would be a fund to build the world’s largest Jesus statue. There seems to be a largest-Jesus contest ongoing: the record was broken twice in just a year, in Poland and then in Peru, and now some Croatian group is trying to outdo them both. People who donate to these efforts might honestly believe this is a good idea. The details of why they believe so are highly complex, but this is a tribal-marker belief and nothing more.
Virtually nobody who’s not a local Catholic considers it such, just like virtually nobody who doesn’t share the “lesswrongian meme complex” considers what SIAI is doing a particularly good idea. I’m sure these funds got plenty of anonymous donations from local Catholics, and maybe some small amount of money from off-tribal people (e.g. “screw religion, but a huge Jesus will be great for tourism here” / “friendly AI is almost certainly bullshit, but weirdos are worth funding by Pascal’s wager”), but this doesn’t really change anything.
tl;dr Action signaling beliefs that correlate with tribal affiliation are actions signaling tribal affiliation, regardless of how conscious this is.
tl;dr Action signaling beliefs that correlate with tribal affiliation are actions [solely for] signaling tribal affiliation, regardless of how conscious this is.
There are other reasons why someone could downvote your post. You immediately assuming that it’s about tribal affiliations sort of demonstrates the problem with your claim that it’s all about tribal affiliations.
They’ve published papers. Presumably if we didn’t donate anything, they couldn’t publish papers. They also hand out paychecks to Eliezer. Eliezer is a tribal leader, so we want him to succeed! Between those two, we have proof that they’re doing more than just signalling affiliation.
The far better question is whether they’re doing something useful with that money, and whether it would be better spent elsewhere. That, I do not feel qualified to answer. I think even GiveWell gave up on that one.
Um… The return on SIAI so far is well worth it for me :). Can you give me specific examples of how you consider SIAI to waste money? Spreading knowledge of cryonics alone is worth it from an altruistic standpoint and FAI theory development from a selfish one.
So it’s just an awfully convenient coincidence that the charity best suited to display tribal affiliation to the lesswrong crowd, and the charity best suited to save the world, just happen to be the same one? What a one in a billion chance! The outside view says they’re not anything like that, and they have zero to show for it as a counterargument.
If you absolutely positively have to spend money on existential risk (not that I’m claiming this is a good idea, but if you have to), asteroids are known to cause mass extinctions, with a 1:50,000,000 or so chance per year. That’s 1:500,000 per century, not really negligible. And you can make some real difference by supporting asteroid tracking programs.
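A quick sanity check of that conversion, taking the quoted 1:50,000,000 annual figure as the only input:

```python
# Annual chance of an extinction-level impact, as quoted above.
p_year = 1 / 50_000_000

# Naive sum over 100 independent years...
p_century_approx = 100 * p_year

# ...versus the exact complement of "no impact in any of 100 years".
p_century_exact = 1 - (1 - p_year) ** 100

print(p_century_approx)  # ≈ 2e-06, i.e. about 1 in 500,000
print(p_century_exact)   # virtually identical, since p_year is tiny
```

For probabilities this small the linear approximation and the exact calculation agree to many decimal places, so the 1:500,000-per-century figure follows directly.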
So it’s just an awfully convenient coincidence that the charity best suited to display tribal affiliation to the lesswrong crowd, and the charity best suited to save the world, just happen to be the same one? What a one in a billion chance!
No, that’s not it at all. If, as people here like to believe (and it may or may not be true), LWers are very rational and good at picking things with very high expected value to start or donate to, then it makes sense that one of them (Eliezer) would create an organization whose existence has very high expected value (SIAI), and that the rest of the people here would donate to it. If that is the case, and SIAI is the best charity to donate to in terms of expected value (which it may or may not be), then it would also be the best charity to donate to in order to display tribal affiliation (which it definitely is). So if you accept that people on LW are more rational than average, then their donating so much to SIAI should be taken as weak evidence that SIAI is a really good charity to donate to.
you can make some real difference by supporting asteroid tracking programs.
I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
If, as people here like to believe (and it may or may not be true), LWers are very rational and good at picking things with very high expected value to start or donate to [...]
I didn’t downvote you, but what you’re saying is essentially “if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity”. Which is something every single group would say, in slight variation.
I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
Here’s a results chart for various asteroid tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to CSS somehow. I’m not really following this too closely; I’m mostly glad that some people are doing something here.
but what you’re saying is essentially “if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity”. Which is something every single group would say, in slight variation.
Well yeah; that’s why you should examine the evidence and not just do what everyone else does. So let’s look at the beliefs of all the Singularitarians on LW as evidence. What would we expect to see if LW is just an arbitrary tribe that picked a random cause to glom around? I suspect we would see that not many people in the world, and particularly not high-status people and organizations, would pay attention to the Singularity. I predict that everyone on LW would donate money to SIAI and shun people who don’t donate or belittle SIAI.
Now what would we see if LW is in fact a group of high-quality rationalists and the world, in general, is too blinded by various biases to think rationally about low-probability, high-impact events? Well, most people, including high-status people (but perhaps not some academics) wouldn’t talk about it. People on LW would donate money to SIAI because they did the calculation and decided it was the highest expected value. And they would probably shun the people who disagree, because they’re still humans.
Those two situations look awfully similar to me. My point is, I certainly don’t think that you can use LW’s enthusiasm about SIAI compared to the general public as a strike against LW or SIAI.
Here’s a results chart for various asteroid tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to CSS somehow. I’m not really following this too closely; I’m mostly glad that some people are doing something here.
I’m not finding anything there indicating that they’re hurting for funding, but perhaps I’m missing it.
I honestly believe that the Singularity is a greater threat than asteroids to the human race. Either an asteroid will be small enough that we can destroy it, or it’s too big to stop. Once an asteroid is big enough to pose a risk to humanity, it’s also a lot easier to find and destroy. However, a positive singularity isn’t valued enough and a negative singularity isn’t feared enough among humanity, unlike asteroid deflection efforts, and that’s why I focus on SIAI.
You actually need to detect these asteroids decades in advance for our current technology to stand any chance, and we currently don’t do that. More detection efforts mean tracking smaller asteroids than otherwise, but more importantly tracking big asteroids faster.
An arbitrarily massive asteroid can be moved off course very easily given enough time to do so. That’s the plan, not “destroying” it.
Still, there’s a very low chance of a large asteroid strike, and the most quoted figure I’ve heard is that more than 75% of dangerously large NEOs are being tracked. I think a negative singularity is more likely to happen in the next 200 years than an asteroid strike.
However, it is a good point that donating money to NEO tracking could be a good charitable donation as well; I just don’t think it’s on the same order of magnitude as the danger of a uFAI.
With asteroid strikes everybody agrees on the risk to within an order of magnitude or two. We have a lot of historical data about asteroid strikes of various sizes, can use a power-law distribution to smooth it a bit, etc.
With uFAI, people’s estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory, through essentially impossible, all the way to almost certain.
In particular, for a donation to a particular charity to be a good idea, two conditions have to hold:
The sign of the expected utility has to be positive rather than negative.
The magnitude has to be greater than the expected utility of purchasing goods and services in the usual way (which generates benefit not only to you, but to your trading partners, to their trading partners etc.)
It is only moderately unlikely for either condition alone to be true, but it is very unlikely for both conditions to be true simultaneously.
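Those two conditions can be sketched as a trivial check; the numbers below are hypothetical placeholders purely for illustration, not estimates from this thread:

```python
def donation_worthwhile(eu_donation: float, eu_ordinary_spending: float) -> bool:
    """Apply both conditions to a charity's expected utility (arbitrary units)."""
    sign_is_positive = eu_donation > 0                       # condition 1: positive sign
    beats_ordinary_use = eu_donation > eu_ordinary_spending  # condition 2: beats normal spending
    return sign_is_positive and beats_ordinary_use

# Hypothetical placeholder numbers:
print(donation_worthwhile(5.0, 2.0))   # True: passes both conditions
print(donation_worthwhile(-1.0, 2.0))  # False: wrong sign
print(donation_worthwhile(1.0, 2.0))   # False: ordinary spending wins
```

The point of the conjunction is visible in the last two calls: a donation can fail independently on either condition, which is why both holding at once is rarer than either alone.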
And why would giving away money to charities be a good idea?
The studies in the main post suggest that it brings more happiness than spending it on yourself, for small amounts relative to the amount you currently spend on yourself. Bringing happiness is what makes it a pretty good idea.
They have published papers on the first two (CEV and decision theory), and you do not need Eliezer’s time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem.
I should probably reread the papers. My brain tends to go “GAAAH” at the sight of game theory. I’m probably a bit biased because of that.
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Interesting. Why do you think so?
tl;dr Action signaling beliefs that correlate with tribal affiliation are actions signaling tribal affiliation, regardless of how conscious this is.
(Edit based on context)
This statement is either false or useless.
Eliezer is a tribal leader, so we want him to succeed!
Really? I thought we wanted the tribal leader to fail in a way that allowed ourselves or someone we have more influence over to take his place.
Or we want the tribal leader to be conveniently martyred at their moment of greatest impact. You know, for the good of the cause.
I think that depends on how we perceive the size of the tribe, our position within it, and the security of its status in the outside world...
Sounds interesting. Do you have links for charities of this sort that you recommend?
I’m a big fan of the very loosely related http://longnow.org/ although their major direct project is building a very nice clock.
They definitely try to promote the kind of thinking that will result in things like seed vaults, though.
(I’m a member)
My personal estimate is that better environmental and energy policies would reduce existential risk, but I haven’t seen any appealing organisations in this area.
So am I :) Just got my steel card last week, actually.
I had a wonderful moment several months back when I was wandering about in the science museum in London… and stumbled across their prototype clock… SO cool!
What’s more, the tribal affiliation might not be a “display” to others.
Hence wedfrifid leaving that word out of his bullet point list.
Um… The return on SIAI so far is well worth it for me :). Can you give me specific examples of how you consider SIAI to waste money? Spreading knowledge of cryonics alone is worth it from an altruistic standpoint and FAI theory development from a selfish one.
So it’s just an awfully convenient coincidence that the charity best suited to display tribal affiliation to the LessWrong crowd and the charity best suited to save the world just happen to be the same one? What a one-in-a-billion chance! The outside view says they’re nothing of the sort, and the fact that they have zero to show for it is a further counterargument.
If you absolutely positively have to spend money on existential risk (not that I’m claiming this is a good idea, but if you have to), asteroids are known to cause mass extinctions, with roughly a 1:50,000,000 chance in any given year. That’s about 1:500,000 per century, not really negligible. And you can make a real difference by supporting asteroid-tracking programs.
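The per-century number quoted above follows from simple compounding; a quick sketch (my own arithmetic, not from the thread, assuming strike years are independent):

```python
# Quick check of the per-century figure (my arithmetic, not from the
# original comment): a 1-in-50,000,000 annual chance of a
# mass-extinction-scale strike, compounded over 100 years.
p_per_year = 1 / 50_000_000

# Exact probability of at least one strike in a century
p_per_century = 1 - (1 - p_per_year) ** 100

# For tiny p this is approximately 100 * p, i.e. about 1 in 500,000
approx = 100 * p_per_year

print(p_per_century)  # roughly 2e-06, i.e. about 1/500,000
```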
No, that’s not it at all. If, as people here like to believe (and it may or may not be true), LWers are very rational and good at picking high-expected-value things to start or donate to, then it makes sense that one of them (Eliezer) would create an organization whose existence has very high expected value (SIAI) and that the rest of the people here would donate to it. If that is the case, and SIAI is the best charity to donate to in terms of expected value (which it may or may not be), then it would also be the best charity to donate to in order to display tribal affiliation (which it definitely is). So if you accept that people on LW are more rational than average, their donating so much to SIAI should be taken as weak evidence that SIAI is a really good charity to donate to.
I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
I didn’t downvote you, but what you’re saying is essentially “if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity”. Which is something every single group would say, in slight variation.
Here’s a results chart for various asteroid-tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to CSS somehow. I’m not really following this too closely; I’m mostly glad that some people are doing something here.
Thanks! I upvoted you.
Well yeah; that’s why you should examine the evidence and not just do what everyone else does. So let’s look at the beliefs of all the Singularitarians on LW as evidence. What would we expect to see if LW is just an arbitrary tribe that picked a random cause to glom around? I suspect we would see that not many people in the world, and particularly not high-status people and organizations, would pay attention to the Singularity. I predict that everyone on LW would donate money to SIAI and shun people who don’t donate or belittle SIAI.
Now what would we see if LW is in fact a group of high-quality rationalists and the world, in general, is too blinded by various biases to think rationally about low-probability, high-impact events? Well, most people, including high-status people (but perhaps not some academics) wouldn’t talk about it. People on LW would donate money to SIAI because they did the calculation and decided it was the highest expected value. And they would probably shun the people who disagree, because they’re still humans.
Those two situations look awfully similar to me. My point is, I certainly don’t think that you can use LW’s enthusiasm about SIAI compared to the general public as a strike against LW or SIAI.
I’m not finding anything there indicating that they’re hurting for funding, but perhaps I’m missing it.
I honestly believe that the Singularity is a greater threat to the human race than asteroids. Either an asteroid will be small enough that we can destroy it, or it’s too big to stop. And once an asteroid is big enough to pose a risk to humanity, it’s also a lot easier to find and destroy. However, a positive singularity isn’t valued enough and a negative singularity isn’t feared enough among humanity, unlike asteroid-deflection efforts, and that’s why I focus on SIAI.
You actually need to detect these asteroids decades in advance for our current technology to stand any chance, and we currently don’t do that. More detection effort means tracking smaller asteroids than we otherwise would, but more importantly, tracking big asteroids sooner.
An arbitrarily massive asteroid can be moved off course very easily given enough time to do so. That’s the plan, not “destroying” it.
Still, there’s a very low chance of a large asteroid strike, and the most quoted figure I’ve heard is that more than 75% of dangerously sized NEOs are already being tracked. I think a negative singularity is more likely to happen in the next 200 years than an asteroid strike. However, it’s a good point that donating money to NEO tracking could be a good charitable donation as well; I just don’t think the danger is on the same order of magnitude as that of a uFAI.
With asteroid strikes, everybody agrees on the risk to within an order of magnitude or two. We have a lot of historical data about asteroid strikes of various sizes, can use a power-law distribution to smooth it a bit, etc.
With uFAI, people’s estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory, through essentially impossible, all the way to almost certain.
Money spent on mind uploading is a better defense against asteroids than asteroid detection. At least for me.
In particular, for donation to a particular charity to be a good idea, two conditions have to hold:
The sign of the expected utility has to be positive rather than negative.
The magnitude has to be greater than the expected utility of purchasing goods and services in the usual way (which generates benefit not only to you, but to your trading partners, to their trading partners etc.)
It is only moderately unlikely for either condition alone to be true, but it is very unlikely for both conditions to be true simultaneously.
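The two conditions can be written out explicitly; here is a toy sketch with made-up numbers (nothing here estimates any real charity):

```python
def donation_is_good_idea(eu_donation, eu_ordinary_spending):
    """Both conditions above must hold: the expected utility of the
    donation is positive, AND its magnitude beats ordinary spending
    (which also benefits your trading partners). Illustrative only."""
    positive_sign = eu_donation > 0
    beats_spending = eu_donation > eu_ordinary_spending
    return positive_sign and beats_spending

# Made-up utilities: a donation must clear both hurdles.
print(donation_is_good_idea(1.5, 1.0))   # True: positive and larger
print(donation_is_good_idea(0.5, 1.0))   # False: positive but smaller
print(donation_is_good_idea(-0.2, 1.0))  # False: wrong sign
```

The point of the conjunction is that even if each condition alone is only moderately unlikely to fail, requiring both to hold at once makes the overall bar much higher.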
The studies in the main post suggest that it brings more happiness than spending it on yourself, for small amounts relative to the amount you currently spend on yourself. Bringing happiness is what makes it a pretty good idea.
To be honest all these laboratory tests and happiness questionnaires seem fairly dubious methodologically.
What’s the best research here?