The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is in their ability to make tech/geek people think happy thoughts without ‘paying’ too much in bad outcomes.
Presumably there’s at least some of this going on. But there’s not an “either/or” dichotomy here. Some of the Less Wrong advice will turn out to fall into the above and other such advice will turn out to be solidly grounded.
For example, I think that more likely than not, focus on x-risk reduction as a philanthropic cause is grounded and that this is something that the LW community has gotten right but that more likely than not, donating to SIAI is not the best x-risk reduction opportunity on the table. I’m bothered by the fact that it appears to me that most SIAI supporters have not carefully considered the collection of all x-risk opportunities on the table with a view toward picking out the best one; a priori it seems that the one that’s most salient initially is unlikely to be the best one altogether. (That being said, contingencies may point toward SIAI being the best possible option even after an analysis of all available options.)
I’m bothered by the fact that it appears to me that most SIAI supporters have not carefully considered the collection of all x-risk opportunities on the table with a view toward picking out the best one
I’m really interested in this issue, since I’m considering donating to x-risk organizations.
Which organization do you think is best suited for existential risk reduction?
Besides SIAI I can only think of FHI. IMO both are preferable to the Foresight Institute and the Center for Responsible Nanotechnology. I don’t know of any other organizations whose main focus is x-risks.
In another thread you said that the best way to contribute to x-risk-reduction is
to increase public interest in and concern for existential risk.
I agree!
You added that
SIAI seems poorly suited to generating interest in and concern for existential risk and may very well be lowering the prestige attached to investigating existential risk rather than raising the prestige attached to investigating existential risk.
Why do you think that this is the case?
IMHO the Singularity Summits, for example, have increased public interest in and prestige attached to the Singularity and x-risks.
I like your screen name! (Reference to Buddhism?)

My impressions of SIAI and views on these things have evolved considerably since a year ago, when I commented in the thread that you wrote. I have a considerably more favorable impression of SIAI than I did at the time.
But regarding:
IMHO the Singularity Summits, for example, have increased public interest in and prestige attached to the Singularity and x-risks.
Increasing public interest in and prestige attached to the Singularity may increase the rush toward advanced technologies which could raise the probability of an unfriendly AI.
I’m really interested in this issue, since I’m considering donating to x-risk organizations. Which organization do you think is best suited for existential risk reduction? Besides SIAI I can only think of FHI. IMO both are preferable to the Foresight Institute and the Center for Responsible Nanotechnology. I don’t know of any other organizations whose main focus is x-risks.
I’m still in the process of gathering information on this topic and haven’t come to a conclusion (not even a tentative one). Beyond the organizations that you list, there are organizations working against nuclear war, asteroid strike risk, global pandemics, etc. Friendly AI is the most important issue on the table, but the efficacy of working toward it may be lower than that of working against other risks after discounting for informational uncertainty.
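To make the discounting point concrete, here is a minimal sketch in Python. The payoff and probability figures are purely hypothetical placeholders, not estimates of any real cause or organization; the only point is how a cause with a much larger raw payoff can come out behind once you discount by how unsure you are that your marginal effort accomplishes anything.

    # Illustrative only: the numbers below are hypothetical placeholders,
    # not estimates of any actual cause or organization.

    def discounted_value(raw_impact, p_effort_works):
        """Expected value after discounting for uncertainty that the
        intervention actually does what we hope it does."""
        return raw_impact * p_effort_works

    # Hypothetical: cause A has enormous raw impact but highly uncertain
    # tractability; cause B has modest impact with well-understood tractability.
    cause_a = discounted_value(raw_impact=1e6, p_effort_works=1e-4)  # 100.0
    cause_b = discounted_value(raw_impact=1e3, p_effort_works=0.5)   # 500.0

    print(cause_a, cause_b)  # the "smaller" cause comes out ahead after discounting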
Would you like to correspond? If you PM me your email address I’d be happy to talk about this some more.
Yes! Actually it also involves LSD and The Doors;)
Googling suggests your screen name has something to do with Dante or T.S. Eliot?
Anyway, you wrote:
Increasing public interest in and prestige attached to the Singularity may increase the rush toward advanced technologies which could raise the probability of an unfriendly AI.
Good point, but it seems that SIAI has one of the most pessimistic Singularity-concepts (especially if you compare it to the views of, say, Ben Goertzel, Ray Kurzweil or Max More) and therefore advocates strong precautionary measures, which in turn reduce x-risks.
Beyond the organizations that you list, there are organizations working against nuclear war, asteroid strike risk, global pandemics, etc. Friendly AI is the most important issue on the table, but the efficacy of working toward it may be lower than that of working against other risks after discounting for informational uncertainty.
True; in fact, thinking about possible nuclear war made me realize how important x-risks are. My main arguments against working for organizations against nuclear war are:

1. They already have huge budgets (e.g. from Warren Buffett), so my money doesn’t make a big difference.

2. Many people, indeed whole countries, try to address those problems, so my efforts don’t weigh much.

3. The problem has existed for almost 70 years. Folks like Einstein and Russell, whom I greatly admire, have thought about these problems for years and, well, to be frank, I don’t know if their efforts actually decreased or increased the risks! Maybe strategies like MAD are better than the ones proposed by Einstein and Russell. So why should I have any confidence in my strategies? Whereas with regard to AI x-risks, SIAI and in particular Yudkowsky seem to be way more competent than the other folks. (Excluding Bostrom, Hanson, Omohundro and probably others that I don’t know of, but the ones I find competent usually work for or with SIAI.)

4. The whole issue involves too much politics (for my taste) → rational argumentation is often frowned upon.

5. Are nuclear wars really existential risks? I think they are only Global Catastrophic Risks, i.e. they won’t lead to human extinction. (Of course, if you are a negative utilitarian this point is an advantage, but I’m not, at least I think so.)

You can apply these arguments, mutatis mutandis, to global pandemics, biotechnology, supervolcanoes, asteroid strikes, global warming and, to a lesser degree, nanotechnology. I think this greatly outweighs the informational uncertainty of FAI.

And, the final knock-down argument, IMHO: if you solve the FAI problem, you solve all of the above-listed problems at a single blow!
But, I could be wrong! No, hopefully I’m wrong! Because the likely conclusion of my reasoning is the following:
Work in finance and earn money in order to donate to SIAI! That sucks soooo much, to put it charitably.
Anyway, this comment is already too long.

Oh, and I would like to correspond! I’ll PM you my email address.
Googling suggests your screen name has something to do with Dante or T.S. Eliot?
Yes, it’s from “The Hollow Men”
Good point, but it seems that SIAI has one of the most pessimistic Singularity-concepts (especially if you compare it to the views of, say, Ben Goertzel, Ray Kurzweil or Max More) and therefore advocates strong precautionary measures, which in turn reduce x-risks.
But Goertzel and Kurzweil are speakers at the Singularity Summit! :-) I agree that the talks by SIAI staff at the Singularity Summit which address AI risk reduce x-risk, but it’s not clear to me that the Singularity Summit is positive on balance.
True; in fact, thinking about possible nuclear war made me realize how important x-risks are. My main arguments against working for organizations against nuclear war are: 1. They already have huge budgets (e.g. from Warren Buffett), so my money doesn’t make a big difference.
Even if nuclear deproliferation is overfunded in aggregate, there may be particular organizations which are especially effective and which have room for more funding (the philanthropic world isn’t very efficient). I agree that a priori it looks as though SIAI has a stronger case for room for more funding than organizations working against nuclear war, but I also think that the matter warrants further investigation.
The problem has existed for almost 70 years. Folks like Einstein and Russell, whom I greatly admire, have thought about these problems for years and, well, to be frank, I don’t know if their efforts actually decreased or increased the risks! Maybe strategies like MAD are better than the ones proposed by Einstein and Russell. So why should I have any confidence in my strategies?
I agree that uncertainty as to which strategies work drives the expected value down, but not to zero.
Whereas with regard to AI x-risks, SIAI and in particular Yudkowsky seem to be way more competent than the other folks. (Excluding Bostrom, Hanson, Omohundro and probably others that I don’t know of, but the ones I find competent usually work for or with SIAI.)
I agree that the best people thinking about AI x-risk are at SIAI. This doesn’t imply that their efforts are strong enough for them to make a meaningful dent in the problem (nature doesn’t grade on a curve, etc.).
Are nuclear wars really existential risks? I think they are only Global Catastrophic Risks, i.e. they won’t lead to human extinction.
I’m presently inclined to agree that the immediate effect of nuclear war is unlikely to be extinction (although I’ve heard smart people express views to the contrary). But plausibly nuclear war would leave humanity in a much worse position to address other x-risks (e.g. political & economic instability seem more likely to be conducive to unfriendly AI than political & economic stability). Furthermore, even if nuclear war doesn’t cause human extinction it could still cause astronomical waste on account of crippling civilization to the point that it couldn’t yield an intelligence explosion.
You can apply these arguments, mutatis mutandis, to global pandemics, biotechnology, supervolcanoes, asteroid strikes, global warming and, to a lesser degree, nanotechnology.
Some of your arguments apply to some of the risks but not all of the arguments apply to all of the risks. In particular, none of the arguments seem to apply to asteroid strike risk.
And, the final knock-down argument, IMHO: if you solve the FAI problem, you solve all of the above-listed problems at a single blow!
This is definitely a point in favor of focus on FAI, but it’s not clear to me that it’s strong enough.
But, I could be wrong! No, hopefully I’m wrong! Because the likely conclusion of my reasoning is the following: Work in finance and earn money in order to donate to SIAI! That sucks soooo much, to put it charitably.
(a) The existence of any x-risk / catastrophic risk charity with room for more funding suggests that donating money is highly cost-effective.
(b) Donating money is not the only way to reduce x-risk. One can work against one of the risks oneself (e.g. work for SIAI as a volunteer, or work for a government agency addressing one of the relevant x-risks). One can also try to influence the donations of others.
(c) Regarding your discomfort with the lifestyle that your reasoning seems to lead you to, see paragraphs 2, 3, and 4 of Carl Shulman’s comment here.
I agree that the talks by SIAI staff at the Singularity Summit which address AI risk reduce x-risk, but it’s not clear to me that the Singularity Summit is positive on balance.
Personally I gave up trying to take such considerations into account. Otherwise I would have to weigh the positive and negative effects of comments similar to yours according to the influence they might have on existential risks. This quickly leads to chaos-theoretic considerations like the butterfly effect, which in turn lead to scenarios resembling Pascal’s Mugging, where tiny probabilities are outweighed by vast utilities. As a computationally bounded and psychologically unstable agent I am unable to cope with that. Consequently I decided to neglect small-probability events.
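A minimal sketch of the policy being described, using made-up probabilities and utilities: under naive expected value, a vanishingly small chance of an astronomically large payoff dominates the calculation (the Pascal’s Mugging structure), whereas simply ignoring outcomes whose probability falls below a threshold avoids this.

    # Illustrative only: hypothetical probabilities and utilities.

    def naive_ev(outcomes):
        """Standard expected value: sum of probability * utility."""
        return sum(p * u for p, u in outcomes)

    def truncated_ev(outcomes, threshold=1e-9):
        """Expected value that ignores outcomes below the probability
        threshold -- the 'neglect small-probability events' policy."""
        return sum(p * u for p, u in outcomes if p >= threshold)

    # A mugging-style gamble: a 1-in-10^12 chance of a 10^20 payoff
    # versus a 50% chance of a payoff of 10.
    outcomes = [(1e-12, 1e20), (0.5, 10)]

    print(naive_ev(outcomes))      # ~1e8: dominated by the tiny-probability term
    print(truncated_ev(outcomes))  # 5.0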
Googling suggests your screen name has something to do with Dante or T.S. Eliot?
Yes, it’s from “The Hollow Men”
Whoa, I’ve been parsing it as a chemical name all along (and subconsciously suppressing the second i). Eliot’s one of my favorites, but I never made the connection.
Good points, thx for the link to Carl Shulman’s comment, I love his reasoning.
Just for the record: The reason why I don’t like the conclusion of working in finance to earn money in order to donate is that I guess I can’t do it. I simply hate finance too much and I know I’m too selfish.
Just wearing a suit is probably more than I could bear ;)
I will respond to the rest of your comment in private.
Upvoted.
Please consider posting your reply here, I would be interested in reading it!
I wrote you a PM.