This analysis would seem to indicate that SIAI is at least in a tie for the world’s leading transhumanist organization. What’s your objection to the claim?
Even without contesting your framing, it doesn’t support the claim that SingInst “appears to be the leading organization”. Notice when you are primarily defending a position without paying sufficient attention to details of the argument.
“At least in a tie” by what metric? Which metrics matter? That’s a tarpit I’m not interested in pursuing, because it doesn’t sound productive to engage in it.
My objection is the same fork that we fell into before during the “rationality mini-camp was a success” fiasco: if there exists evidence, give it; otherwise, if only weak evidence exists, one ought to favor precision over self-aggrandizement.
Well, there are two major (EDIT: x-risk related; see discussion below) transhumanist organizations. This article provides a bunch of possible metrics for comparing them (cited papers, audience, publicity, achievements per dollar...).
It seems that you routinely post declaring that you are skeptical of SI’s claims. That’s good—we need loud contrarians sometimes. But I haven’t seen you specify what evidence you’re looking for that would resolve your skepticism.
Declaring yourself to be skeptical, without explaining what evidence you are looking for, just doesn’t seem to contribute to the debate all that much.
Or, in other words: I can’t figure out how much to update on your skepticism, because I can’t figure out what you’re skeptical about! You should consider writing a post like a few of XiXiDu’s explaining what you’re looking for.
There are two major x-risk-related transhumanist organizations. If you’re counting major transhumanist organizations in general, you definitely need to include the SENS Foundation and the Methuselah Foundation as well.

Upvoted, and edited to clarify.
For me personally, it seems fairly clear that x-risk oriented organizations are the most important ones in the field of philanthropy. That’s why I take the FHI/SI debate so seriously, and that’s why I asked why paper_machine felt that the claim was obviously off base. Are the other organizations worth looking into seriously as optimal philanthropy?
This article provides a bunch of possible metrics for comparing them (cited papers, audience, publicity, achievements per dollar...).
Yes, and it seems from Kaj’s data that by most of the obvious metrics, FHI is beating SI by a lot. One might think that the SI is doing well for publicity but that’s primarily through the Summits. The media coverage is clearly much larger for the FHI than the SI. Moreover, the media coverage for the summits frequently focused on Kurzweil style singularities and similar things.
The media coverage is clearly much larger for the FHI than the SI. Moreover, the media coverage for the summits frequently focused on Kurzweil style singularities and similar things.
The last Summit I was at, the previous NYC one, had an audience of close to a thousand that was relatively savvy and influential (registration records and other info show a lot of VCs, scientists, talented students, entrepreneurs and wealthy individuals) who got to see more substantive talks, even if those were not picked up as much by the media.
Also, one should apply the off-topic correction evenly: a fair amount of FHI media attention is likewise grabbing a quote on some fairly peripheral issue.

Interesting. That seems to be a strong argument to update more towards the SI.
I feel like it’s to some extent an apples-to-oranges comparison. FHI is obviously doing better in terms of academic credibility, as measured by citations and their smaller academic conferences; SI seems to be doing much better in mass-audience publicity, as measured by 1 million unique visitors to LessWrong (!) and the Singularity Summit, which is growing each year and has a dramatically larger and wider audience than any FHI conferences. The Visiting Fellows program also stood out as something SI was doing much better.
but that’s primarily through the Summits.
I don’t really see why that’s a “but”. Is it because the media focuses on Kurzweilian Singularities?
I would love to see a financial analysis of FHI along the lines of this one, to evaluate the achievements/dollars metric.
But based on the metrics we have, the only one which seems decisively in FHI’s favor is citations.
This article caused me to update in favor of “FHI is the best transhumanist charity”, but only marginally so. If your interpretation was stronger than that, I’d be interested in hearing why.
I’m not sure I’m thinking in terms of “best transhumanist charity”, which seems very hard to define. I’d say more that I’m thinking in terms of something like “which of these two charities is the most efficient use of my resources, especially with regard to reducing existential risk and encouraging the general improvement of humanity?”
If I were attempting to think about this in terms of a very broad set of “transhumanist” goals, then I’d point to the citations and the large number of media appearances as areas where the FHI seems to be much more productive than the SI. Citations are an obviously important metric: most serious existential risk issues require academics to pay attention, otherwise what you do won’t matter much. Media attention is more important for getting people to realize that a) things like death really are as bad as they seem and b) we might be able to really do something about them.
The primary reason I haven’t updated that much in the direction of the FHI is that I don’t know how much money the FHI is using to get these results, nor how much of this is work that the academics would do anyway. (When academics become affiliated with an institution, they often do work they would have done regardless and just tweak it to fit the institution’s goals a bit more.)
I haven’t seen you specify what evidence you’re looking for that would resolve your skepticism.
The evidence I am looking for won’t be available until it is too late; that’s the problem. I have a hard time swallowing that pill. I also don’t trust my rationality enough yet to completely overpower my intuition on that subject. Further, I feel that my background knowledge and math skills are not yet sufficient to actually donate larger amounts of money to the Singularity Institute. I am trying to change that right now: I am almost at calculus over at Khan Academy (after Khan Academy I am going to delve into Bayesian probability).
I’m curious why you think you need calculus to evaluate which charities to donate to (though I wholeheartedly approve of learning it).
Surely there’s some evidence that would cause you to update in favor of “SI knows what they’re talking about”, even if we won’t know many things until after a Singularity occurs/fails to occur. For example, I would update pretty dramatically in the direction of “they know what they’re doing” if Timeless Decision Theory went mainstream, since that seems to be an important accomplishment which I am not qualified to independently evaluate.
I’m curious why you think you need calculus to evaluate which charities to donate to.
I don’t really know what exactly I will need beforehand so I decided to just acquire a general math education. Regarding calculus in particular, in a recent comment someone wrote that you need it to handle a probability distribution.
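That comment about probability distributions is straightforward to illustrate (this sketch is mine, not from the thread): for a continuous random variable, the probability of landing in an interval is the integral of the density over that interval, which is exactly where calculus enters. A minimal Python example, approximating that integral numerically for a standard normal distribution:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution with mean mu and std dev sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] with the trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# P(-1 <= X <= 1) for a standard normal: integrate the density over [-1, 1]
p = integrate(normal_pdf, -1.0, 1.0)
print(round(p, 4))  # ≈ 0.6827, the familiar "68% within one standard deviation"
```

Knowing Bayesian probability without calculus only gets you the discrete case; continuous distributions require integrals like the one above (or tabulated equivalents).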
Surely there’s some evidence that would cause you to update in favor of “SI knows what they’re talking about”...
What evidence would cause me to update in favor of “Otto Rössler knows what he’s talking about regarding risks associated with particle-collision experiments”? I have no idea; I don’t even know enough about high-energy physics to tell what evidence could convince me one way or the other, let alone judge any evidence. Besides, the math necessary to read papers about high-energy physics is ridiculously far above my head. The same is true for artificial general intelligence, except that it seems orders of magnitude more difficult and that basically nobody knows anything about it.
I would update pretty dramatically in the direction of “they know what they’re doing” if Timeless Decision Theory went mainstream...
That says little about their claims regarding risks from AI in my opinion.
I would imagine that the validity of SI’s claims in one area of research is correlated with the validity of their claims in other, related areas (like decision theory and recursively self-improving AI).
Well, there are two major transhumanist organizations.
Depending on what you mean by “major”, I suppose. My first Google hit for “transhumanism” is Humanity+, and though I know nothing about them they at least seem to be in the same category. There’s also the much maligned Lifeboat Foundation, which could hypothetically count as a transhumanist organization. So there’s two more after five minutes of Googling.
It seems that you routinely post declaring that you are skeptical of SI’s claims.
That’s certainly not my MO for participating in LW. I’m a mathematician, not a loud contrarian. I’m currently working on summarizing Pearl’s work on causality so that people can make more sense out of the sequences, and therefore help more when Luke’s polyethics project gets off the ground.
Unfortunately, I’m also studying for my prelims, and so progress on Pearl has been a bit slow.
Declaring yourself to be skeptical, without explaining what evidence you are looking for, just doesn’t seem to contribute to the debate all that much.
As I explained earlier, any hypothetically available evidence would have to be judged relative to some metric, and it’s not worthwhile to sit around and discuss which metrics are optimal, particularly when nobody is in an impartial position to do so.
My first Google hit for “transhumanism” is Humanity+...
You’re right. I guess I’m using the same metric as JoshuaZ (what’s the best marginal use of my dollars?) and I’m fairly convinced that’s existential risk, so I was discounting several non-x-risk-focused transhumanist charities, perhaps unfairly.
And apologies for implying you were solely a “loud contrarian”. I didn’t mean to imply that was your primary purpose for posting on LessWrong, just that I’d noticed many comments by you recently along those lines, and I was having difficulty interpreting your skepticism.