I don’t think there’s any point in doing armchair diagnoses and accusing people of delusions of grandeur. I wouldn’t go so far as to claim that Eliezer needs more self-doubt, in a psychological sense. That’s an awfully personal statement to make publicly. It’s not self-confidence I’m worried about; it’s insularity.
Here’s the thing. The whole SIAI project is not publicly affiliated with (as far as I’ve heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don’t have guest posts from Dr. X or Think Tank Fellow Y. The ideas related to friendly AI and existential risk have not been shopped to academia or evaluated by scientists in the usual way. So they’re not being tested stringently enough.
It’s speculative. It feels fuzzy to me—I’m not an expert in AI, but I have some education in math, and things feel fuzzy around here.
If you want to claim you’re working on a project that may save the world, fine. But there’s got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies—it works more or less like science.
If I’m completely off base here and SIAI is going to get to the science soon, I apologize, and I’ll shut up about this for a while.
But look. All this advice about the “sin of underconfidence” is all very well (and actually I’ve taken it to heart somewhat). But if you’re going to test your abilities, then test them. Against skeptics. Against people who’ll look at you like you’re a rotten fish if you don’t have a graduate degree. Get something about FAI peer-reviewed or published by a reputable press. Show us something.
Sorry to be so blunt. It’s just that I want this to be something. And I have my doubts, because there doesn’t seem to be enough in this floating world in the way of unmistakable, concrete achievement.
According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count? Many Dr. Xes have spoken at the Singularity Summits.
It’s not clear how one would use past data to give evidence for or against a UFAI threat in any straightforward way. There are various kinds of indirect evidence that could be presented, and SIAI has indeed been trying more in the last year or two to publish articles and give conference talks presenting such evidence.
Points that SIAI would do better if it had better PR, had more transparency, published more in the scientific literature, etc., are all well-taken, but these things use limited resources, which to me makes it sound strange to use them as arguments to direct funding elsewhere.
My post was by way of explaining why some people (including myself) doubt the claims of SIAI. People doubt claims when, compared to other claims, they’re not justified as rigorously, or haven’t met certain public standards. Why do I agree with the main post that Eliezer isn’t justified in his opinion of his own importance (and SIAI’s importance)? Because there isn’t (yet) a lot beyond speculation here.
I understand about limited resources. If I were trying to run a foundation like SIAI, I might do exactly what it’s doing, at first, and then try to get the academic credentials. But as an outside person, trying to determine: is this worth my time? Is this worth further study? Is this a field I could work in? Is this worth my giving away part of my (currently puny) income in donations? I’m likely to hold off until I see something stronger.
And I’m likely to be turned off by statements with a tone that assumes anyone sufficiently rational should already be on board. Well, no! It’s not an obvious, open-and-shut deal.
What if there were an organization composed of idealistic, speculative types who, unknowingly, got themselves to believe something completely false based on sketchy philosophical arguments? They might look a lot like SIAI. Could an outside observer distinguish fruitful non-mainstream speculation from pointless non-mainstream speculation?
I think they are working on their “academic credentials”:
http://singinst.org/grants/challenge
...lists some 13 academic papers in various stages of development.
Thanks for that last link. The paper on “Changing the frame of AI futurism” is extremely relevant to this series of posts.
I contacted Nick Bostrom about this and he said that there’s no formal relationship between FHI and SIAI.
See my comments here, here and here.
LessWrong is itself a joint project of the SIAI and the Future of Humanity Institute at Oxford. Researchers at the SIAI have published these academic papers. The Singularity Summit’s website includes a lengthy list of partners, including Google and Scientific American.
The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven’t done a terrible one either, and accusations that they aren’t trying are, so far as I am able to determine, factually inaccurate.
But those don’t really qualify as “published academic papers” in the sense that those terms are usually understood in academia. They are instead “research reports” or “technical reports”.
The one additional hoop that these high-quality articles should pass through before they earn the status of true academic publications is to actually be published—i.e., accepted by a reputable (paper or online) journal. This hoop exists for a variety of reasons: it certifies that the research has received at least a modicum of unbiased review, provides a locus for post-publication critique (at least a letters-to-the-editor column), and promises stable curatorship, plus inclusion in citation indexes and the like.
Perhaps the FHI should sponsor a journal, to serve as a venue and repository for research articles like these.
There are already relevant niche philosophy journals (Ethics and Information Technology, Minds and Machines, and Philosophy and Technology). Robin Hanson’s “Economic Growth Given Machine Intelligence” has been accepted in an AI journal, and there are forecasting journals like Technological Forecasting and Social Change. For more unusual topics, there’s the Journal of Evolution and Technology. SIAI folk are working to submit the current crop of papers for publication.
Cool!
Okay, I take that back. I did know about the connection between SIAI and FHI and Oxford.
What are these academic papers published in? A lot of them don’t provide that information; one is in Global Catastrophic Risks.
At any rate, I exaggerated in saying there isn’t any engagement with the academic mainstream. But it looks like it’s not very much. And I recall a post of Eliezer’s that said, roughly, “It’s not that academia has rejected my ideas, it’s that I haven’t done the work of trying to get academia’s attention.” Well, why not?
Limited time and more important objectives, I would assume. Most academic work is not substantially better than trial-and-error in terms of usefulness and accuracy; it gets by on volume. Volume is a detriment in Friendliness research, because errors can have large detrimental effects relative to the size of the error. (Like the accidental creation of a paperclipper.)
If you want it done, feel free to do it yourself. :)
… particularly in as much as they have become (somewhat) obsolete.
Can you clarify please?
Basically, no. Whatever I meant seems to have been lost to me in the temporal context.
No worries, I do the same thing sometimes.
Possibly because this blog is Less Wrong, positioned as “a community blog devoted to refining the art of human rationality”, and not as the SIAI blog, or an existential risk blog, or an FAI blog.
I respectfully disagree with this statement, at least as an absolute. I believe that:
(A) In situations in which people are making significant life choices based on person X’s claims and person X exhibits behavior which is highly correlated with delusions of grandeur, it’s appropriate to raise the possibility that person X’s claims arise from delusions of grandeur and ask that person X publicly address this possibility.
(B) When one raises the possibility that somebody is suffering from delusions of grandeur, this should be done in as polite and nonconfrontational a way as possible, given the nature of the topic.
I believe that if more people adopted these practices, it would raise the sanity waterline.
I believe that the situation with respect to Eliezer and portions of the LW community is as in (A) and that I made a good faith effort at (B).
I agree with your conclusion but not this part:
I categorically do not want statistical studies of the type you mention done. I do want solid academic research done, but not experiments. Some statistics on, for example, human predictions vs. actual time until successful completion on tasks of various difficulties would be useful. But those do not appear to be the type of studies you are asking for, nor do they target the most significant parts of the conclusion.
You are not entitled to that particular proof.
EDIT: The ‘entitlement’ link was broken.
There are these fellows:
http://singinst.org/aboutus/advisors
Some of them have contributed here:
http://singinst.org/media/interviews
I only wish it were possible to upvote this comment more than once.