Solid, bold post.
Eliezer’s comments on his personal importance to humanity remind me of the Total Perspective Device from Hitchhiker’s. Everyone who gets perspective from the TPD goes mad; Zaphod Beeblebrox goes in and finds out he’s the most important person in human history.
Eliezer’s saying he’s Zaphod Beeblebrox. Maybe he is, but I’m betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don’t believe that they and only they can accomplish the task.
I also don’t care whether the statements are social naiveté or not; I think the statements indicating that he is the most important person in human history—and that seems to me to be what he’s saying—are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.
And that’s a good thing to know: throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting the word out about existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it’s not going to solve it. I won’t give to SIAI even if Eliezer stops saying these things, because it appears he’ll still be thinking them.
I expect AI progress to come incrementally, BTW—I don’t expect the Foomination. And I expect it to come from Google or someone similar: a large group of really smart, really hard-working people.
I could be wrong.
--JRM
I’d like to point out that it’s not either/or: it’s possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.
I would say likely, conditional on eventual FOOM. The alternative means both a concentration of probability mass in the next ten years and that the relevant theory and tools are almost wholly complete.
And saddened once again at how people seem unable to distinguish between “multi claims that something Eliezer said could be construed as claim X” and “Eliezer claimed X!”
Please note that for the next time you’re worried about damaging an important cause’s PR, multi.
Um, I wasn’t basing my conclusion on multifoliaterose’s statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.
Which of these statements do you reject?

1. FAI is the most important project on earth, right now, and probably ever.

2. FAI may be the difference between doom and salvation for a multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

3. You are the most likely person—and SIAI the most likely agency, because of you—to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.
And then you’ve blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.
And your final statement, that multifoliaterose is damaging an important cause’s PR, appears entirely deaf to multi’s post. He’s trying to help the cause—he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated “Eliezer Yudkowsky” with “AI existential risk.”
Again.
I might be wrong about my interpretation—but I don’t think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn’t in the collective ignorance and stupidity of other people. Just a thought.
--JRM
Which of those statements do you reject?
My understanding of JRMayne’s remark is that he himself construes your statements in the way that I mentioned in my post.
If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.
Note that you have not made a disclaimer which rules out the claim that you’re the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.
I have to disagree based on the following evidence:
Q: The only two legitimate occupations for an intelligent person in our current world? (Answer)
and
“At present I do not know of any other person who could do that.” (Reference)
This makes it reasonable to state that you think you might be the most important person in the world.
I love that “makes it reasonable” part. Especially in a discussion on what you shouldn’t say in public.
Now we’re to avoid stating any premises from which any absurd conclusions seem reasonable to infer?
This would be a reductio of the original post if the average audience member consistently applied this sort of reasoning; but of course it is motivated on XiXiDu’s part, not necessarily something the average audience member would do.
Note that saying “But you must therefore argue X...” where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.
Gosh, I find this all quite cryptic.
Suppose I, as Lord Chief Prosecutor of the Heathens, say:
All heathens should be jailed.
Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.
One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that’s an unpopular position. This is a mark of a political argument gone wrong?
I’m trying to find another logical conclusion to XiXiDu’s cited statements (or a raft of others in the same vein). Is there one I don’t see? Is it just that you’re probably the most important entity in history, but, you know, maybe not? Is it that there’s only a 5% chance that you’re the most important person in human history?
I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.
In this case I would ask you whether you really want Joe jailed, or whether, when you said that “All heathens should be jailed”, you were using the word “heathen” in the strong sense of explicitly rejecting the “One True God”, rather than the weak sense in which Joe is a “heathen” merely for not understanding the concept.
And if you answer that you meant only that strong heathens should be jailed, I would still condemn you for that policy.
I’m too dumb to grasp what you just said in its full complexity. But I believe you are indeed one of the most important people in the world. Further, (1) I don’t see what is wrong with that; (2) it is positive for public relations, as it attracts people to donate money (evidence: Jesus); (3) it won’t hurt academic relations, as you are always able to claim that you were misunderstood.
I’m sorry for the other comment. I was just trying to take it lightly, i.e. joking. You are right of course.
But someone like me would infer from the given evidence that you think you are important. And I don’t think it is wise to downplay your importance where public relations are concerned.
Yeah, but that is part of public relations and has to be taken into account.