katydee, replying to Michael Vassar:

“If such a technique is fast enough and reliable enough I would literally expect its development to solve all of the world’s problems within a half century in the absence of a Singularity before then.”

This seems like an incredibly strong claim, especially given the divisions and arguments even among Less Wrong posters. Perhaps WrongBot is merely low-level and misguided, and should listen to more advanced users and mend his ways; but what about Roko, for instance?
Well, what Vassar is saying here is tricky! If I say, for example, that if I could do X, I could make a billion dollars and get Eliezer to admit that I am smarter than he is, then I am in effect saying that X is really hard to do.

As to why Vassar would say such a thing: a few days ago, WrongBot was complaining (and getting upvoted for it) that in his criticism of WrongBot, Vassar had not said anything that WrongBot could use to improve WrongBot’s rationality. So my guess is that this is Vassar’s somewhat roundabout way of saying to WrongBot that doing that is really hard, and that if WrongBot ever comes to have any good suggestions on how to do it, he should share them with everyone here.
I hasten to add that today WrongBot was careful in his wording to avoid implying that anyone here had any obligation to improve WrongBot’s writing or thinking (but of course this careful wording came after Vassar’s comment).
I might be interpreting Michael Vassar’s post incorrectly, but it seemed like an authentic, if radically optimistic, suggestion and not a hyperbolic or sarcastic one.
It wasn’t sarcastic. I really do think that it’s fairly likely to be possible, but extremely difficult. OTOH, I think that many extremely difficult things are worth attempting; that’s why SIAI exists, after all. LW posters may disagree fairly frequently, but that’s probably in large part because there are so few of us that we don’t really have time to collectively build an official correct world-view which is far better than any of us could construct on our own.
I really do think my claim about the implications of developing such a technique is correct, and in fact understated, and that this follows trivially from the fact that the world has resources far beyond what would be needed to solve its problems if those resources were allocated even halfway sanely. A large number of Rokos would definitely be enough to do the job.
I’ll redouble my efforts, then. This topic also probably deserves a thread of its own.
If I say that “if you could travel backward in time by arranging four flux capacitors into a Wheatstone bridge, someone probably would have travelled back in time already, and consequently you probably cannot travel back in time by arranging flux capacitors into a Wheatstone bridge,” I am being neither hyperbolic nor sarcastic (nor am I being optimistic).
Thank you for continuing to engage after my rather silly reply; while writing a more detailed response to your latest post, I figured out what you meant originally. I now agree with your earlier interpretation of Michael Vassar’s post, though I am still skeptical of the jump from “dramatically expanding LW” to “solving all the world’s problems without a singularity.”
Your skepticism of the jump is reasonable and understandable. Note, however, that having served as President of the Singularity Institute for the last two years or so, Vassar has a great deal of experience thinking about the global situation.
My pleasure.
Agreed.