I upvoted your post because it seems relatively lucid and raises some important points, but I'd like to say that I'm in the middle of writing a fairly long, detailed explanation of why I agree with most of the gripes (e.g. AIs can't use magic to mine coal or build nanobots) and yet the object-level conclusions here are still untrue. In practice, I seriously doubt we would have more than a year to live after the release of an AGI with the long-term planning and reasoning abilities of most accountants, even without FOOM. People here shouldn’t assume that, because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises regarding how robot building and research could function in practice.
+1. If you don’t write that post, I will. :)
And if you want feedback on your draft I’d be happy to give it a read and leave comments.
For sure; I think I’m about 45% of the way through, I’ll send you a draft when it’s about 90% done :)
I’m also interested to read the draft, if you’re willing to send it to me.
The user who was authoring the draft has apparently deactivated their account. Are they still working on writing that post?
People here shouldn’t assume that, because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises regarding how robot building and research could function in practice.
I agree but unfortunately my Google-fu wasn’t strong enough to find detailed prior explanations of AGI vs. robot research. I’m looking forward to your explanation.
I’m looking forward to reading your post!!
One year. Would you be willing to bet on that?
It’s nice that you’re open to betting. What unambiguous sign would change your mind about the speed of AGI takeover, far enough before it happens that you’d still have time to make a positive impact afterwards? Nobody is interested in winning a bet where winning means “mankind gets wiped out”.
Yes, that’s the key issue. I’m not sure I can think of one. Do you have any ideas? I mean, what would be an unequivocal sign that AGI can take over within a year’s time? Something like a pre-AGI parasitizing a major computing center for X days before being discovered, as part of a plan to expand to other centres...? That would still not be a sign that we are pretty much f. up within a year, but it would definitely be a data point suggesting things can go bad very quickly.
What data point would make you change your mind in the opposite direction? I mean, something that happens and makes you say: yes, we could all die, but it won’t happen within a year; maybe in something like 30 years or more.
Edit: I originally posted these two paragraphs as separate comments; unifying them here for the sake of clarity.
He has a $100 bet with Bryan Caplan, inflation-adjusted. EY took Bryan’s money at the time of the bet, and pays it back if he loses.
Yes, but I don’t know if he really followed through. I see multiple problems with that implementation. First, the payout should be adjusted for inflation; otherwise the bet is about a much larger class of events than “end of the world”.
Next, there’s a high risk that the “doom” bettor will have spent all their money by the time the bet expires. And the “survivor” bettor will never actually see that money anyway.
Finally, I don’t think it’s interesting to win if the world ends. What’s more interesting is rallying doubters before it’s too late, in order to marginally raise our chances of survival.
It may still be useful as a symbolic tool, regardless of actual monetary value. $100 isn’t all that much in the grand scheme of things, but it’s the taking of the bet that matters.