I was wondering about the LW consensus regarding molecular nanotechnology. Here’s a little poll:
How many years do you think it will take until molecular nanotechnology comes into existence? [pollid:417]
What is the probability that molecular nanotechnology will be developed before superhuman Artificial General Intelligence? [pollid:418]
Not sure how to vote indicating that ‘molecular nanotechnology’ is not a useful or sufficiently specific term, and that biology shows us the sorts of things that are actually possible (very versatile and useful, but very unlike the hilarious Drexler stuff you hear about now and then)...
Molecular nanotechnology is a defined term that most people on Less Wrong understand. I’m not going to write out paragraphs to explain the concept of MNT. If you want to familiarize yourself with the idea, then you can follow the links on the wiki link I posted.
With what probability? Do you want the point where we think there’s a 50% probability it comes sooner and a 50% probability it comes later, or 95/5?
I’m hoping to benefit from the wisdom of crowds, so don’t skew your answer in either direction.
Does that mean you want the 50/50 estimate?
Is there a way to see the results without voting? I don’t have a strong opinion about molecular nanotechnology.
I have a weak opinion about molecular nanotechnology vs superhuman AGI. Superhuman AGI probably requires extraordinary insights, while molecular nanotechnology is closer to a lot of grinding. However, this doesn’t give me a time frame.
I find it interesting that you have superhuman AGI rather than the more usual formulations—I’m taking that to mean an AGI which doesn’t necessarily self-improve.
It won’t let me enter a number that says “Drexlerian MNT defies physics”. What’s the maximum number of years I can put in?
You were absolutely eviscerated in the comments there. Thanks for posting.
You have an interesting definition of “absolutely eviscerated.” MOB mostly just seems to be tossing teacher’s passwords like they were bladed frisbees.
I think MOB is justly frustrated with others’ multiple logical failures and DG’s complete unwillingness to engage.
I didn’t write the post, Armondikov (a postdoc chemist) did, and he engaged at length.
You responded at DH0, which certainly didn’t help an already inflamed situation. That comment thread is what I was referring to; it’s also why I wrote “others’ multiple logical failures,” meaning Armondikov’s “it’s impossible until somebody builds it” argument.
Do you address the possibility of complex self-replicating proteins with complex behavior? It looks like the only thing addressed in the article is traditional robots scaled down to molecule size, and it (correctly) points out that that won’t work.