> I assume the reason that MNT is added to a discussion on AI is because we’re trying to make the future sound more plausible by adding burdensome details.
This is unreasonably accusatory. I’m pretty sure MNT is added to the discussion because people here such as Eliezer and Anissimov and Vassar believe it to be both possible and a likely thing for an AI to do.
> Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded.
Isn’t this the argument creationists use against evolution? More seriously, nature does nano-assembly constantly, and with remarkable precision, in ways we have yet to fully understand or control. That means there is at least that much about MNT that we’re simply “not smart enough” to understand yet. Consider a field like transfection, where you can buy some reagents and cells from Sigma or whoever and make the cells produce your own custom proteins. This is far, far in advance of what we could do 100 years ago, but it is arguably only a matter of being “smarter” and/or knowing more, rather than anything else. Calcium phosphate transfection doesn’t even use novel chemicals, and yet it was only discovered in 1973.
Nature does nano-assembly, but it isn’t arbitrary nano-assembly.
My example of a very hard nano-assembly problem is a ham sandwich, with the hardest part being the lettuce. It’s possible that the easiest way to make a lettuce leaf—they still have live cells—is to grow a head of lettuce.
Maybe the right question (setting aside where MNT fits with AI) is to ask which parts of MNT look feasible at present levels of knowledge.
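For what it’s worth, the “shaking a bin of lego” analogy can be made quantitative with a toy back-of-the-envelope sketch. The part counts and states-per-part below are illustrative assumptions, not measured values; the point is just how fast undirected assembly becomes hopeless compared with directed, templated assembly:

```python
# Toy estimate of why undirected ("shaken-bin") assembly fails.
# All numbers here are illustrative assumptions, not real measurements.

def random_assembly_odds(parts: int, states_per_part: int) -> float:
    """Chance that one random configuration matches a single target
    structure, assuming each part independently lands in one of
    `states_per_part` distinguishable positions/orientations."""
    return 1.0 / (states_per_part ** parts)

# Even a tiny 20-part structure with 10 states per part:
p = random_assembly_odds(20, 10)
print(f"odds per shake: {p:.1e}")  # one in 10^20 per shake

# Directed assembly sidesteps this search entirely: a ribosome, for
# example, places amino acids one at a time against a template rather
# than sampling whole configurations at random.
```

This is, of course, an argument about *blind* assembly, not about MNT as such; the pro-MNT position is precisely that assembly can be directed rather than random.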
> This is unreasonably accusatory. I’m pretty sure MNT is added to the discussion because people here such as Eliezer and Anissimov and Vassar believe it to be both possible and a likely thing for an AI to do.
Pointing out a possible mental bias isn’t accusatory.
This is precisely what I meant. In some examples the line of reasoning “AI->MNT->we’re all dead if it’s not friendly” is specifically prefaced with the discussion that any detailed example is inherently less plausible, but adding the details is supposed to make it feel more believable. My whole argument is that I think this specific detail will backfire in the “making it feel more believable” department for someone who does not already believe in MNT and other transhumanist memes.
> I’m pretty sure MNT is added to the discussion because people here such as Eliezer and Anissimov and Vassar believe it to be both possible and a likely thing for an AI to do.
Whether or not MNT is a likely tool of AI (I think it is), IIRC it is usually used as a lower bound on what an AI can do. This answers leplen’s objection that MNT is a burdensome detail: saying “an AI could, for example, use MNT to take over the world” is only as burdensome as the claim that MNT, or some other similarly powerful technology, is possible.
> Pointing out a possible mental bias isn’t accusatory.
I read that phrase as implying that MNT was consciously added to help convince others about FAI, not that it was an unconscious bias that, e.g., Eliezer had.