I’m commenting a few days after the main flurry of discussion, and I want to raise a concern: the OP and many of the comments seem to conflate (1) effective political advocacy aimed at ignorant people who will stick with whatever the absurdity heuristic hands them, even when it gives false results, and (2) truth-seeking analysis based on detailed mechanistic consideration of how the world is likely to work.
Consider the 2x2 grid where, on one axis, we’re working either in an epistemically unhygienic advocacy frame where it’s OK to say false things that get people to support the right conclusion or policy, or in a truth-seeking frame where you grind from the facts to the conclusion with high-quality reasoning at each stage, for the sake of figuring things out from scratch; and where, on the second axis, Leplen’s dismissal of MNT is either coherently founded and on the right track, or just a misfiring absurdity heuristic.
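For concreteness, here is the grid as I understand it (the box labels are just my shorthand for the paragraphs that follow):

```
                       Leplen's dismissal justified     Leplen wrong (MNT workable)
Advocacy frame         the OP's implicit box            policy gaps from dropping MNT
Truth-seeking frame    an education gap for LW          local community learning goes awry
```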
I think in this forum it can generally be assumed that “FAI is important” is the background conclusion, and also a message that it is probably beneficial to advocate on behalf of.
If I had read the chain of reasoning smart computer->nanobots before I had built up a store of good-will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
Leplen’s claim here is about Leplen’s historically contingent reasoning processes, not about the object-level workability of MNT, and it is raised as though Leplen is a fairly normal person whose reaction to MNT is common enough to indicate how the subject will play with many other people. So it comes from the part of the 2x2 grid that is firmly “advocacy rather than truth”, while mostly assuming “Leplen’s reaction is justified”. I think it is worth spelling out what it would look like to explore the other three boxes.
If we retain the FAI-promoting advocacy perspective but imagine that Leplen is wrong, because “MNT magic” is actually something future scientists or an AGI could pull together and deploy, then the substantive cost to the world might be that courses of action which matter if MNT is a real concern go unaddressed by the people mobilized by “just FAI, not MNT” advocacy. If basically the same AGI-safety strategy is appropriate whether or not an AGI would head toward MNT as a lower bound on the speed and power of the weapons it could invent, then dropping MNT from the advocacy can’t really harm anything. If the appropriate policies are different enough that lots of people convinced of “FAI without MNT” would object to “FAI with MNT” protection measures, then dropping MNT from the advocacy could be net harmful to the world.
If we retain the idea that Leplen’s dismissal of MNT is coherent and justified, but flip to a truth-seeking frame (while keeping in mind the background belief of many old-time LWers that MNT is probably important to think about), then the arguments offered to actually change people’s minds for coherent reasons seem lacking. From a truth-seeking perspective it doesn’t matter what turns people on or off if their opinions aren’t themselves important indicators of how the world actually is. The only formal credential offered is in materials science, and it is raised from within an advocacy frame where Leplen admits that motivated cognition (defensiveness, a desire not to have skills become obsolete) could account for their attitude toward MNT. Lots of people don’t want to become obsolete, so this is useful evidence for figuring out how to convince similarly fearful people of the importance of FAI by dropping other things that might make FAI advocacy harder. But someone who already has chemistry experience, has read Nanosystems, and still thinks MNT matters will be mostly unmoved toward “MNT is unimportant based on object-level science considerations” by the advocacy-level arguments here. Something more would need to be offered than hand-waving and a report of emotional antibodies to a certain topic. So, presuming that Leplen’s dismissal of MNT is on track, and that many LWers think MNT is important, there seems to be an education gap: the LW mainstream could be significantly helped by learning the object-level reasoning that justifies Leplen’s dismissal of MNT. Where, presuming it went off the rails somewhere, did Nanosystems go off the rails?
The fourth and final box of the 2x2 grid is for wondering what things would look like if we were in a truth-seeking, communal-learning mode (not worried about advocacy among random people) and Leplen was wrong to dismiss MNT. In this mode the admixture of advocacy and truth while taking Leplen seriously seems pretty bad, because the very local educational process this week on this website would be going awry. It is understandable that Leplen’s reaction is relevant to one of LW’s central advocacy issues, and Leplen seems friendly to that project… and yet from the perspective of an attempt to build community knowledge in the direction of taking serious things seriously, believing true things for good reasons, and disbelieving false things when the evidence pushes that way… the conflation is mildly disturbing.
Building molecular nanotechnology seems to me to be roughly equivalent to being able to make arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded.
This is a bad argument. It doesn’t even take into account the distinction between bootstrapping from scratch to a single working general assembler, and what would follow assuming the key atoms could be put into the right places once (e.g. whether, and how expensively, an assembler could build copies of itself). The “bootstrap difficulty” question and the “mature scaleout” question are different questions, and the discussion seems to be papering over the distinction. The badness of this argument was gently pointed out by drethelin, but somehow not in a way that was highly upvoted, I suspect because it didn’t take the (probably?) praiseworthy advocacy concerns into account.
To be clear, I’m friendly to the idea that MNT might not be physically possible, or, if possible, might not be efficient. I’m not a huge expert here and would like to be better educated on the subject. And I’m friendly to the idea of designing AGI advocacy messages that gain traction and motivate people to do things that actually improve the world. I’m just trying to point out that mixing both of these concerns into the same rhetorical ball seems to do a disservice to both...
Which is pretty ironic, considering that “mixing FAI and MNT together seems politically problematic” is the general claim of the article. Mostly I guess I’m just saying that the situation is even more complicated than the article allows, because now, instead of sometimes doping the FAI discussions with MNT, we’re fully admixing FAI, MNT, and political advocacy.
It is possible to have expert experience in chemistry and to find MNT preposterous for reasons derived from that experience. In fact, it’s a common reaction; not totally universal, but very common. And the second quote from leplen sums up why, quite nicely and accurately. Even if one trusts the calculations in Nanosystems regarding the stability of the various structures on display there, they will still look like complete fantasy to someone used to ordinary methods of chemical synthesis, which really do resemble “shaking a large bin of lego in a particular way while blindfolded”!
Nanosystems itself won’t do much to convince someone who thinks that assembly is the main barrier to the existence of such structures. Maybe subsequent papers by Merkle and Freitas would help a little. They argue that you could store HCCH (acetylene) in the interior of nanotubes as a supply of carbon, which can then be extracted, manipulated, and put into place, provided you work with great delicacy and precision…
But it is a highly nontrivial assertion that positional control of small groups of atoms, such as one sees in enzymatic reactions, can be extended so far as to allow the synthesis of diamond through atom-stacking by nanomechanisms. Chemists have a right to be skeptical about that, and if they run across an intellectual community where people blithely talk of an AI ordering a few enzymes in the mail and then quickly bootstrapping its way to possession of a world-eating nanobot army, then they really do have a reason to think that there might be crackpots thereabouts; or, more charitably, people who don’t know the difference between science fiction and reality.