There is perhaps a status quo bias which sees removing something already there as safer than adding something not already there, but I don’t think that’s particularly relevant.
It is particularly relevant, because the regulators are running on corrupted hardware, and the consequence of bias and/or abuse by regulators is much greater for adding things than for taking them out.
Adding substances to the water to sterilize everyone, and taking substances out so that water-without-substances sterilizes everyone would be similar—except that the second is not possible and the first is.
and the consequence of bias and/or abuse by regulators is much greater for adding things than for taking them out.
I agree for “things” as a general class, but once we’ve conditioned on a particular thing (“what’s the optimal level of chemical Q?”) it seems to me that we should have symmetric levels of knowledge about moving the level of that thing up and down when it’s possible to move both directions. (Fluoride and lithium groundwater levels already vary significantly between areas—that’s how we discovered their effects in the first place—and so saying “let’s artificially make our groundwater like their groundwater” doesn’t seem that prone to bias or abuse.)
It may be worth explicitly mentioning that if we’re introducing something completely novel, then when we shrink the state of the evidence towards the reference class, the level of danger for “completely novel thing” is higher than the level of danger for “abundant common thing.” I expect this effect to be minor, though, and many completely novel things are actually much better studied than abundant common things, because the abundant common things were grandfathered in rather than receiving serious scrutiny. (Here I’m thinking of particular artificial sweeteners, which were proven safe at levels where sugar would be toxic, because testers had to go to dramatically higher levels of the artificial sweetener to find any toxicity.)
I suppose we should also mention the argument that if we create the ability to add molecules to the water supply, that ability could be corrupted to nefarious ends, but I think that’s a fully general argument against any infrastructure development, and should be responded to by investing in security (and secure design) rather than by not investing in infrastructure.
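The shrinkage point above can be sketched numerically. This is a toy illustration only, with all numbers invented: a direct risk estimate is combined with a reference-class prior by precision weighting, so the less direct evidence we have about a substance, the more its estimate is pulled toward the prior.

```python
def shrink(estimate, est_var, prior_mean, prior_var):
    """Precision-weighted blend of a direct estimate and a reference-class prior."""
    w = prior_var / (prior_var + est_var)  # weight on the direct estimate
    return w * estimate + (1 - w) * prior_mean

# A completely novel substance: little direct evidence (high variance),
# so its low direct estimate is pulled strongly toward the riskier prior.
novel = shrink(estimate=0.1, est_var=4.0, prior_mean=1.0, prior_var=1.0)

# An abundant common substance: lots of direct evidence (low variance),
# so the same direct estimate barely moves.
common = shrink(estimate=0.1, est_var=0.1, prior_mean=1.0, prior_var=1.0)

assert novel > common  # the novel thing ends up judged more dangerous
```

Under these made-up numbers the novel substance lands near 0.82 and the common one near 0.18, matching the claim that, evidence held equal on paper, novelty alone raises the shrunken danger estimate.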
but once we’ve conditioned on a particular thing (“what’s the optimal level of chemical Q?”) it seems to me that we should have symmetric levels of knowledge about moving the level of that thing up and down when it’s possible to move both directions.
Moving the level of that thing down is limited at 0, and thus the effect of bias and abuse is also limited. Moving the level up is not so limited.
Deciding that you’ll condition on a particular thing is itself subject to the same bias and abuse that deciding to add something is. Imagine regulators saying “we’ve already decided that we’re going to add sterility drugs to the water, we just need to decide how much”. It’s also solved the same way; just like you say “without satisfying very high standards, you may only filter stuff out and not add stuff”, you say “without satisfying very high standards, you may only condition on things that are already present in significant amounts”.
I think that’s a fully general argument against any infrastructure development, and should be responded to by investing in security (and secure design) rather than not investing in infrastructure.
It is possible to have a multi-peaked preference where directly saying “we’ll create infrastructure, and then we’ll use it in X way” is opposed by a majority, while doing it in two steps as “we’ll create infrastructure which cannot be used in X way” and “now that we have infrastructure, we should remove the security and use it in X way” has each step supported by a majority.
To oppose such things, you have to oppose the first step. (And of course, not everything has multi-peaked preferences, so this is not a fully general argument.)
(That link also describes other slippery slope mechanisms which may apply.)