I feel confused by how broad this is, i.e., “any example in history.” Governments regulate technology for the purpose of safety all the time. Almost every product you use and consume has been regulated to adhere to safety standards, hence making them less competitive (i.e., they could be cheaper and perhaps better according to some if they didn’t have to adhere to them). I’m assuming that you believe this route is unlikely to work, but it seems to me that this has some burden of explanation which hasn’t yet been made. I.e., I don’t think the only relevant question here is whether it’s competitive enough such that AI labs would adopt it naturally, but also whether governments would be willing to make that cost/benefit tradeoff in the name of safety (which requires eg believing in the risks enough, believing this would help, actually having the viable substitute in time, etc.). But that feels like a different question to me from “has humanity ever managed to make a technology less competitive but safer,” where the answer is clearly yes.
My comment was a little ambiguous. What I meant was: human society purposely and differentially researching and developing technology X instead of Y, where Y produces a public (global) harm Z but has private benefits, and X is based on a different design principle than Y, is slightly less competitive, but is still able to replace Y.
A good example would be the development of renewable energy to replace fossil fuels to prevent climate change.
The new tech (fusion, fission, solar, wind) is based on different fundamental principles than the old tech (oil and gas).
Let's zoom in:
Fusion would be an example, but it is perpetually thirty years away. Fission works, but it wasn't purposely developed to fight climate change. Wind is not competitive without large subsidies and most likely never will be.
Solar is at least of limited competitiveness with fossil fuels [though because of load balancing it may not be able to replace fossil fuels completely], was purposely developed out of environmental concerns, and would be the best example.
I think my main question mark here is: solar energy is still a promise. It hasn't even begun to make a dent in total energy consumption (a quick Perplexity search suggests only about 2 percent of global energy is solar-generated). Despite the hype, it is not clear climate change will be solved by solar energy.
Moreover, the real question is to what degree the development of competitive solar energy was the result of purposeful policy. People like to believe that tech-development subsidies have a large counterfactual impact, but imho this needs to be explicitly demonstrated, and my prior is that the effect is probably small compared to the overall general development of technology and economic incentives that are not downstream of subsidies or government policy.
Let me contrast this with two different approaches to solving a problem Z (climate change):
1. Deploy existing competitive technology (fission).
2. Solve the problem directly (geo-engineering).
It seems to me that in general the latter two approaches have a far better track record of counterfactually Actually Solving the Problem.
But we don't need to speculate about that in the case of AI! We know roughly how much money we'll need for a given size of AI experiment (e.g., a training run). The question is one of raising the money to do it. With a strong enough safety case vs. the competition, it might be possible.
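To illustrate the "we know roughly how much money a given experiment costs" point, here is a minimal back-of-envelope sketch. It assumes the common FLOPs ≈ 6 × parameters × tokens rule of thumb; the throughput, utilization, and price figures are purely illustrative assumptions of mine, not numbers from the discussion above.

```python
# Back-of-envelope training-run cost estimate.
# Assumptions (illustrative, not from the original comment):
#   - compute ~ 6 * n_params * n_tokens FLOPs (a common rule of thumb)
#   - accelerator peak throughput, utilization, and rental price below

def estimate_training_cost_usd(
    n_params: float,                 # model parameters
    n_tokens: float,                 # training tokens
    gpu_flops: float = 1e15,         # peak FLOP/s per accelerator (assumed)
    utilization: float = 0.4,        # fraction of peak actually achieved (assumed)
    usd_per_gpu_hour: float = 2.0,   # rental price per accelerator-hour (assumed)
) -> float:
    total_flops = 6.0 * n_params * n_tokens
    gpu_seconds = total_flops / (gpu_flops * utilization)
    gpu_hours = gpu_seconds / 3600.0
    return gpu_hours * usd_per_gpu_hour

if __name__ == "__main__":
    # e.g. a hypothetical 70B-parameter model trained on 1.4T tokens
    cost = estimate_training_cost_usd(n_params=70e9, n_tokens=1.4e12)
    print(f"rough compute cost: ~${cost:,.0f}")
```

The exact numbers matter less than the point: unlike solar subsidies, the funding gap for a "safer but slightly less competitive" AI research program can be estimated up front rather than guessed at.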
I'm curious if you think there are any better routes; i.e., setting aside the possibility of researching safer AI technology and working towards its adoption, what overall strategy would you suggest for AI safety?