We can’t know how the costs will change between the first and thousandth fusion power plant.
Fusion plants are manufactured. By default, our assumption should be that plant costs follow typical experience curve behavior. Most technologies involving production of physical goods do. Whatever the learning rate for fusion turns out to be, if each doubling of cumulative production multiplies unit cost by x, then the 1000th plant will likely cost close to x^10 times the first, since 1000 ≈ 2^10. Obviously the details depend on other factors, but this should be the default starting assumption. Yes, the eventual impact assumption should be significant societal and technological transformation via cheaper and more abundant electricity. But the scale for that transformation is measured in decades, and there are humans designing and permitting and building and operating each and every plant, on human timescales. There’s no winner-take-all dynamic even if your leading competitor builds their first commercial plant five years before you do.
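To make the x^10 arithmetic concrete, here is a minimal sketch of the standard experience-curve model. The specific multipliers are illustrative assumptions, not claims about fusion’s actual learning rate:

```python
import math

def unit_cost(first_unit_cost, n, per_doubling_multiplier):
    """Cost of the n-th unit under an experience curve.

    per_doubling_multiplier = x means each doubling of cumulative
    production multiplies unit cost by x (e.g. x = 0.8 is a 20%
    cost reduction per doubling).
    """
    doublings = math.log2(n)  # 1000 units is ~9.97 doublings, close to 10
    return first_unit_cost * per_doubling_multiplier ** doublings

# Illustrative learning rates only; actual fusion rates are unknown.
for x in (0.95, 0.90, 0.80):
    ratio = unit_cost(1.0, 1000, x)
    print(f"x = {x}: 1000th plant costs ~{ratio:.3f} x the first plant")
```

Even a modest x = 0.9 (10% reduction per doubling) puts the 1000th plant at roughly a third of the first plant’s cost, which is why the experience-curve default matters so much for forecasting.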
Also: if we don’t get fusion, we have other credible paths that could greatly increase access to comparably low-cost, dispatchable clean power on a similar development timescale.
We don’t know if foom is going to be a thing.
Also true, which means the default assumption without it is that the scaling behavior looks like the scaling behavior of other successful software innovations. In software, development costs are high and then unit costs in deployment quickly fall to near zero. As long as AI benefits from collecting user data to improve training (which should still be true in many non-foom scenarios), we might expect network-effect scaling, where the first to really capture a market niche becomes almost uncatchable, like Meta, Google, and Amazon. Or, where downstream app layers are built on the software’s functionality, switching costs become very high and you get a substantial amount of lock-in, like with Apple and Microsoft.
Even if foom is going to happen, things would look very different if the leaders credibly committed to helping others foom if they get there first. I don’t know if this would be better or worse from an existential risk perspective, but it would change the nature of the race a lot.
Agreed. But if any of the leading labs could credibly state what kinds of things they would or wouldn’t be able to do in a foom scenario, let alone credibly precommit to what they would actually do, I would feel a whole lot better and safer about the possibility. Instead, the leaders can’t even credibly precommit to their own stated policies in the absence of foom, and they don’t have anywhere near a credible plan for managing foom if it happens.