Excellent post, thank you. I look forward to playing with the notebook model.
One observation on the timelines (which I realise is somewhat tangential to the core content of the post): your estimates around compute cost scaling seem quite optimistic to me.
Memory and compute density scaling is already dropping off, and meanwhile everyone seems to be struggling with yields on sub-3 nm nodes despite huge spend, which exacerbates the problem for cost scaling.
https://fuse.wikichip.org/news/7343/iedm-2022-did-we-just-witness-the-death-of-sram/
https://www.semianalysis.com/p/tsmcs-3nm-conundrum-does-it-even
We don’t really have many more promising technologies in the pipeline to radically reduce cost per transistor, so I expect improvements to slow quite a bit towards the end of this decade.
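To make the stakes of a slowdown concrete, here is a toy sketch of how a change in the cost-per-transistor halving time compounds over a few years. The 2-year and 4-year halving times are illustrative assumptions for the arithmetic, not measured figures.

```python
# Illustrative only: how a slowdown in cost-per-transistor scaling compounds.
# Both halving times below are assumptions for the sketch, not data.

def relative_cost(years: float, halving_time: float) -> float:
    """Cost per transistor relative to today, assuming a fixed halving time."""
    return 0.5 ** (years / halving_time)

fast = relative_cost(8, 2.0)  # historical-ish ~2-year halving: 0.5**4 = 0.0625
slow = relative_cost(8, 4.0)  # hypothetical slowed ~4-year halving: 0.5**2 = 0.25
print(fast, slow, slow / fast)  # the slowed trend leaves costs 4x higher after 8 years
```

The point is just that even a modest stretch in the halving time leaves a multiplicative gap that grows every year, which is why the back half of the decade matters so much for compute cost projections.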
Your assertion that we don’t have many ways to reduce cost per transistor may be true, but it isn’t supported by the rest of your comment or links: reductions in transistor size and similar performance-improving measures are not the only way to improve cost performance.
Sorry, I agree that comment and those links left some big inferential gaps.
I believe the link below is more holistic and doesn’t leave such big leaps (admittedly it has some 2021-specific themes that haven’t aged so well, but I don’t believe they undermine the core argument).
https://www.fabricatedknowledge.com/p/the-rising-tide-of-semiconductor
This still leaves a gap between cost per transistor and overall compute cost, but that’s a much smaller leap, e.g. clock frequency being bound by physical constraints like the speed of light.
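The speed-of-light bound is easy to make concrete with back-of-envelope arithmetic: at a given clock frequency, a signal can only travel so far per cycle, which caps how large a synchronously-clocked region can be. The 5 GHz figure below is just an assumed modern high-end clock, and real on-chip signals propagate well below the vacuum speed of light, so the true limit is tighter.

```python
# Back-of-envelope: how far light travels in one clock cycle.
c = 3.0e8        # speed of light in vacuum, m/s (on-chip signals are slower)
freq = 5.0e9     # assumed clock frequency, Hz (~a modern high-end CPU)

cycle_time = 1.0 / freq            # seconds per cycle: 0.2 ns
distance_cm = c * cycle_time * 100  # centimetres per cycle

print(f"{distance_cm:.1f} cm per cycle")  # 6.0 cm
```

Since a large die is already a couple of centimetres across, and practical signal speeds are a fraction of c, there is little headroom to keep pushing frequency as a substitute for density scaling.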
To evidence my point about this trend getting even worse after 2030: EUV lithography was actively being pursued for decades before it entered high-volume use around 2019. My understanding is that we don’t have anything that significant at the level of maturity that EUV was at in the 1990s. Consider my epistemic status on this point fairly weak, though.