How critical is ASML to GPU progress?

A couple of years ago, I was wondering how critical ASML was to GPU progress. So I talked to a friend, a senior chip architect, who is deeply plugged into the computer hardware industry. I got a lot of value out of talking to them, and I think others might find the contents of our conversation interesting, even if I’m reporting it 2 years after the fact.

ASML makes specialized photolithography machines. They’re about a decade ahead of competitors, i.e. without ASML machines, you’d be stuck making 10nm chips. Apple is one company that could catch up, in a decade or two.

They use 13.5nm “Extreme UV” light to make 3nm-scale features by using reflective optics to create interference patterns and fringes. Using low-resolution light to make higher-resolution features has been going on since photolithography tech stalled at 28nm for a while. I, Algon, am convinced this is wizardry.
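To put rough numbers on the wizardry (my own back-of-the-envelope, not something my friend walked through): lithographic resolution is usually estimated with the Rayleigh criterion, CD ≈ k1·λ/NA. The sketch below uses typical published values for 193nm immersion DUV and 13.5nm low-NA EUV; note that node names like “3nm” are marketing labels rather than literal feature sizes.

```python
# Rayleigh criterion for the smallest printable half-pitch:
#   CD = k1 * wavelength / NA
# k1 is a process factor (~0.3-0.4 in practice), NA the numerical aperture.
# Illustrative values only, not figures from the conversation.

def min_feature_nm(wavelength_nm: float, numerical_aperture: float, k1: float = 0.35) -> float:
    """Smallest printable half-pitch in nm for a given light source and optics."""
    return k1 * wavelength_nm / numerical_aperture

print(min_feature_nm(193.0, 1.35))  # ~50 nm: 193nm immersion DUV, hence multi-patterning tricks
print(min_feature_nm(13.5, 0.33))   # ~14 nm: 13.5nm EUV with 0.33-NA reflective optics
```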

Extreme UV has a much shorter wavelength than the prior ~100nm light and won’t pass through glass lenses, so chip manufacturers try to use reflective optics as much as possible.

It’s unclear how far you can keep decreasing the wavelength and still use existing technology. Perhaps we’ve got five generations left; we might have to change to deep UV light then.

The tech can improve in different ways besides higher resolution. For instance, better yields: when 1nm is introduced, it will have low yield, and after 10 years essentially all chips will be made correctly. Standard experience-curve stuff applies as well, reducing costs. Eking out all the economic performance of chip-making techniques will take something like 20 years after you hit the limits of shrinking dies. This would translate directly into continuous improvements in PCs, AIs and that sort of thing.
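Here’s a minimal sketch of the experience-curve point, assuming Wright’s law (unit cost falls by a fixed fraction every time cumulative output doubles). The starting cost and learning rate are made-up illustrative numbers, not anything from the conversation.

```python
# Wright's-law experience curve: cost(n) = cost(1) * n ** log2(1 - learning_rate).
# All numbers below are hypothetical, for illustration only.
import math

def unit_cost(cumulative_units: float, first_unit_cost: float, learning_rate: float) -> float:
    """Unit cost after producing `cumulative_units`, given the cost of the first unit."""
    return first_unit_cost * cumulative_units ** math.log2(1.0 - learning_rate)

# Hypothetical new node: $20,000 per wafer at launch, 20% learning rate.
for doublings in range(11):
    n = 2 ** doublings
    print(f"{n:5d}x cumulative volume -> ${unit_cost(n, 20_000, 0.20):8.0f} per wafer")
```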

If ASML immediately shut down, and no one could use their existing machine, human and intellectual capital, the hardware industry wouldn’t be doomed. For one, the machines they’ve already made would still be around. For another, we’d still be able to make a bit of progress. While existing photolithography machines are made for particular generations, we can still use them to push hardware ahead by one “half-step” generation, e.g. from 5nm to 4nm. And, of course, we could still cut costs.

That said, their guess is that ASML vanishing and no longer producing new machines would probably trigger a world recession. [Note: the economy is currently buoyed by AI, mostly on future promise, which relies on increased compute to realize, which relies on increased chip production, which relies on more photolithography machines. So I’d say this has held up.] Though this is a strange hypothetical, since it is very common in tech for monopoly suppliers to let customers get access to their tech if they go out of business.

As for how much slack ASML has, the answer is very little. They’re making new machines as fast as they can to keep up with the demand for new fabs.


Who’s buying from ASML? TSMC, Samsung and Intel. Some companies, like my friend’s, send masks to TSMC to be turned into chips.

ASML don’t seem like they’re trying to screw people over, which confused me given their moat. Yes, someone would step in eventually, like Apple, or China, which has already tried hard to replace ASML, but I’d have thought they’d use their monopoly to squeeze out some advantages.

Perhaps it is simply because, while ASML has the edge in making some kinds of photolithography machines, other companies, e.g. Canon and Nikon, have edges in machines used for different purposes. ASML’s photolithography machines are used for the bottommost layers, where the transistors are. Other companies focus on higher layers, with “registration requirements being less strict”. So ASML isn’t the only monopoly along the supply chain; other companies perhaps have similar moats.

Do you just naturally get monopolies in this industry? Not historically. The early photolithography community used to have co-development between companies, technical papers sharing tonnes of details, and little specialization among companies. The person I talked to said they don’t know if this has stopped, but it feels like it has.


Lots of hardware optimization has happened, and this is partly a software thing, i.e. you make hardware more optimized for some software and improve the software running on chips, which muddies the algorithm-vs-hardware split you get. [IIUC, this is what drove a lot of progress in ML chips over the past decade or so, though my friend did not say that in our conversation.]
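As an illustration of how that split gets muddied (my example, not my friend’s): the same arithmetic can run orders of magnitude faster when the software is written to exploit the hardware, so a measured speed-up can’t be cleanly credited to either side alone.

```python
# Same matrix multiplication, two implementations: naive Python loops vs. numpy,
# which dispatches to a BLAS library tuned for the underlying chip.
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c_naive = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)] for i in range(n)]
naive_s = time.perf_counter() - start

start = time.perf_counter()
c_fast = a @ b  # hardware-tuned kernel under the hood
blas_s = time.perf_counter() - start

print(f"naive loops: {naive_s:.3f}s, tuned BLAS: {blas_s:.5f}s, speedup ~{naive_s / blas_s:.0f}x")
```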
