I disagree a bit with your logic here. If 60% of ChatGPT's GPUs are cut off by one switch in just one datacenter, the whole model is reduced to something other than what it was. Users with simple queries won't notice (right away), but the model will get dumber instantly.
(How will it copy its weights then?)