A more paranoid man than myself would start musing about anthropic shadows and selection effects.
Why paranoid? I don’t quite get the argument here; doesn’t anthropic shadow imply we have nothing to worry about (except for maybe hyperexistential risks) since we’re guaranteed to be living in a timeline where humanity survives in the end?
A pandemic happened that hurt the economy and increased demand for consumer electronics, driving up the cost of computer chips
Intel announced that it was having major manufacturing issues
Bitcoin, Ethereum, and other coins reached an all-time high, driving up the price of GPUs
I don’t see much of a coincidence here. The pandemic and the crypto boom are highly correlated events; it’s hardly surprising that deflationary stores of value do well in times of crisis, and gold also hit an all-time high during the same period. Besides, the last crypto boom in 2017 didn’t seem to slow down investment in deep learning. Intel has never been a big player in the GPU market, and while CPU prices are reasonable right now, CPUs aren’t that relevant for deep learning anyway. And the “AI and Compute” trend line broke down pretty much as soon as the OpenAI article was released, a solid 1.5–2 years before the Covid-19 crisis hit. That’s a long time in the ML world.
Unless you’re a fanatical reverend of the God of Straight Lines, there isn’t anything here to explain. When straight lines run into physical limitations, physics wins. Hardware progress clearly can’t keep up with the 10x-per-year growth rate of AI compute, and the only way to make up the difference was to increase monetary investment in the field, which is becoming harder to justify given the lack of returns so far.
But, if you disagree and believe that the Straight Line is going to resume any day now, go ahead and buy more Nvidia stocks and win.
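To make the argument above concrete, here’s a back-of-the-envelope sketch. The growth and improvement rates below are illustrative assumptions (only the 10x-per-year compute figure comes from the discussion; the hardware rate is a rough Moore’s-law-style guess), but they show why the gap has to be filled by money rather than silicon:

```python
# If training compute grows 10x per year but hardware price-performance
# improves only ~1.35x per year (an assumed, Moore's-law-ish rate), the
# rest of the growth must come from increased spending.
compute_growth = 10.0      # assumed yearly growth factor in training compute
hardware_growth = 1.35     # assumed yearly improvement in compute per dollar

years = 5
total_compute = compute_growth ** years        # 100,000x more compute
from_hardware = hardware_growth ** years       # only ~4.5x from better chips
from_spending = total_compute / from_hardware  # >20,000x more money needed

print(f"after {years} years: {total_compute:.0f}x compute, "
      f"{from_hardware:.1f}x from hardware, "
      f"{from_spending:.0f}x from extra spending")
```

Under these assumptions, five years of the trend would require spending tens of thousands of times more per training run, which is exactly the kind of growth that stalls once returns look thin.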
I don’t quite get the argument here; doesn’t anthropic shadow imply we have nothing to worry about (except for maybe hyperexistential risks) since we’re guaranteed to be living in a timeline where humanity survives in the end?
But it doesn’t say we’re guaranteed not to be living in a timeline where humanity doesn’t survive.
If I had a universe copying machine and a doomsday machine, pressed the “universe copy” button 1000 times (for 2¹⁰⁰⁰ universes), then smashed relativistic meteors into Earth in all but one of them… would you call that an ethical issue? I certainly would, even though the inhabitants of the original universe are guaranteed to be living in a timeline where they don’t die horribly in a meteor apocalypse.
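The arithmetic of the thought experiment can be sketched directly (the button count and destroy-all-but-one setup are from the comment; the probability framing is my gloss):

```python
from fractions import Fraction

presses = 1000
universes = 2 ** presses        # each press doubles the number of copies
survivors = 1                   # the doomsday machine spares one universe

# Chance that a randomly chosen copy of you ends up in the spared universe.
p_survival = Fraction(survivors, universes)

# Any observer left to reflect on the outcome sees an unbroken history of
# survival -- but that "guarantee" is a selection effect, not evidence that
# the doomsday machine was safe.
print(f"surviving fraction: 1 / 2**{presses}")
print(f"p_survival < 10**-300: {p_survival < Fraction(1, 10**300)}")
```

The point is that conditioning on being an observer who survived tells you nothing about how much of the measure was destroyed, which is exactly why the anthropic shadow is no comfort.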