Undergraduate student in math and computer science at McGill and Mila. Optimizing for the flourishing of all sentient life regardless of substrate. Aspiring philosopher.
george_adams
“Gemini 3 estimates that there are 15-20k core ML academics and 100-150k supporting PhD students and Postdocs worldwide.”
In my opinion, this seems way too high. What logic or assumptions did it use?
Land and buildings: 16.5B
IT assets: 13.6B
Where are the GPUs (mostly TPUs, in Google’s case)? I figured this line item would be larger given the capex Google, MSFT, etc. are spending on building enormous clusters.
I agree with most of the individual arguments you make, but this post still gives me “Feynman vibes.” I generally think there should be a stronger prior on things staying the same for longer. I also think the distribution of how AGI goes is so absurd that it’s hard to reason about things like expectations for humans. (You acknowledge this in the post.)
I agree with most things said but not with the conclusion. There is a massive chunk of the human (typically male) psyche that will risk death/major consequences in exchange for increased social status. Think of basically any war. A specific example is kamikaze pilots in WWII, who flew suicide missions for the good of the nation. The pilots were operating within a value system that rewarded individual sacrifice for the greater mission. The creators of AGI will have increasing social status (and competition, thanks to Moloch) up until the point of AGI ruin.
(Also, a minor point: some accelerationists are proudly anti-speciesist and don’t care about the wellbeing of humans.)
Imo, this analogy breaks down if you take a holistic evolutionary lens. The amount of time you spent learning chess is minuscule compared to the amount of time evolution spent optimizing to create the general learning machine that is your brain. It’s not obvious how to cleanly analogize the current frontier model training recipe to evolution. But I claim that your brain has certain inductive biases at birth that make it possible to eventually learn to do thing X, and directly training on thing X wouldn’t have worked for evolution because the general model was just too weak.