some concrete examples
“agi happens almost certainly within the next few decades” → maybe ai progress just kind of plateaus for a few decades. it turns out gpqa/codeforces etc are like chess: we only think they’re hard because the humans who can do them are smart, but they aren’t agi-complete. ai gets used in a bunch of places in the economy, but it’s more like smartphones or something. in this world i should be taking normie life advice a lot more seriously.
“agi doesn’t happen in the next 2 years” → maybe scaling current techniques really is all you need, and gpqa/codeforces do just measure intelligence. within like half a year, ML researchers start being way more productive because lots of their job is automated. if i use current/near-future ai agents for my research, i will actually just be more productive.
“alignment is hard” → maybe basic techniques are all you need, because the natural abstractions hypothesis is true. or maybe the red car / blue car argument for why useful models are also competent at bad things is just wrong, because generalization can be made to suck. maybe all the capabilities people are just right and it’s not reckless to be building agi so fast.