I’m writing a book about epistemology. It’s about The Problem of the Criterion, why it’s important, and what it has to tell us about how we approach knowing the truth.
I’ve also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Much of this depends on what kind of AI we get and how long conditions stay relatively the same once that AI exists.
The unstated assumptions here seem to me to be something like:
- AI provides relatively fixed levels of automation, getting gradually better over time
- AI doesn't accelerate us towards some kind of singularity, so society has time to adapt to tiering
I’m pretty suspicious of the second assumption, as I think just the opposite is more likely. But, given the assumptions Acemoglu seems to be making, a split into a two-tiered society seems a likely outcome to me.