My government name is Mack Gallagher. Crocker’s Rules. I am an “underfunded” “alignment” “researcher”. DM me if you’d like to fund my posts, or my project.
I post some of my less-varnished opinions on my Substack, and my personal blog.
If you like arguing with me on LessWrong, at present I’m basically free round the clock to continue interesting arguments in my Discord.
This trilemma might be a good way to force people stuck in a traditional-economics frame to actually think about strong AI. I wouldn’t know; I honestly haven’t spent much time talking to such people.
Principle [A] doesn’t just say AIs won’t run out of productive things to do; it makes a prediction about how that fact will show up in market prices. It’s true that a superintelligent AI won’t run out of productive things to do, but it will also change the situation so that prices in the existing economy are no longer affected by this the way they’re affected by “human participants in the market won’t run out of productive things to do”. Maybe there will be some kind of legible market internal to the AI’s thinking, or [less likely, but conceivable] a multi-ASI equilibrium with mutually legible market prices. But what reason would a strongly superintelligent AI have to keep trading with humans for very long, in a market that puts human-legible prices on things? Even in Hanson’s Age of Em, humans who choose to remain in their meatsuits are entirely frozen out of the real [emulation] economy very quickly in subjective time, and to me this is an obvious consequence of every other agent in the market simply thinking and working far faster than you.