First off, I’m guessing you’re familiar with the economic arguments in “The Sun is Big”.
You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A), but I was not endorsing it as actually being true. Indeed, the very next sentence talks about how one can make a parallel argument for the exact opposite conclusion.
I just changed the wording from “implies” to “would imply”. Hope that helps.
If we’re talking about prices for the same chips, [rate of] electricity, teleoperated robots, etc., of course they’ll go down, as the AGI will have invented better versions.
Well, costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument. That’s my whole point. Normally, markets reach equilibrium where prices ≈ costs to producers ≈ value to consumers, with allowance for profit margin and so on. But this system has no such equilibrium! The value of producing AGI will remain much higher than the cost, all the way to Dyson spheres etc. So it’s at least not immediately obvious what the price will be at any given time.
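To make that concrete, here is a toy sketch (all numbers invented, purely illustrative): for an ordinary good, the marginal value of one more unit falls as quantity grows, so it eventually meets the marginal cost curve, and that crossing is what pins the price near cost. If the marginal value of one more AGI stays far above its marginal cost at every quantity anyone could plausibly produce, there is no crossing, and the usual “price ≈ cost ≈ value” condition doesn’t pick out a price.

```python
# Toy illustration (invented numbers): when does "price ~ cost ~ value" pin down a price?

def equilibrium_quantity(marginal_value, marginal_cost, max_quantity):
    """Return the first quantity where marginal value falls to marginal cost
    (where a competitive market would stop expanding output), or None if
    value stays above cost over the whole feasible range."""
    for q in range(1, max_quantity + 1):
        if marginal_value(q) <= marginal_cost(q):
            return q
    return None

# Ordinary good ("hammers"): the value of the q-th unit falls off quickly.
hammer_value = lambda q: 100 / q   # diminishing marginal value
hammer_cost  = lambda q: 5         # flat marginal cost

# AGI-like good (stylized): each extra instance keeps finding new productive
# uses, so marginal value stays far above marginal cost throughout.
agi_value = lambda q: 1_000_000
agi_cost  = lambda q: 5

print(equilibrium_quantity(hammer_value, hammer_cost, 10**6))  # 20 -> price settles near cost
print(equilibrium_quantity(agi_value, agi_cost, 10**6))        # None -> no crossing; price not pinned down
```

This isn’t a model of anything real; it just contrasts the two cases: a crossing that pins the price, versus no crossing at all.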
This is really just a false thing to believe about AGI, from us humans’ perspective. It amounts to a new world political order, unless you specifically build it to prevent all other future creations of humanity from becoming politically interventionist superintelligences, while also not being politically interventionist itself.
I already included caveats in two different places that I was assuming no AGI takeover etc., and that I find this assumption highly dubious, and that I think this whole discussion is therefore kinda moot. I mean, I could add yet a third caveat, but that seems excessive :-P
You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A),
My position is that if you accept certain arguments made about really smart AIs in “The Sun is Big”, Principle A, by itself, ceases to make sense in this context.
costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument.
Assuming constant demand for a simple input, sure, you can predict the price of that input based on cost alone. The extent to which “the price of compute will go down” is rolled into how much “the cost of compute will go down”. But IIUC, you’re more interested in predicting the price of less abstract assets. Innovation in chip technology is more than just making more and more of the same product at a lower cost. [“There is no ‘lump of chip’.”] A 2024 chip is not just [roughly] 2^10 2004 chips—it has logistical advantages, if nothing else. And those aren’t accounted for if you insist on predicting chip price using only compute cost and value trendlines. Similar arguments hold for all other classes of material technological assets whose value increases in response to innovation.
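(For reference, the 2^10 is just doubling-time arithmetic, assuming compute per chip doubles roughly every two years over 2004–2024; the exact cadence is an assumption, not the point.)

```python
# Back-of-envelope behind the "2^10" figure above.
# Assumption (illustrative only): compute per chip doubles roughly every 2 years.
doubling_time_years = 2
elapsed_years = 2024 - 2004
doublings = elapsed_years / doubling_time_years  # 10.0
compute_ratio = 2 ** doublings                   # ~1024x the raw compute of a 2004 chip
print(compute_ratio)

# Even granting that ratio, a 2024 chip is not a drop-in substitute for ~1024
# 2004 chips (power, floor space, interconnect, latency), which is why chip
# price can't be read off compute-cost trendlines alone.
```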
“AI will [roughly] amount to X”, for any X, including “high-skilled entrepreneurial human labor”, is a positive claim, not a default background assumption of discourse, and in my reckoning, that particular one is unjustified.
I’m still pretty sure that you think I believe things that I don’t believe. I’m trying to narrow down what it is and how you got that impression. I just made a number of changes to the wording, but it’s possible that I’m still missing the mark.
My position is that if you accept certain arguments made about really smart AIs in “The Sun is Big”, Principle A, by itself, ceases to make sense in this context.
When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”, and put in a link to a Wikipedia article with more details. You see what I mean? I wasn’t endorsing it as always and forever true. Quite the contrary: The punchline of the whole article is: “here are three traditional economic principles, but at least one will need to be discarded post-AGI.”
“AI will [roughly] amount to X”, for any X, including “high-skilled entrepreneurial human labor”, is a positive claim, not a default background assumption of discourse, and in my reckoning, that particular one is unjustified.
I did some rewriting of this part, any chance that helps?
When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”,
I don’t think you think Principle [A] must hold, but I do think you think it’s in question. I’m saying that, rather than taking this very broad general principle of historical economic good sense, and giving very broad arguments for why it might or might not hold post-AGI, we can start reasoning about superintelligent manufacturing [including R&D] and the effects it will have, more granularly, out the gates.
Like, with respect to Principle [C], my perspective is just “well of course the historical precedent against extremely fast economic growth doesn’t hold after the Singularity, that’s more or less what the Singularity is”.
Edit: Your rewrite of Principle [B] did make it clear to me that you’re considering timelines that are at least somewhat bad for humans; thank you for the clarification. [Of course I happen to think we can also discard “AI will be like a manufactured good, in terms of its effects on future prices”, out the gates, but it’s way clearer to me now that the trilemma is doing work on idea-space.]
I think you’re arguing that Principle (A) has nothing to teach us about AGI, and shouldn’t even be brought up in an AGI context except to be immediately refuted. And I think you’re wrong.
Principle (A) applied to AGIs says: The universe won’t run out of productive things for AGIs to do. In this respect, AGIs are different from, say, hammers. If a trillion hammers magically appeared in my town, then we would just have to dispose of them somehow. That’s way more hammers than anyone wants. There’s nothing to be done with them. Their market value would asymptote to zero.
AGIs will not be like that. It’s a big world. No matter how many AGIs there are, they can keep finding and inventing new opportunities. If they outgrow the planet, they can start in on Dyson spheres. The idea that AGIs will simply run out of things to do after a short time and then stop self-reproducing—the way I would turn off a hammer machine after the first trillion hammers even if its operating costs were zero—is wrong.
So yes, I think this is a valid lesson that we can take from Principle (A) and apply to AGIs, in order to extract an important insight. This is an insight that not everyone gets, not even (actually, especially not) most professional economists, because most professional economists are trained to lump in AGIs with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”.
So, kudos to Principle (A). Do you agree?
This trilemma might be a good way to force people-stuck-in-a-frame-of-traditional-economics to actually think about strong AI. I wouldn’t know; I honestly haven’t spent a ton of time talking to such people.
Principle [A] doesn’t just say AIs won’t run out of productive things to do; it makes a prediction about how this will affect prices in a market. It’s true that superintelligent AI won’t run out of productive things to do, but it will also change the situation such that the prices in the existing economy won’t be affected by this in the normal way prices are affected by “human participants in the market won’t run out of productive things to do”. Maybe there will be some kind of legible market internal to the AI’s thinking, or [less likely, but conceivable] a multi-ASI equilibrium with mutually legible market prices. But what reason would a strongly superintelligent AI have to continue trading with humans very long, in a market that puts human-legible prices on things? Even in Hanson’s Age of Em, humans who choose to remain in their meatsuits are entirely frozen out of the real [emulation] economy, very quickly in subjective time, and to me this is an obvious consequence of every agent in the market simply thinking and working way faster than you.