Yes, I’m saying that the economic universe is as deterministic as the physical universe. That doesn’t mean that you can’t make bad decisions, stupid decisions, or decisions that you regret. It just means that your decisions have to make sense according to the laws of economics given all the constraints on the system. It’s important because economists don’t think that’s true. They think there are outstanding Pareto improvements. They think (thought—now inefficiency is jumbled up with a couple of other meanings and it’s hard to say exactly what they mean) externalities are explained by inefficiency, not some constraint. You shouldn’t have to know specifically about transaction costs to know that inefficiency is a fake explanation, and that if something doesn’t make sense in the world, you’re either doing the math wrong or missing a piece of information. If you call something Pareto-inefficient, you’re saying that something exists that, given the constraints on the system, the laws of economics would not allow to exist.
If I miss a $20 for some reason that seems silly like wanting to hear a piece of music or because I’m counting air conditioners, then that’s just as valid an explanation as me missing the $20 because the $20 doesn’t exist, or because I am physically paralyzed and cannot pick up the money, or any other reason. Something like “I just had to hear that music” or “I was busy counting air conditioners and didn’t see it” sounds trivial to the human brain, but the universe doesn’t bother to check if you think something’s silly before executing it.
Armen Alchian is well known to economists. I said he was powerful in the sense that he was very good at economics, not that he actually wielded power, in case that wasn’t clear. He will always be at the top of the list of economists who should have won the Nobel Prize and didn’t.
Would you also say that a rock that tumbles down a mountain or a wave crashing on the shore is behaving efficiently? If yes, then the concept is meaningless: everything that happens according to the laws of nature is efficient. If on the other hand you say that only agents behave efficiently, then I would reply that humans are not always agent-like entities, and sometimes behave more similarly to rocks than to true agents.
My example of missing the $20 was supposed to illustrate this, but it seems not to have worked, so I will try once more with an AI program. Imagine an AI programmed to trade in the stock market, aggregating information about prices and executing trades in order to maximize some coded measure of profit. Presumably it is behaving efficiently if anything is. Now imagine that a quantum fluctuation in the hardware causes it to behave (for a second, until the error is corrected) against the coded way and throw away some profit. Does its behavior “make sense according to the laws of economics given all the constraints on the system”?
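To make the thought experiment concrete, here is a toy sketch (purely my own illustration, not any real trading system): an agent that normally picks the profit-maximizing trade, with a `bit_flip` flag standing in for the transient hardware fault that makes it do the opposite.

```python
def best_trade(trades):
    """Pick the trade with the highest expected profit."""
    return max(trades, key=lambda t: t[1])

def trader_step(trades, bit_flip=False):
    """One decision step of a toy trading agent.

    Normally it returns the profit-maximizing trade. If bit_flip is
    True (standing in for a momentary hardware fault), it returns the
    worst trade instead, "leaving money on the table" until the error
    is corrected on the next step.
    """
    choice = best_trade(trades)
    if bit_flip:
        choice = min(trades, key=lambda t: t[1])
    return choice

# Hypothetical trades as (description, expected profit) pairs.
trades = [("buy AAPL", 5.0), ("sell TSLA", -2.0), ("hold", 0.0)]
print(trader_step(trades))                 # normal operation
print(trader_step(trades, bit_flip=True))  # during the fault
```

The question is whether the faulted step still counts as “efficient” behavior: the same machine, the same constraints, but for that one tick it is no more an agent than a falling rock.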
I contend that if you say the AI behaved efficiently when it left some “money on the table”, you might as well say a falling rock is being efficient; you are using the word to describe something that at that moment was not behaving agent-like at all. And if you agree that the AI did not behave efficiently at that moment, then I will contend that humans have the same kind of problems all the time.
The only way you can get around this is by saying that once you decide to count a human or AI as an economic agent, you will describe its actions so that they are all efficient by definition, even at the times they are not very agent-like. You are of course allowed to do this, and it might even make your model of economics simpler and more elegant (though hardly more predictive!). But you cannot then say grand things like “The laws of economics do not allow inefficiencies”, “The economic universe is as deterministic as the physical universe”, etc., and think they are deep and meaningful. They are only true because of the definitions you have chosen to use.
Look, Pareto inefficiency requires a violation of the laws of economics. That’s what it means. It is an economic agent forgoing a benefit for no reason—not “no reason” according to the agent, but no reason according to the universe. You are positing explanations for why the Pareto improvement doesn’t happen, so it’s not inefficiency. It doesn’t matter if the reason seems small and trivial to you. The universe doesn’t run things by you before it makes them happen.
If you don’t have economic agents, you’re not talking about an economic system. There is no inefficiency or efficiency there. If you want to say that the AI stopped being an agent for a second, go for it. It doesn’t affect my argument.
Yes, whatever is, is efficient. It isn’t very predictive because it predicts everything that exists in the universe. Just like the laws of physics. But efficiency does predict that you won’t find anything in the universe that violates the laws of economics. This makes a definitive prediction that there will always be an explanation for an economic agent forgoing a benefit. It does not predict that Alejandro1 will find that explanation sufficiently significant.