Would you also say that a rock that tumbles down a mountain, or a wave crashing on the shore, is behaving efficiently? If yes, then the concept is meaningless: everything that happens according to the laws of nature is efficient. If, on the other hand, you say that only agents behave efficiently, then I would reply that humans are not always agent-like entities, and sometimes behave more like rocks than like true agents.
My example of missing the $20 was supposed to illustrate this, but it seems not to have worked, so I will try once more with an AI program. Imagine an AI programmed to trade in the stock market, aggregating information about prices and executing trades in order to maximize some coded measure of profit. Presumably it is behaving efficiently if anything is. Now imagine that a quantum fluctuation in the hardware causes it to behave (for a second, until the error is corrected) against its coded rule and throw away some profit. Does its behavior “make sense according to the laws of economics given all the constraints on the system”?
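The thought experiment can be sketched in a few lines of toy code (the trade names, numbers, and fault model here are my own inventions, purely for illustration):

```python
# A toy trading agent. Its coded rule is to maximize expected profit.
# We model the transient hardware fault as (hypothetically) a flipped
# comparison: for one tick the agent minimizes instead of maximizing.

trades = [("buy AAPL", 120.0), ("buy MSFT", 95.0), ("hold", 0.0)]

def trade(trades, bit_flipped=False):
    """Return the chosen (action, expected_profit) pair."""
    chooser = min if bit_flipped else max  # the fault inverts the objective
    return chooser(trades, key=lambda t: t[1])

# Normal operation: the coded objective is satisfied.
normal = trade(trades)                     # ("buy AAPL", 120.0)

# During the fault: the agent leaves money on the table.
faulty = trade(trades, bit_flipped=True)   # ("hold", 0.0)
forgone = normal[1] - faulty[1]
print(f"fault tick: chose {faulty[0]!r}, forgoing ${forgone:.2f} of profit")
```

The question in dispute is whether the fault-tick choice still counts as "efficient given all the constraints on the system" (the hardware fault being one such constraint), or whether the agent momentarily stopped being an agent at all.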
I contend that if you say the AI behaved efficiently when it left some “money on the table”, you might as well say a falling rock is being efficient: you are using the word to describe something that, at that moment, was not behaving agent-like at all. And if you agree that the AI did not behave efficiently in that moment, then I will contend that humans have the same kind of problem all the time.
The only way you can get around this is by saying that once you decide to count a human or AI as an economic agent, you will describe its actions so that they are all efficient by definition, even at the times they are not very agent-like. You are of course allowed to do this, and it might even make your model of economics simpler and more elegant (though hardly more predictive!). But you cannot then say grand things like “The laws of economics do not allow inefficiencies” or “The economic universe is as deterministic as the physical universe” and think they are deep and meaningful. They are only true because of the definitions you have chosen to use.
Look, Pareto inefficiency requires a violation of the laws of economics. That’s what it means. It is an economic agent forgoing a benefit for no reason—not “no reason” according to the agent, but no reason according to the universe. You are positing explanations for why the Pareto improvement doesn’t happen, so it’s not inefficiency. It doesn’t matter if the reason seems small and trivial to you. The universe doesn’t run things by you before it makes them happen.
If you don’t have economic agents, you’re not talking about an economic system. There is no inefficiency or efficiency there. If you want to say that the AI stopped being an agent for a second, go for it. It doesn’t affect my argument.
Yes, whatever is, is efficient. It isn’t very predictive because it predicts everything that exists in the universe. Just like the laws of physics. But efficiency does predict that you won’t find anything in the universe that violates the laws of economics. This makes a definitive prediction that there will always be an explanation for an economic agent forgoing a benefit. It does not predict that Alejandro1 will find that explanation sufficiently significant.