So we just have this starving man not eating the delicious food set out before him and this cannot be explained in any way. That’s Pareto inefficiency. And that’s nuts.
Of course you can explain anything. You can always say: “God did it.” Your statement: “A rational agent made an efficient choice” doesn’t provide any additional useful information.
If you allow any type of explanation you can’t do any economics based on numbers anymore. You can’t even say 2+2=4 because 2+2=4 is a statement that makes certain assumptions about your unit of measurement.
To do economics you have to make certain assumptions about the nature of economic transactions that allow you to use certain mathematical axioms.
Actors that don’t follow those assumptions get called inefficient and that’s useful.
If you allow any type of explanation you can’t do any economics based on numbers anymore.
Sure you can. It’s just that your numbers will be fuzzy and you’ll have to use statistics rather than pure math. You won’t be able to get much use out of mathematical axioms, but you’ll be able to build statistical models that include and, hopefully, quantify the uncertainty involved.
Sure you can. It’s just that your numbers will be fuzzy and you’ll have to use statistics rather than pure math. You won’t be able to get much use out of mathematical axioms, but you’ll be able to build statistical models that include and, hopefully, quantify the uncertainty involved.
No, most statistical models need assumptions about the data distribution. You don’t get very far without throwing assumptions in your model.
most statistical models need assumptions about the data distribution.
I don’t think that is true (though we can quibble about what constitutes “most”). Assumptions about data distributions generally allow you to make strong statements about best/optimal/efficient estimators, but those, while useful, are not really necessary for building models.
Most statistical models need assumptions about stability—basically that data from the past is relevant to making statements about the future—but that is a general requirement of almost any forecasting.
Let’s say we have a guy who finds utility in donuts. However, he is also afraid of the number 13.
So he prefers having 12 donuts to having 13 donuts just as he prefers having 14 donuts over having 12 donuts.
If you allow people with utility functions like that, you are going to have real problems saying anything worthwhile through the use of math, even if you do use statistics.
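For concreteness, a preference like that can be sketched as a non-monotonic utility function (the function and its numbers here are made up purely for illustration):

```python
def donut_utility(n: int) -> float:
    """Utility of owning n donuts: more is better, except 13 is feared."""
    fear_penalty = 5.0 if n == 13 else 0.0  # hypothetical cost of fearing 13
    return float(n) - fear_penalty

# He prefers 12 donuts to 13, yet also prefers 14 to 12:
assert donut_utility(12) > donut_utility(13)
assert donut_utility(14) > donut_utility(12)
```

Utility still rises with donut count everywhere except at 13, where it dips, so the function is discontinuous in its ordering but perfectly well-defined.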
If you allow people with utility functions like that, you are going to have real problems saying anything worthwhile through the use of math, even if you do use statistics.
First, I don’t think so.
Second, I much prefer imperfect models that are representative of the real world (where some guys are afraid of the number 13) to neat and sterile models of imaginary simulations.
Second, I much prefer imperfect models that are representative of the real world (where some guys are afraid of the number 13) to neat and sterile models of imaginary simulations.
Could you give examples of economic models that give useful insight while allowing utility functions like those?
I am not sure what you are asking for. First, any statistical model with an error term can handle occasional weird cases by sweeping them into the “error”. Second, discontinuous functions are not something outrageous or strange. Sure, an assumption of a monotonic utility function makes life much easier, but being easy is a tertiary, at best, goal of model building.
First, any statistical model with an error term can handle occasional weird cases by sweeping them into the “error”.
If I have a model of how data is distributed, I think that model contains assumptions.
Bayesians have their priors that go into models. Frequentists usually assume that the data follow a normal distribution (or some related distribution) plus an error term.
I don’t think there are models without in-built assumptions.
Sure, an assumption of a monotonic utility function makes life much easier, but being easy is a tertiary, at best, goal of model building.
Models exist to help us make sense of the world. Airplanes are still designed with Newtonian physics in mind because it’s a nice, easy model.
Only if you know how many weird cases there are.
Keeping your model as simple as possible while still being able to make good predictions is the name of the game.
I don’t think there are models without in-built assumptions.
A model is a simplified description of the world. It is a synonym of “map” (in the map vs territory sense).
Let’s say I sit by the window and count up the gender of people passing by. After some time I have X males, Y females, and Z undetermined. My model is that the probability of a recognizably-male person passing by my window is X / (X + Y + Z). It’s a trivial model, but it’s still a model, and I don’t see what in-built assumptions it comes with except for, as I mentioned before, the assumption of stability (aka that the past is relevant to the future).
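That window-count model in code (the counts below are invented for illustration):

```python
def p_male(males: int, females: int, undetermined: int) -> float:
    """Empirical frequency estimate: P(next passer-by is recognizably male)."""
    total = males + females + undetermined
    return males / total

# After an afternoon of counting, say, 40 males, 45 females, 15 undetermined:
print(p_male(40, 45, 15))  # 0.4
```

Nothing here assumes a distributional family; the only thing the estimate leans on is that tomorrow’s passers-by resemble today’s.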
Models exist to help us make sense of the world.
Some are. But others exist to make accurate forecasts and for them being “easy” is not a goal.