I am not sure what you are asking for. First, any statistical model with an error term can handle occasional weird cases by sweeping them into the “error”. Second, discontinuous functions are not something outrageous or strange. Sure, an assumption of a monotonic utility function makes life much easier, but being easy is, at best, a tertiary goal of model building.
First, any statistical model with an error term can handle occasional weird cases by sweeping them into the “error”.
If I have a model of how data is distributed, I think that model contains assumptions.
Bayesians have their priors that go into models. Frequentists usually assume that the data follow a normal distribution (or some related distribution) plus an error term.
I don’t think there are models without inbuilt assumptions (see the sketch below).
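To make that concrete, here is a minimal sketch of where those assumptions live, using toy data and made-up numbers (nothing here is anyone's actual model): an ordinary least-squares fit that bakes in the normal-error assumption, and a Bayesian update of the same slope that additionally feeds in a prior.

```python
import numpy as np

# Toy data: a straight line plus normally distributed noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

# Frequentist flavour: ordinary least squares.  The inbuilt assumption is that
# the residuals are independent draws from a normal distribution with constant
# variance -- that assumption is what justifies the OLS estimate.
slope, intercept = np.polyfit(x, y, deg=1)

# Bayesian flavour: the same normal-error likelihood, plus a prior on the slope.
# The prior (here a normal with assumed mean 0 and variance 10) is itself an
# assumption fed into the model; it is combined with the data by a conjugate update.
prior_mean, prior_var = 0.0, 10.0
xc = x - x.mean()
like_var = 1.0 / np.sum(xc**2)            # sampling variance of the OLS slope (noise sd = 1)
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (prior_mean / prior_var + slope / like_var)

print(f"OLS slope:       {slope:.3f}")
print(f"Posterior slope: {post_mean:.3f} (pulled toward the prior mean {prior_mean})")
```

With this much data the prior barely matters, but the point stands either way: both versions only produce numbers because assumptions (normal errors, and in the Bayesian case a prior) were built in.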
Sure, an assumption of a monotonic utility function makes life much easier, but being easy is, at best, a tertiary goal of model building.
Models exist to help us make sense of the world. Airplanes are still designed with Newtonian physics in mind because it’s a nice, easy model.
Only if you know how many weird cases there are.
Keeping your model as simple as possible while still being able to make good predictions is the name of the game.
I don’t think there are models without inbuilt assumptions
A model is a simplified description of the world. It is a synonym of “map” (in the map vs territory sense).
Let’s say I sit by the window and count up the gender of people passing by. After some time I have X males, Y females, and Z undetermined. My model is that the probability of a recognizably male person passing by my window is X / (X + Y + Z). It’s a trivial model, but it’s still a model, and I don’t see what in-built assumptions it comes with except for, as I mentioned before, the assumption of stability (aka that the past is relevant to the future).
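For what it’s worth, the window-counting model is small enough to write down in a few lines; the tallies below are made-up numbers, purely to show that the “model” is nothing more than an observed frequency:

```python
# Hypothetical tallies from sitting by the window (illustrative numbers only).
x_males, y_females, z_undetermined = 37, 41, 3

total = x_males + y_females + z_undetermined
p_male = x_males / total   # the whole model: observed frequency used as a probability

print(f"P(next passer-by is recognizably male) = {p_male:.2f}")
# The only assumption hiding here is the stability one mentioned above:
# future passers-by are drawn from the same process as the ones already counted.
```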
Models exist to help us make sense of the world.
Some are. But others exist to make accurate forecasts and for them being “easy” is not a goal.