By the same token, fish in an aquarium, or Braitenberg vehicles, are constantly making bets they don’t realize they’re making. Swim to this side: be first to the food, but spend energy getting there.
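Braitenberg's vehicles make the point concrete: the "bet" lives in the wiring, not in any representation the vehicle holds. A minimal sketch of a two-sensor vehicle with crossed excitatory wiring (vehicle 2b, which steers toward a stimulus); the inverse-square stimulus model and all constants are illustrative assumptions, not Braitenberg's own parameters:

```python
import math

def sensor_reading(sx, sy, light_x, light_y):
    # Stimulus intensity falls off with squared distance (a modeling choice).
    d2 = (sx - light_x) ** 2 + (sy - light_y) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, base=0.1, turn_gain=8.0, dt=0.1):
    """One update of a two-sensor, two-wheel Braitenberg vehicle with
    crossed excitatory wiring, which steers it toward the stimulus."""
    # Sensors sit at the front-left and front-right of the body.
    readings = []
    for off in (0.3, -0.3):
        sx = x + math.cos(heading + off)
        sy = y + math.sin(heading + off)
        readings.append(sensor_reading(sx, sy, *light))
    s_left, s_right = readings
    # Crossed wiring: the left sensor drives the right wheel and vice versa.
    v_left = base + s_right
    v_right = base + s_left
    # Differential-drive kinematics (wheelbase folded into turn_gain).
    v = (v_left + v_right) / 2.0
    omega = turn_gain * (v_right - v_left)
    return (x + v * math.cos(heading) * dt,
            y + v * math.sin(heading) * dt,
            heading + omega * dt)

# The vehicle closes in on the light without representing it as a goal,
# let alone as a bet; the wager is implicit in the wiring.
state = (0.0, 0.0, 0.0)
light = (5.0, 3.0)
closest = math.hypot(light[0], light[1])
for _ in range(2000):
    state = step(*state, light)
    closest = min(closest, math.hypot(state[0] - light[0], state[1] - light[1]))
```

Nothing in this loop computes odds or payoffs, yet an observer could model the trajectory as a sequence of wagers on where the stimulus is.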
Your perspective is valid, but if the agents refuse to see, or are incapable of seeing, the situation from a betting perspective, you have to ask how useful it is (not necessarily thinking in terms of expected utility, best case, worst case, et cetera, but in the “betting” aspect of it). It may be a good intuition pump, as long as we keep in mind that people don’t work that way.
Do fish think in terms of expected value? Of course not. Evolution makes bets, and it can’t think at all. Refactored Agency is a valuable tool: anything that can be usefully modeled as a goal-seeking process with uncertain knowledge can also be usefully modeled as making bets. How useful is it to view arbitrary things through different models? Well, Will Newsome makes a practice of it. So it’s probably good for generating insights, but caveat emptor.
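The reframing is cheap to write down: any choice with an uncertain payoff and a certain cost is already a bet, whether or not the agent represents it that way. A minimal sketch of the fish's dash-for-the-food decision; the probability, payoff, and cost are made-up illustrative numbers, not measurements:

```python
def bet_value(p_win, payoff, stake):
    """Expected value of a bet: win `payoff` with probability `p_win`,
    pay `stake` regardless of the outcome."""
    return p_win * payoff - stake

# The fish's dilemma recast as a wager: dash for the food
# (uncertain payoff, certain energy cost) or stay put.
dash = bet_value(p_win=0.4, payoff=10.0, stake=1.5)  # 0.4 * 10.0 - 1.5 = 2.5
stay = 0.0
choice = "dash" if dash > stay else "stay"
```

Nothing in the fish runs this computation; the expected-value frame belongs to the modeler, not the agent.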
The more completely the models describe the underlying phenomenon, the more isomorphic the models should be (in their Occamian formulation), until eventually we’re only exchanging variable names.
Yes; to check your visual acuity, you block off one eye, then open that one and block the other. To check (and improve) your conceptual acuity, you block off everything that isn’t an agent, then you block off everything that isn’t an algorithm, then you block off everything that isn’t an institution, etc.
Unless you can hypercompute, in which case that’s probably not a useful heuristic.
This is off topic, but I’m really disappointed that Braitenberg vehicles didn’t turn out to be wheeled fish tanks that let the fish explore your house.