What is Rational?

Eliezer defines rationality as follows:

Instrumental rationality: systematically achieving your values.

....

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

Extrapolating from the above definition, we can conclude that an act is rational if it causes you to achieve your goals (to win). The issue with this definition is that we cannot evaluate the rationality of an act until after observing its consequences; we cannot determine whether an act is rational without first carrying it out. This is not a very useful definition, as one may want to use the rationality of an act as a guide when deciding how to act.


Another definition of rationality is the one used in AI when talking about rational agents (this formulation is from Russell and Norvig’s Artificial Intelligence: A Modern Approach):

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

A percept sequence is basically the sequence of all perceptions the agent has had from inception to the moment of action. The above definition is useful, but I don’t think it is without issue: what is rational for two different agents A and B, with the exact same goals, in the exact same circumstances, can differ. Suppose A intends to cross a road; A checks both sides of the road, ensures it’s clear, and then attempts to cross. However, a meteorite strikes at that exact moment, and A is killed. A is not irrational for attempting to cross the road, given that they did not know of the meteorite (and thus could not have accounted for it). Suppose B has more knowledge than A, and thus knows that there is a substantial delay between meteor strikes in the vicinity; B crosses after A and crosses safely. We cannot reasonably say B is more rational than A.
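
To make the definition concrete, here is a minimal sketch in Python of an agent that selects the action maximizing expected performance under its own beliefs. The road-crossing actions, probabilities, and payoffs are hypothetical illustrations, not part of the original definition. On A’s model, which contains no meteorites, crossing has the higher expected performance, so A’s choice comes out rational:

```python
# A minimal sketch of the rational-agent definition above: pick the action
# whose *expected* performance is highest under the agent's own beliefs.
# The actions, beliefs, and payoffs are hypothetical illustrations.

def expected_performance(action, belief, performance):
    """Expected performance of `action` under the agent's beliefs.

    `belief(action)` returns (outcome, probability) pairs encoding what the
    percept sequence plus built-in knowledge imply about the world.
    """
    return sum(p * performance(outcome) for outcome, p in belief(action))

def rational_choice(actions, belief, performance):
    """Select the action that maximizes expected performance.

    The agent is judged on its evidence, not on hindsight: an unforeseeable
    meteorite does not make A's choice irrational under this definition.
    """
    return max(actions, key=lambda a: expected_performance(a, belief, performance))

# Agent A's model of crossing the road contains cars but no meteorites.
def belief_A(action):
    if action == "cross":
        return [("reach other side", 0.99), ("hit by car", 0.01)]
    return [("stay put", 1.0)]

performance = {"reach other side": 10, "hit by car": -100, "stay put": 0}.get

print(rational_choice(["cross", "wait"], belief_A, performance))  # -> cross
```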

The above scenario doesn’t break our intuitions of what is rational, but what about other scenarios? What about the gambler who knows nothing of the gambler’s fallacy, and believes that, because the die hasn’t rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n+1 times in a row is (1/2)^(n+1), which is vanishingly small for large n)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what’s rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call “folk rationality” (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn’t what I refer to when I say “rational”.
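
A short simulation shows why the gambler’s belief fails (a sketch; the streak length and trial count are arbitrary choices for illustration): die rolls are independent, so even conditional on a long run of non-odd rolls, the next roll is odd with probability 1/2.

```python
import random

# Simulate the gambler's scenario: after a streak of non-odd rolls of a
# fair die, is the next roll more likely to be odd?

STREAK_LEN = 5
TRIALS = 20_000
rng = random.Random(0)  # fixed seed so the run is reproducible

odd_after_streak = 0
for _ in range(TRIALS):
    # Roll until we observe STREAK_LEN non-odd (even) results in a row.
    streak = 0
    while streak < STREAK_LEN:
        streak = streak + 1 if rng.randint(1, 6) % 2 == 0 else 0
    # The very next roll is independent of the streak that preceded it.
    odd_after_streak += rng.randint(1, 6) % 2

print(odd_after_streak / TRIALS)  # ~0.5, nowhere near "definitely odd"
```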


How, then, do we define what is rational in a way that avoids the two issues highlighted above?