I’d flip that around. Whatever action you end up choosing reveals what you actually think has the highest utility, given the information and utility function you have at the time. It’s nearly the definition of utility: if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, but you actually choose A over B, then some fact is missing from that model of your utility function (probably related to risk).
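Here’s a minimal sketch of what I mean, with hypothetical numbers: a risk-neutral utility model prefers gamble B, but adding a concave (risk-averse) utility flips the preference to A, matching the revealed choice.

```python
import math

# Two gambles: A is a safe payout, B is risky with a higher expected value.
# Hypothetical numbers chosen purely for illustration.
gamble_a = [(1.0, 100.0)]                 # (probability, payout): $100 for sure
gamble_b = [(0.5, 250.0), (0.5, 0.0)]     # 50% chance of $250, 50% of nothing

def expected_value(gamble):
    """Risk-neutral utility: just the expected payout."""
    return sum(p * x for p, x in gamble)

def expected_log_utility(gamble, baseline=10.0):
    """Risk-averse utility: concave (log) utility over wealth.
    `baseline` is a hypothetical starting wealth so log(0) never occurs."""
    return sum(p * math.log(baseline + x) for p, x in gamble)

# The risk-neutral model prefers B (expected value 125 > 100)...
assert expected_value(gamble_b) > expected_value(gamble_a)

# ...but the risk-averse model prefers A, matching the revealed choice.
# The "missing fact" was the curvature of the utility function.
assert expected_log_utility(gamble_a) > expected_log_utility(gamble_b)
```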
I’ve recently come to terms with how much fear/anxiety/risk avoidance shows up in my revealed preferences. I’m working on using that for effective long-term planning. The best trick I have so far is weighing “unacceptable status quo continues” as a risk in its own right. That, and making explicit comparisons between the anticipated and experienced outcomes of actions (consistently over-estimating risks doesn’t help anything, and I’ve been doing exactly that).
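For the anticipated-vs-experienced comparison, something like this minimal sketch (all numbers hypothetical) is the kind of bookkeeping I mean:

```python
from statistics import mean

# Hypothetical log of (anticipated badness, experienced badness) on a 0-10
# scale, one entry per past decision. Illustrative numbers only.
risk_log = [(8, 3), (6, 4), (7, 2), (5, 5), (9, 4)]

def estimation_bias(log):
    """Average of (anticipated - experienced): a persistently positive
    value means risks are being consistently over-estimated."""
    return mean(anticipated - experienced for anticipated, experienced in log)

print(f"Average over-estimate: {estimation_bias(risk_log):+.1f} points")
# A stable positive bias is a cue to discount anticipated risks
# before feeding them into long-term plans.
```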
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Whitehead and Russell’s “Principia Mathematica”, but haven’t evaluated it as a source for learning set theory.