Hey, I’m interested in implementing some of these decision theories (and decision problems) in code. I have an initial version of CDT, EDT, and something I’m generically calling “FDT” (though I suspect it’s actually a particular sub-variant of FDT) in Python here, with the core decision theories implemented in about 45 lines of Python code here. I’m wondering if anyone here has suggestions on what it would look like to implement UDT in this framework, either 1.0 or 1.1. I don’t yet have a notion of “observation” in the code, so I can’t yet implement e.g. Parfit’s Hitchhiker or XOR blackmail. I’m interested in suggestions on what that would look like.
Any other comments or suggestions are also much appreciated. I hope to turn this into a top-level post after implementing more decision problems and theories and getting more feedback.
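For concreteness, here's a minimal sketch of the kind of computation involved, using Newcomb's problem with explicit world enumeration. This is illustrative only, not the repo's actual code: the accuracy constant, function names, and the fixed prior used for the CDT intervention are all my own assumptions.

```python
ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def utility(action, prediction):
    # prediction == "one-box" means the opaque box holds $1,000,000
    payoff = 1_000_000 if prediction == "one-box" else 0
    if action == "two-box":
        payoff += 1_000  # the transparent box always holds $1,000
    return payoff

def edt_value(action):
    # EDT conditions on the action, so the prediction matches it
    # with probability ACCURACY
    return sum(
        (ACCURACY if pred == action else 1 - ACCURACY) * utility(action, pred)
        for pred in ("one-box", "two-box")
    )

def cdt_value(action, prior_one_box=0.5):
    # CDT intervenes on the action, severing its link to the
    # prediction; the prediction keeps an (assumed) prior distribution
    return sum(
        (prior_one_box if pred == "one-box" else 1 - prior_one_box)
        * utility(action, pred)
        for pred in ("one-box", "two-box")
    )

edt_choice = max(("one-box", "two-box"), key=edt_value)
cdt_choice = max(("one-box", "two-box"), key=cdt_value)
print(edt_choice, cdt_choice)  # one-box two-box
```

The divergence shows up exactly where you'd expect: EDT one-boxes because conditioning makes the prediction track the action, while CDT two-boxes because the intervention cuts that dependence.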
Here, I made it use graphviz: https://github.com/alexflint/decision-theory/pull/1
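(Not the PR's actual contents, but for readers who want the idea without opening the link: a sketch of emitting Graphviz DOT source for a causal graph, highlighting the intervention node. The edge list and node names here are hypothetical.)

```python
def to_dot(edges, intervention_node):
    # Emit Graphviz DOT source for a causal graph, filling the
    # intervention node with a highlight color.
    lines = ["digraph G {"]
    nodes = sorted({n for edge in edges for n in edge})
    for n in nodes:
        fill = "lightblue" if n == intervention_node else "white"
        lines.append(f"  {n} [style=filled, fillcolor={fill}];")
    for parent, child in edges:
        lines.append(f"  {parent} -> {child};")
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(
    [("Predisposition", "Action"), ("Predisposition", "Prediction"),
     ("Action", "Payoff"), ("Prediction", "Payoff")],
    intervention_node="Action",
)
print(dot)  # paste into any DOT renderer, or feed to `dot -Tpng`
```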
Earth ought to spend at least one programmer-year on basic science of decision theories. Any feature requests?
It would be very helpful for my own debugging to be able to see a table of all the possible worlds, their probabilities, and their utilities. Those worlds are enumerated explicitly in inference.py, and the probabilities and utilities are computed there as well. It would be great to color them according to the value of intervention_node, to see which probabilities and utilities contribute to which conditional expectations in the final argmax. That argmax is this one.
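Something like the following sketch is what I have in mind, assuming worlds are available as (assignment, probability, utility) triples; that representation and the Newcomb-flavored sample data are my guesses, not the actual structures in inference.py.

```python
def world_table(worlds, intervention_node):
    # worlds: list of (assignment_dict, probability, utility).
    # Group rows by the value of the intervention node so it's easy
    # to see which worlds feed which conditional expectation.
    rows = sorted(worlds, key=lambda w: str(w[0][intervention_node]))
    header = f"{'world':<50} {'prob':>8} {'utility':>10}"
    lines = [header, "-" * len(header)]
    for assignment, prob, util in rows:
        desc = ", ".join(f"{k}={v}" for k, v in sorted(assignment.items()))
        lines.append(f"{desc:<50} {prob:>8.4f} {util:>10.1f}")
    return "\n".join(lines)

table = world_table(
    [({"action": "one-box", "prediction": "one-box"}, 0.495, 1_000_000.0),
     ({"action": "one-box", "prediction": "two-box"}, 0.005, 0.0),
     ({"action": "two-box", "prediction": "two-box"}, 0.495, 1_000.0),
     ({"action": "two-box", "prediction": "one-box"}, 0.005, 1_001_000.0)],
    intervention_node="action",
)
print(table)
```

In a terminal, the grouping could become actual color with ANSI escape codes keyed on the intervention value.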
Done, keep 'em coming.