My best guess at a productive subgoal for FAI is the development of decision theory along the lines given in the last post, in order to better understand decision-making, and the impossible problem in particular (how to define preference given an arbitrary agent’s program; what notion of preference is general enough for human preference to be an instance of it).
About a year ago I was still at the “rusty technical background” stage, and my attempts to think about decision theory were not quite adequate. Studying mathematics helped significantly, allowing me to think more clearly and about more complicated constructions. More recently, studying mathematical logic allowed me to see the beautiful formalizations of decision theory I’m currently working on.
I can’t tell you that studying this truckload of textbooks will yield any results, but reading textbooks is something I know how to do, unlike making progress on FAI, so unless I find something better, it’s what I’ll continue doing.
Ambient decision theory, as it currently stands, requires some grasp of logic to think about, and the level of Enderton’s book might be adequate. I’m going deeper in the hope of developing more mathematical muscle for jumping over wider inferential gaps, even if I don’t know in what way. I’m relying on creative surprises.