Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.
It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from “not quite technically true” to, basically, nonsense (at least insofar as it claims to pertain to AIs).
I don’t think my definition is either exotic or inconsistent with the sense used in decision theory.
You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn’t even remotely compatible with the sense used in decision theory.
It is compatible with it as an addition to it: the mathematics of decision theory does not have decisions happening at particular moments in time, but it is consistent with decision theory to recognize that in real life, decisions do happen at particular moments.
I think most people on LessWrong are using “decision” in the sense used in Decision Theory.