I’ve been less engaged with the old topics for the last several months while trying to figure out an updateful way of thinking about decision problems (understanding the role of observations, as opposed to reducing them to non-observations as UDT does; and constructing an ADT-like explicit toy model). This didn’t produce communicable intermediate results (the best I could manage was this post, for which quite possibly nobody understood the motivation). Just a few days ago, I think I figured out a way of formalizing this stuff (which is awfully trivial, but might provide a bit of methodological guidance to future research).
In short, progress is difficult and slow where we lack enough tools to suggest actionable open problems that we could assign to metaphorical grad students. This also drains motivation for most people who could be working on these topics, since there is little expectation of success and little understanding of what such success would look like. Even I work while expecting that I will most likely not produce anything particularly useful in the long run (there is only a limited chance of limited success), but I’m a relatively strange creature. Academia additionally motivates people by rewarding the activity of building on existing knowledge in known ways: work that produces little immediate benefit, but yields visible, possibly high-quality, if mostly useless results that gradually accumulate into systematic improvements.
Uhh, so why don’t I know about it? Could you send an email to me or to the list?
Because it’s awfully trivial and it’s not easy to locate all the pieces of motivation and application that would make anyone enthusiastic about this. Like the fact that action and utility are arbitrary mathematical structures in ADT and not just integer outputs of programs.
Hm, I don’t see any trivial way of understanding observational knowledge except by treating it as part of the input-output map as UDT suggests. So if your idea is different, I’m still asking you to write it up.
In one sentence: the agent sees the world from within a logical theory in which observations are nonlogical symbols. I’ll of course try to write this up in time.
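To gesture at what “observations as nonlogical symbols” could mean in practice, here is a toy Python sketch of my own devising (the payoff table, function names, and the tiny two-observation world are all invented for illustration; this is emphatically not the actual formalization). The observation starts out as an uninterpreted symbol; observing a value adds an axiom fixing it, and the agent then derives consequences of each action inside the extended theory. For contrast, a crude UDT-style agent picks a whole observation-to-action policy up front, without updating:

```python
from itertools import product

# World model as a set of "axioms": entries (obs, act) -> util, read as
# "O = obs AND A = act implies U = util". O is the nonlogical observation
# symbol; it has no fixed value until an observation axiomatizes it.
# The payoff table is hypothetical, chosen only for illustration.
WORLD = {
    ("rain", "umbrella"): 10,
    ("rain", "none"): 0,
    ("sun", "umbrella"): 3,
    ("sun", "none"): 8,
}

def consequences(theory, obs_axiom):
    """Derive the utility of each action in theory + {O = obs_axiom}."""
    return {act: u for (obs, act), u in theory.items() if obs == obs_axiom}

def updateful_choice(theory, observed):
    """Updateful agent: extend the theory with the observation axiom,
    then pick the action with the best derived utility."""
    derived = consequences(theory, observed)
    return max(derived, key=derived.get)

def udt_policy(theory, prior):
    """Contrast: a UDT-style agent picks a whole observation->action map
    up front, scoring policies by prior-weighted utility (no updating)."""
    obs_vals = sorted({o for o, _ in theory})
    acts = sorted({a for _, a in theory})
    best, best_score = None, float("-inf")
    for assignment in product(acts, repeat=len(obs_vals)):
        policy = dict(zip(obs_vals, assignment))
        score = sum(prior[o] * theory[(o, policy[o])] for o in obs_vals)
        if score > best_score:
            best, best_score = policy, score
    return best

print(updateful_choice(WORLD, "rain"))  # -> umbrella
print(udt_policy(WORLD, {"rain": 0.5, "sun": 0.5}))
```

In this separable toy world the two agents agree (the UDT policy maps each observation to the same action the updateful agent picks after observing it); the point of the sketch is only the difference in where the observation enters the reasoning, as an added axiom versus as an index into a policy.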
I’m reasonably sure that’s because the problem you see doesn’t actually exist in your example, and you only think it does because you misapplied UDT. If you think this is important, why did you never get back to our discussion there as you promised? That might have resulted either in a better understanding of why this is so difficult for other people to grasp (if I was misunderstanding you or making a non-obvious mistake), or in a dissolution of the apparent problem, or in examples where it actually comes up (if I was right).
For some reason, I find it difficult to reason about these problems, and have never acquired a facility for seeing them through easily, so it’s hard work for me to follow these discussions. I expect I was not making an error in understanding the problem as it was intended, and figuring out the details of your way of parsing the problem was not a priority.
It feels emotionally difficult to terminate a technical discussion in which all participants have invested nontrivial effort. Postponing it for a short time can be necessary, and in that case there is an impulse to signal to others that I don’t intend to actually stop the discussion, that the present pause is temporary; but then the motivation to continue evaporates or gets revoked on reflection. I’ll try to keep in mind that making promises to continue a discussion is a bad way of communicating this (it happened again recently, in a discussion with David Gerard about the merits of wiki-managing policies; I edited out the promise within a few hours).
At this point, if you feel that you have a useful piece of knowledge which our discussion failed to communicate, I can only suggest that you write up your position as a (more self-contained) discussion post.