For what it’s worth as a sanity check: the grandparent seems correct in stating that Silas’s graph doesn’t handle the problem it describes, simply because it is a slightly different problem. In the grandparent’s problem, what matters is the agent’s own knowledge of likely hardware failure modes, rather than Omega’s.
Well, Psy-Kosh had repeatedly been bringing up that Omega has to account for how something might happen between my choosing an algorithm and the algorithm I actually implement, because of cosmic rays and whatnot, so I thought that one was more important.
However, I think the “innards” node already contains one’s knowledge about what kinds of things could go wrong. If I’m wrong, add that as a parent of the boxed node; the link is clipped when you compute the “would” anyway.
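To make the clipping concrete, here is a minimal toy sketch (my own illustration with made-up node names, not anything from Silas’s actual graph) of what “add it as a parent and clip the link when computing the ‘would’” amounts to:

```python
# Toy sketch: a causal graph where "innards" feeds both the chosen
# algorithm and the box contents, and the counterfactual "would" is
# computed by clipping incoming links to the decision node before
# intervening on it.

from copy import deepcopy

# Hypothetical node names; each node maps to the parents it depends on.
graph = {
    "innards":      [],                          # includes knowledge of failure modes
    "algorithm":    ["innards"],
    "box_contents": ["innards"],                 # Omega predicts from innards
    "outcome":      ["algorithm", "box_contents"],
}

def clip_and_intervene(parents, node):
    """Return a copy of the graph with incoming links to `node` removed,
    i.e. the surgery done before evaluating the counterfactual 'would'."""
    surgered = deepcopy(parents)
    surgered[node] = []    # the link from "innards" is clipped here
    return surgered

intervened = clip_and_intervene(graph, "algorithm")
print(intervened["algorithm"])   # [] -- the algorithm node no longer inherits from innards
```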
OOOOOOH! I think I see part, but not all, of the misunderstanding here. I wasn’t talking about how Omega can take this into account, I was talking about how the agent Omega is playing games with would take this into account.
I.e., not how Omega deals with the problem, but how I would.
Problems involving Omega probably aren’t useful examples for demonstrating your problem either way, since Omega will accurately predict our actions regardless and our identity angst is irrelevant.
I’d like to see an instantiation of the type of problem you mentioned above, involving the many explicitly dependent systems: something involving a box to pick or a bet to take. Right now the requirements of the model are not defined much beyond ‘apply standard decision theory, with an included mechanism for handling uncertainty, at such time as the problem becomes available’.
So? The graph still handles that.