...Huh? My version of Omega doesn’t bother predicting the agent, so you gain nothing by crippling its prediction abilities :-)
ETA: maybe it makes sense to let Omega have a “trembling hand”, so it doesn’t always do what it resolved to do. In this case I don’t know if the problem stays or goes away. Properly interpreting “counterfactual evidence” seems to be tricky.
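The "trembling hand" idea can be made concrete with a small sketch. This is a hypothetical Monte Carlo toy, not anything from the thread: it assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one), a perfectly accurate prediction, and a tremble probability `EPSILON` with which Omega does the opposite of what it resolved to do.

```python
import random

# Hypothetical sketch of a "trembling hand" Omega (assumed parameters,
# not from the original discussion).
EPSILON = 0.01            # chance Omega's hand "trembles"
BIG, SMALL = 1_000_000, 1_000

def omega_fills_box(agent_one_boxes: bool) -> bool:
    """Omega resolves to fill the opaque box iff it predicts one-boxing,
    but with probability EPSILON it fails to act on that resolution."""
    resolution = agent_one_boxes          # assume a perfect prediction
    if random.random() < EPSILON:
        return not resolution             # the trembling hand
    return resolution

def payoff(agent_one_boxes: bool) -> int:
    filled = omega_fills_box(agent_one_boxes)
    opaque = BIG if filled else 0
    # Two-boxers take the transparent $1,000 as well.
    return opaque if agent_one_boxes else opaque + SMALL

def expected_payoff(agent_one_boxes: bool, trials: int = 100_000) -> float:
    return sum(payoff(agent_one_boxes) for _ in range(trials)) / trials
```

Under these assumptions one-boxing still dominates in expectation for small EPSILON (roughly (1-ε)·$1M versus ε·$1M + $1k), so the tremble by itself doesn't change the ranking of strategies; whether it dissolves the "counterfactual evidence" puzzle is a separate question.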
I would consider an Omega that didn't bother predicting even in that case to be 'broken'. Omega is good at good-faith natural-language implementation. Perhaps I would consider it one of Omega's many siblings, one that requires more formal shackles.