UDT sidesteps that question as well, because while it makes decisions, it never needs to compute things like “beliefs about your future sensory input #11, given sensory inputs #1-#10”. I would say that a UDT agent doesn’t have such beliefs.
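A minimal sketch of what I mean, in a toy two-world setup (the worlds, payoffs, and names are hypothetical illustrations, not any standard UDT formalization): the agent picks a whole policy by scoring it against its prior over worlds, and nothing like “P(input #11 | inputs #1-#10)” is ever computed.

```python
from itertools import product

# Hypothetical toy setup: two possible worlds, one binary observation,
# one binary action. All names are illustrative.
worlds = ["heads_world", "tails_world"]
prior = {"heads_world": 0.5, "tails_world": 0.5}

def observation(world):
    # What the agent would see in each world.
    return "H" if world == "heads_world" else "T"

def utility(world, action):
    # Payoff table for this toy problem.
    return 1.0 if (world == "heads_world") == (action == "bet_heads") else 0.0

actions = ["bet_heads", "bet_tails"]

# A policy maps each possible observation to an action.
policies = [dict(zip("HT", combo)) for combo in product(actions, repeat=2)]

def udt_value(policy):
    # UDT scores entire policies against the *prior* over worlds.
    # Note what is absent: no posterior like P(world | observation) is
    # ever formed -- the observation only enters through the policy.
    return sum(prior[w] * utility(w, policy[observation(w)]) for w in worlds)

best_policy = max(policies, key=udt_value)
print(best_policy)  # {'H': 'bet_heads', 'T': 'bet_tails'}
```

The observation still influences behavior (through the policy), but at no point does the agent hold a belief conditional on having seen it.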
Not quite sure what this part has to do with what I wrote. If you still think it’s relevant, can you explain how?
Yes, it seems most of my comment was irrelevant, and even the original question was so weird that I can no longer make sense of it. Sorry.
Your answers have shown me that my original comment was wrong: the question of “algorithmicness” is uninteresting unless we imagine that algorithms can have “subjective experience”, which brings us back to consciousness again. Oh well, another line of attack goes dead.