Not sure, this came up in a few previous conversations. If an agent is almost certain that it’s completely indifferent to everything, the most important thing it could do is to pursue the possibility that it’s not indifferent to something, that is, to work primarily on figuring out its preferences on the off chance that its current estimate might turn out to be wrong. So it still takes over the universe and builds complicated machines (assuming it has enough heuristics to carry out this line of reasoning).
Say, “Maybe 1957 is prime after all, and the hardware previously used to conclude that it’s not was corrupted,” which is followed by a sequence of experiments that test the properties of the preceding experiments in more and more detail, and then those experiments are investigated in turn, and so on and so forth, to the end of time.
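(For reference, the doubted conclusion is a short, directly checkable computation: 1957 = 19 × 103, so it is composite. A minimal trial-division sketch, with an illustrative helper name of my choosing, shows how little work the original check takes compared with the endless re-verification described above.)

```python
def smallest_factor(n):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

# 1957 = 19 * 103, so the check returns 19 and 1957 is not prime.
print(smallest_factor(1957))  # 19
```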