As a sort of metaphor/intuition pump/heuristic thingy to make sure I understood things right:
Can you think of this as A living in Platonic space/the Tegmark level 4 multiverse “before” your universe and having unlimited power, you importing B from it and using that repeatedly as your limited AI, and the set of axiom sets A cares about as a “level 5 Tegmark multiverse”, with each of the logics being its own Tegmark 4 multiverse?
Perhaps not so useful for building such an AI, but I find these kinds of rough, more intuitive approximations useful when trying to reason about it with my own human brain.
I’m not sure “existence” is the best intuition pump; maybe it’s better to think in terms of “caring”, like “I care about what these programs would return” and “I care about what would happen if these logical facts were true”. There might well be only one existing program and only one set of true logical facts, but we care about many different ones, because we are uncertain.
I already have an intuition setup where “what I care about” and “what really exists” are equivalent. Since, you know, I can’t think of anything else “exists” could mean, and “what I care about” is how the word seems to get used?
This may or may not need an additional clause: things that exist relative to things that exist also exist, recursively.
Let’s say you “care” about some hypothetical if you’d be willing to pay a penny today unconditionally in order to prevent your loved ones from dying in that hypothetical. If we take some faraway digit of pi, you’ll find that you “care” about both the hypothetical where it’s even and the hypothetical where it’s odd, even though you know in advance that one of those provably does not “exist”. And if you only had a limited time to run a decision theory, you wouldn’t want to run any decision theory that threw away these facts about your “care”. That’s one of the reasons why it seems more natural to me to use “care” rather than “existence” as the input for a decision theory.
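A minimal sketch of that last point, with made-up credences and payoffs (this isn’t anyone’s actual proposal, just an illustration): a decision rule that takes “care” weights over hypotheticals as its input never has to throw away the branch that provably does not “exist”.

```python
def expected_value(action, hypotheticals):
    """Sum each hypothetical's payoff for `action`, weighted by how much we 'care' about it."""
    return sum(h["care"] * h["payoff"][action] for h in hypotheticals)

# Suppose loved ones die in the "digit is even" branch unless you pay a penny.
# Exactly one branch is logically true, but you can't cheaply compute which,
# so both stay in the sum with nonzero "care" weight. All numbers are invented.
hypotheticals = [
    {"name": "faraway pi digit is even", "care": 0.5,
     "payoff": {"pay_penny": -0.01, "refuse": -1_000_000.0}},
    {"name": "faraway pi digit is odd", "care": 0.5,
     "payoff": {"pay_penny": -0.01, "refuse": 0.0}},
]

best = max(["pay_penny", "refuse"], key=lambda a: expected_value(a, hypotheticals))
print(best)  # -> "pay_penny": the provably-false branch was never discarded
```

A rule keyed on “existence” instead would first have to settle which branch is real before acting, which is exactly the computation you can’t afford.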
Sounds like both of us could use either interpretation without any difference in conclusion, and just find different abstractions useful for thinking about it due to small differences in which intuitions we’ve previously trained. Just a minor semantic hiccup and nothing more.