Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?
You are utterly unlimited in introducing additional knowledge. It just has to be true, is all. Introducing the dietary science on whether I should eat tuna, salmon, hummus, meat, egg, or just plain salad and bread for lunch is entirely allowed, despite my currently going by a heuristic of “tuna sandwiches with veggies on them are really tasty and reasonably healthful.”
I understand from this comment that the future-you to choose is the limit future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what it is that you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...).
This is roughly my line of reasoning as well. What I find interesting is that:
A) People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.
B) Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, “I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know.”
Hm, I wonder about orienting toward the 90-year-old self. When I model myself at 90, I would want to know that I lived a life I consider fulfilling, and that may involve exercise and a healthy diet, but also good social connections, and the knowledge that I made a positive impact on the world, for example through Intentional Insights. Ideally, I would continue to live beyond 90, though, and that may involve cryonics or maybe even a friendly AI helping us all live forever—go MIRI!
Uhhh… sounds good to me. Well, sounds like the standard LW party-line to me, but it’s also all actually good. Sometimes the simple answer is the right one, after all.
You are utterly unlimited in introducing additional knowledge. It just has to be true, is all.
Hmmm. This makes finding the correct answer very tricky, since in order to be completely correct I have to factor in the entirety of, well, everything that is true.
The best I’d be able to practically manage is heuristics.
People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.
Other people are more alien than… well, than most people realise. I often find data to support this hypothesis.
Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, “I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know.”
I don’t think it’s possible to do better than heuristics; the only question is how good your heuristics are. And your heuristics depend on your knowledge; learning more, whether through formal education or practical experience, will help to refine those heuristics.
Hmmm… which is a pretty good reason for further education.