I think that would just yield your revealed preference function. As I said, trying to optimize that is like a falling apple trying to optimize “falling”. It doesn’t describe what you want to do; it describes what you’re going to do next no matter what.
I think that would just yield your revealed preference function.
No, it wouldn’t. It would read the brain and resolve it into a utility function. If it resolves into a revealed preference function instead, then the FAI is bugged, because I told it to deduce a utility function.
If we accept that what someone ‘wants’ can be distinct from their behaviour, then “what do I want?” and “what will I do?” are two different questions (unless you’re perfectly rational). Presumably, a FAI scanning a brain could answer either question.
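To make the distinction concrete, here is a minimal Python sketch (my own toy example, not anything proposed in the thread): an agent whose utility function ranks one option highest, but whose actual decision procedure, and therefore its revealed preference function, picks something else. A scanner that only looked at behaviour would recover the second function, not the first.

```python
# Toy illustration (hypothetical): "what I want" vs. "what I will do".

# The agent's utility function: it genuinely values exercise more.
utility = {"exercise": 10, "watch_tv": 3}

def choose(options):
    """What the agent will actually do: a biased decision procedure
    that heavily discounts effortful options (akrasia)."""
    effort = {"exercise": 8, "watch_tv": 0}
    return max(options, key=lambda o: utility[o] - 2 * effort[o])

options = ["exercise", "watch_tv"]

# Answer to "what do I want?" -- read off the utility function.
wants = max(options, key=lambda o: utility[o])

# Answer to "what will I do?" -- the revealed preference, inferred
# purely from behaviour.
does = choose(options)

print("what the agent wants:", wants)  # exercise
print("what the agent does: ", does)   # watch_tv
```

The two answers only coincide if the decision procedure is a faithful maximiser of the utility function, i.e. if the agent is perfectly rational.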