The purpose of studying LDT would be to realize that the type signature you currently imagine Steve::consequentialist preferences to have is different from the type signature that Eliezer would imagine.
The starting point for the whole discussion is a consequentialist preference—you have desires about the state of the world after the decision is over.
You can totally have preferences about the past that are still influenced by your decision (e.g. Parfit’s hitchhiker).
Decisions don’t cause future states; they influence which worlds end up real vs. counterfactual. Preferences aren’t over future states but over worlds: which worlds would you like to be more real?
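To make the type-signature contrast concrete, here’s a minimal Haskell sketch. The types and utility functions (`FutureState`, `World`, the Parfit’s-hitchhiker strings) are made-up toys for illustration, not anyone’s actual formalism:

```haskell
module Main where

-- Toy types, purely illustrative.
type FutureState = String   -- a description of the world after the decision is over
type World       = [String] -- a whole history: past, decision, and aftermath

-- Reading 1: preferences are over states in the (far) future.
prefOverFutureStates :: FutureState -> Double
prefOverFutureStates s
  | s == "alive in town" = 1.0
  | otherwise            = 0.0

-- Reading 2: preferences are over whole worlds, past included; the question
-- is which worlds you'd like to be more real.
prefOverWorlds :: World -> Double
prefOverWorlds w
  | "picked up in the desert" `elem` w = 1.0   -- a fact about the past
  | otherwise                          = 0.0

main :: IO ()
main = do
  print (prefOverFutureStates "alive in town")
  print (prefOverWorlds ["picked up in the desert", "paid the driver in town"])
```

The Parfit’s hitchhiker point is the second signature: the world where you got picked up scores higher than the one where you didn’t, even though the pickup is already in the past by the time you decide whether to pay.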
AFAIK Eliezer only used the word “consequentialism” in abstract descriptions of the general fact that you (usually) need some kind of search in order to find solutions to new problems. (I think it’s basically a new word for what he used to call optimization.) Maybe he also used the outcome pump as an example, but if you asked him how consequentialist preferences look in detail, I’d strongly bet he’d say something like preferences over worlds rather than preferences over states in the far future.