Thoughts as I worked through the exercise:
Is there something I’m missing? It seems like TurnTrout has already given us all the pieces. It seems we can say: “Something has high impact for someone if it either affects something they value (the personal side) or affects their ability to do things more broadly (the objective side).”
Something is a big deal if it affects our ability to take future actions? (That seems to be the core of something being objectively bad.)
Is the point here to unify it into one sort of coherent notion?
Okay, so let’s back up for a second and try to do all of this from scratch... When I think about what “impact” feels like to me, I imagine something big, like the world exploding.
But it doesn’t necessarily have to be a big change. A world where everyone has one less finger doesn’t seem to be a big change, but it does seem to be high impact. Or a world where the button that launches nukes is pressed rather than not pressed. Maybe we need to look further into the future? (Do we need discounting? Maybe if nukes get launched in the far future, it’s not that bad?)
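One way to make the discounting question concrete is to weight changes in the agent’s valuation by how far in the future they occur. This is just a toy sketch of that idea; the value function, the per-step differences, and the discount factor gamma are all hypothetical stand-ins, not anything from TurnTrout’s framing:

```python
def discounted_impact(valuation_diffs, gamma=0.99):
    """Sum of per-timestep changes in the agent's valuation,
    with a change at time t weighted by gamma ** t."""
    return sum((gamma ** t) * abs(d) for t, d in enumerate(valuation_diffs))

# Nukes launched now vs. the same launch 50 steps in the future:
# identical raw change, but discounting makes the far-future one smaller.
now = discounted_impact([100.0])                  # change at t = 0
later = discounted_impact([0.0] * 50 + [100.0])   # same change at t = 50
print(now > later)  # True
```

Under this sketch, whether the far-future launch is "not that bad" reduces to how small gamma is.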
I think it’s important to evaluate impact relative to the agent in question. You also want to look at what changed. Small changes aren’t necessarily low impact, but I think large changes will correspond to high impact.
It seems like “A change has high impact if the agent’s valuation of the after state is very different from their valuation of the current state” is the best I have after 15 minutes...
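That candidate definition can be sketched directly. Here `V`, the toy states, and the finger-counting agent are hypothetical illustrations I’m introducing, not anything from the exercise:

```python
def impact(V, current_state, after_state):
    """Candidate definition: impact = how much the agent's valuation
    of the world shifts between the current and the after state."""
    return abs(V(after_state) - V(current_state))

# Toy agent that values intact fingers: a physically "small" change
# still registers as nonzero impact under this definition.
V = lambda state: state["fingers"]
print(impact(V, {"fingers": 10}, {"fingers": 9}))  # 1
```

Note this captures the personal side (it depends on what this agent values) but not obviously the objective side (effects on the agent’s ability to act), which is exactly the unification the exercise seems to be after.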