This is a good reminder for those who haven’t thought of it or haven’t examined their voices and habits for a while. But https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing, and watch out for the rationalist habit of overweighting the legible (measurable, calculable) portions of life choices and ignoring the hidden and hard-to-quantify ones.
Consider that Philip may prefer not to have gotten much done on the dimensions he can communicate, and the heat gives him an easy justification. Or that the emotional cost of picking and committing to spend on the unit is higher than the benefit for a week or two.
What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?
To the extent that “you” care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don’t, then guess to maximize the guesser’s payout even at the expense of clones (who will make the same guess, but be wrong more often).
Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.
I am still finding it difficult to understand how the focus on the causality of mapmaking is more helpful than examining the intent to summarize (which encompasses which information gets thrown away, based on what domain of prediction the map is created for) and the (pretty purely Bayesian) accuracy of predictions.
Note that mapping is a type of abstraction that’s independent of causal chains and feedback/control loops. You can make an excellent thermostat that doesn’t understand a thing about thermodynamics, air flow, or control theory (though you may need to know something about these things in order to make the thermostat work well).
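To make that concrete, here’s a minimal sketch (my own illustration, not from the post) of bang-bang control: the device regulates temperature purely by comparing numbers, with no model of thermodynamics, air flow, or control theory inside it.

```python
# A thermostat that "understands" nothing: it just compares the reading
# to a setpoint with a small dead band and switches the heater.
def thermostat_step(current_temp, setpoint, heater_on, hysteresis=0.5):
    """Return whether the heater should be on for the next tick."""
    if current_temp < setpoint - hysteresis:
        return True           # too cold: switch the heater on
    if current_temp > setpoint + hysteresis:
        return False          # too warm: switch it off
    return heater_on          # inside the dead band: leave it as-is
```

The knowledge of air flow and heater behavior shows up only in the designer’s choice of the hysteresis band and sensor placement, not in anything the device itself represents.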
Hmm. I might have a sense of where you’re going, but the terminology is confusing to me. Nothing happens spontaneously, every future state happens because of the past state of the universe, so your intro makes very little sense to me. I think the distinction you’re pointing to isn’t spontaneous/caused, I think it’s natural/artificial, or maybe automatic/planned, or maybe inevitable/intentional. In any case, it seems to be about human conscious decisions to create the map. I’m not sure why this doesn’t apply to the human conscious decision to create the roads being mapped, but I suspect there’s an element of objective/subjective in there or full-fidelity/simplified-model.
I’m also unsure if the “cartographic process” is the human intent to make a map/model, or the physical steps (measurements, update of display, etc.) that generate the map.
Not new, as far as I could tell, but it’s a refreshingly clear statement of strict determinism: free will is an illusion, and “possible” is in the map, not the territory. “Deciding” is how a brain feels as it executes its algorithm and takes the predetermined (but not previously known) path.
He does not resolve the conflict that it feels SOOO real as it happens.
Understanding how it works and remembering details when asked out of context may be very different things. I wish the participants had been given follow-up questions about how it works, and then the exercises repeated when a bicycle was present.
The topic is interesting, but no discussion about it is interesting. These are not contradictory.
The open question about strong determinism vs libertarian free will is interesting, and there is a yet-unexplained contradiction between my felt experience (and others’ reported experiences) and my fundamental physical model of the universe. The fact that nobody has any alternative model or evidence (or even ideas about what evidence is possible) that helps with this interesting question makes the discussion uninteresting.
True. There are two distinctions (I think) you’re making from base utilitarianism (preferences over state of the universe in terms of agent-experienced utility):
1) This is about path, not state. You have an opinion about something to do/experience that’s independent of any difference in expected value of a future state. It’s also (I think) explicitly indexical—you care that it’s you having this experience, not that it’s experienced by more people.
2) This is about … something … which isn’t on the pain/pleasure axis. I’m less sure of this one, as I tend to experience identity-affirming things as somewhat pleasurable and I’m not sure that’s any less comparable on this dimension than any other pleasure or personal disappointment.
The torture example is similar on the first point, but misses the second. Is that roughly correct?
I’ve heard this referred to as “experience preference”, or sometimes “experiential utility”, in that they are things you want to experience, distinct from states you want the universe to be in. (skipping the rabbit hole of whether all experience is memory or whether this is a preference for having a memory vs having an experience).
It occurs a lot in the negative as well—things you don’t want to experience (or don’t want ANYONE to experience), regardless of the state of the universe afterward. Many torture-tradeoff discussions hinge on this point—to a lot of people, suffering is bad not because of its consequences or because it reduces a hedonic sum, but because it is a dimension of bad in itself.
I really appreciate the clarity of including both the math and the good examples. It may be useful to acknowledge that many of the observation/communication limits are in fact contingent on something unstated. What is it that keeps you from observing another variable or drawing better-fit categories?
but that’s no reason to not aspire to the true precision of the Bayes-structure to whatever extent possible
Which brings us to the actual hard question: to what extent _is it_ possible? A lot of the time, the right answer is to notice that the compromise that’s commonly made is undershooting the precision that you could handle, and determine which of your audience or conversational partners are ahead of or behind you in your modeling of the actual world. In your first example, you can ABSOLUTELY talk about financial emancipation, contract enforcement, or other adult rights that can be applied to some minors. Some things are taboo or sacred and you probably can’t openly talk about them without severely violating norms, but you can still model the details inside your head to avoid truth-deflecting norms/laws.
Picking the right level of detail for the person or group (or internally, topic) you’re dealing with is, IMO, far more important than picking the actual sharp lines to draw on a fuzzy canvas.
If you haven’t interviewed in a while, no harm in practicing on them. It does seem pretty woo-ey, but one can make a pretty strong argument that most popular woo contains some actually helpful elements. I do worry that the job is actually a sales job in the guise of a technician role (like it’s based on commissions or quotas for clients you bring in), and if that’s not what you want, you should be extremely clear about it before you accept.
I like this insight—not only nonlinear but actually discontinuous. There are some marginal instants of torture that are hugely negative, mixed in with those that are more mildly negative. This is due to something that’s often forgotten in these discussions: the ongoing impact of a momentary experience.
Being “broken” by torture may make it impossible to ever recover enough for any future experiences to be positive. There may be a few quanta of brokenness, but it’s not the case that every marginal second is all that bad, only some of them.
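A toy way to picture that (my numbers, purely illustrative): total disutility as a sum over seconds, where most seconds carry a modest cost but a few breaking points carry a huge, lasting one, so the marginal cost jumps rather than growing smoothly.

```python
# Illustrative only: most seconds cost a little; a few "quanta of
# brokenness" cost enormously more because of their lasting impact.
def total_disutility(seconds, breaking_points=(600, 3600),
                     per_second=-1.0, break_cost=-100_000.0):
    total = 0.0
    for t in range(1, seconds + 1):
        total += per_second
        if t in breaking_points:
            total += break_cost
    return total

# The marginal second at a breaking point dwarfs the one just before it:
print(total_disutility(600) - total_disutility(599))  # -100001.0
print(total_disutility(599) - total_disutility(598))  # -1.0
```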
I think you’re missing at least one key element in your model: uncertainty about future predictions. Commitments have a very high cost in terms of future consequence-affecting decision space. Consequentialism does _not_ imply a very high discount rate, and we’re allowed to recognize the limits of our prediction and to give up some power in the short term to preserve our flexibility for the future.
Also, one of the reasons that this kind of interaction is rare among humans is that truly binding commitment is impossible for us. We can change our minds even after making an oath—often with some reputational consequences, but still possible if we deem it worthwhile. Even so, we’re rightly reluctant to make serious commitments. An agent who can actually enforce its self-limitations is going to be orders of magnitude more hesitant to do so.
All that said, it’s worth recognizing that an agent that’s significantly better at predicting the consequences of potential commitments will pay a lower cost for the best of them, and has a material advantage over those who need flexibility because they don’t have information. This isn’t a race in time, it’s a race in knowledge and understanding. I don’t think there’s any way out of that race—more powerful agents are going to beat weaker ones most of the time.
Let us suppose <impossible thing>. Now how does <impossible result> remain impossible? Maybe the universe has a mysterious agency we can trick or bargain with!
I think you’ll need to back up a bit further if you want to explore this. “Time travel is possible” isn’t well-defined enough to reason about, except in the human conceptual space with no physics attached. And if you’re assuming away physics, you don’t need to explain anything; just let the paradoxes happen.
I can’t tell if this is just another example of the general point that strategic choices tend to be valuable (guaranteed non-negative, but in practice usually positive). OF COURSE an opponent’s choice is going to reduce your value in a zero-sum game.
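For the “guaranteed non-negative” part, a quick sketch with a made-up payoff matrix (not from the post): adding an option for yourself can’t lower your pure-strategy maximin value, while adding an option for the opponent can’t raise it.

```python
# Zero-sum game from the row player's view: rows = my options,
# columns = opponent's options, entries = my payoff.
def maximin(matrix):
    # Assume the opponent answers each of my options with the column
    # that hurts me most; I pick the row whose worst case is best.
    return max(min(row) for row in matrix)

base = [[3, 1],
        [0, 2]]
print(maximin(base))                          # 1

# An extra row (my choice) can only help or do nothing:
print(maximin(base + [[2, 2]]))               # 2

# An extra column (opponent's choice) can only hurt or do nothing:
print(maximin([row + [0] for row in base]))   # 0
```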
I do want to warn against applying this to other aspects of life that aren’t purely zero-sum and aren’t designed by a human to balance the power between both parties. See also https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional
I don’t understand why https://en.wikipedia.org/wiki/Theory_of_the_second_best doesn’t get more consideration. In a complex interconnected system, V can not only be much less than E, it can be much less than would be obtained with ~C. You may not get mere utopia, you may get serious dystopia.
Other reasons that people may have (I have some of these reasons, but not all):
- not a classical utilitarian
- don’t believe those timelines
- too distant to feel an emotional tie to
- unclear what to do even if it is a priority
- very high discount rate for future humans
- belief that moral value scales with cognitive ability (an extremely smart AI may be worth a few quintillion humans in a moral/experiential sense)
Of these, I think the one that I’m personally least moved by, while acknowledging it as one of the better arguments against utilitarianism, is the last. It’s clear that there’s SOME difference in moral weight for different experiences of different experiencers. Which means there’s some dimension on which a utility monster is conceivable. If it’s a dimension that AGI will excel on, we can maximize utility by giving it whatever it wants.
(I don’t write fiction, but I have run and playtested a lot of RPGs, which share many of the worldbuilding elements.)
Among the hard parts is figuring out how much suspension of disbelief your audience will willingly bring, and on what topics. This _is_ fiction, so we’re not generally trying to truly predict a hypothetical “possible” outcome; we’re trying to highlight similarities and differences from our own world. This VERY OFTEN means assuming a similarity (where the point of departure has less effect than is likely) and then justifying it, or constraining the departure so it’s less difficult to maintain that this element of society would still be recognizable.
Ehn. For kids who would EVER accept this as fair, you’re putting too much thought into the politics. If the kids are this manipulable, they’ll probably accept your authority in the one-shot case as well.
Also, more iterations give them more time to realize that you’re cheating (by shifting how you count syllables) or that the game is fully deterministic (and you’re cheating by deciding who to start with).
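To illustrate the determinism (hypothetical rhyme length, not from the post): the outcome of a single counting-out round is a pure function of the group size, the syllable count, and the starting child, so whoever picks the start picks the result.

```python
# Counting-out is arithmetic: the rhyme lands on a fixed offset from
# wherever the counter chooses to begin.
def counted_out(n_kids, syllables, start):
    """Index of the kid the last syllable lands on."""
    return (start + syllables - 1) % n_kids

# With 4 kids and a 13-syllable rhyme, each starting point selects a
# different kid, so the "random" pick is entirely the counter's choice.
print([counted_out(4, 13, s) for s in range(4)])  # [0, 1, 2, 3]
```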
This is only usable in low-stakes cases where the participants don’t mind that it’s not fair. And in those cases, don’t waste time on pointless complexity. Of course, if the complexity is part of the entertainment, I reverse that advice—choose the single-elimination method to draw out the tension of not knowing.