In fact, current lie detector technology isn’t that good—it relies on a repetitive and careful mix of calibration and test questions, and even then isn’t reliable enough for most real-world uses. The original ambiguity remains: the problem is underspecified. Why do I believe that its accuracy for other people (probably mostly psych students) applies to my actions?
Yes, anyone arguing that there is a correct probability, without defining what that probability is predicting, is misguided.
This used to be common; it was called “country club billing”. Most credit cards stopped it in the ’70s; American Express continued it through part of the ’90s. It’s expensive for merchants and card processors, not valued by most customers, and as far as I know nobody is seriously considering bringing it back.
The various contradictory incentives about data privacy and who knows what when are all trivial compared to the amount of work it’d take, for no significant value to customers. The number of humans who bother to keep and categorize receipts is TINY, and it’s probably correlated with not spending very much on credit-card fees. Attracting these customers may well be negative-value, but even if it’s positive, it’s not worth much effort.
I don’t think you need to claim that there are different kinds of uncertainty to solve these. If you clearly specify what predicted experiences/outcomes you’re applying the probability to, both of these examples dissolve.
“Will you remember an awakening” has a different answer than “how many awakenings will be reported to you by an observer”. The uncertainty about each is the same kind: ignorance.
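A quick simulation makes the dissolution concrete. This is my own sketch, assuming the standard Sleeping Beauty setup (fair coin; one awakening on heads, two on tails, with memory erased between them); the two differently specified questions simply have different answers:

```python
import random

# Simulate the assumed setup: fair coin; heads -> 1 awakening, tails -> 2.
N = 100_000
heads_flips = 0        # experiments where the coin landed heads
heads_awakenings = 0   # awakenings that occur under heads
total_awakenings = 0

for _ in range(N):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    heads_flips += heads
    heads_awakenings += awakenings if heads else 0
    total_awakenings += awakenings

# "What fraction of experiments landed heads?" -> ~1/2
print("per experiment:", heads_flips / N)
# "In what fraction of awakenings is the coin heads?" -> ~1/3
print("per awakening:", heads_awakenings / total_awakenings)
```

Same ignorance in both cases; only the reference class of the question changes.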
I think we all recognize that this is a bit of an exaggeration
No, this is mathematically true. A strict 1% improvement over 365 consecutive cycles compounds to about 37.78× the starting point, a roughly 3678% improvement. Compound interest is really that powerful. No exaggeration there.
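To spell out the arithmetic (a trivial sketch; the decay line is just the symmetric case for comparison):

```python
# 1% improvement per cycle, compounded over 365 cycles.
growth = 1.01 ** 365
print(f"{growth:.2f}x the starting point")       # ~37.78x
print(f"{(growth - 1) * 100:.0f}% improvement")  # ~3678%

# The symmetric case: 1% decay per cycle.
print(f"{0.99 ** 365:.4f}x the starting point")  # ~0.0255x
```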
It’s misleading, though. The model doesn’t apply to most human improvement. It’s almost impossible to improve any metric by 1% in a day, almost impossible to always avoid negative growth, and certainly impossible (for any real human) to maintain such a rate of improvement: declining marginal returns for interventions kick in quickly.
I think it’s worth noting decay, but you also need to recognize that novelty is a different dimension than growth in capability. You can have lots of novelty with zero change (neither improvement nor decay) in your likelihood of furthering any goals.
Sure. I’m asking about the “we all saw how that worked out” portion of your comment. From what I can see, it worked out fairly well. Are you of the opinion that the French Revolution was an obvious and complete utilitarian failure?
Coase basically applied the insight that value is not the same as price, and there’s no way to set a price that satisfies all the stakeholders. It’s an idea that needs to be more central to thinking about human interaction.
The French Revolution was heavily influenced by him, and we all saw how that worked out.
Can you make this a little more explicit? France is a pretty nice place—are you saying that the counterfactual world where there was no revolution would be significantly better?
Maybe you’re not freeloading on them; you’re honoring their and your comparative advantages. They’re willing to take more risks than you in who and how much to punish, and the fact that you don’t want to correct them in either direction indicates you’d rather accept their choices than try to calculate the proper amount yourself. Or maybe you _should_ be supervising more closely because they’re wrong.
How to determine which model (freeloading vs division of labor vs dereliction of duty) fits the situation is the tricky part.
Where are we with reacts? Need a :raises hand: emoji.
I worry a lot about trying to reason about very complex equilibria when looking at only one force. It’s _BOTH_ an adversarial and a cooperative game: there are (asymmetric, but usually same-sign) benefits to clear, honest communication. And even for the adversarial portions, there may be a positive sum even when one player is harmed, if other players gain more than that harm.
I can even make a model in which outsourcing the punishment (so that extra-judgey people take most of the flak for the judgment but still provide overall value) is optimal for some utility aggregation functions. I don’t currently like this model or claim it’s applicable, but it’s not obviously wrong.
It could be argued that it’s all ignorance. The die will roll the way that physics demands, based on the velocity, roll, pitch, yaw of the die, and the surface properties of the felt. There’s only one possible outcome, you just don’t know it yet. If you roll a die in an opaque cup, the uncertainty does not change in kind from the time you start shaking it to the time you slam it down—it’s all the same ignorance until you actually look.
You can, if you like, believe that there is unknowability at the quantum level, but even that doesn’t imply true randomness, just ignorance of which branch you’ll find your perceptive trail following.
Luckily (heh), Bayes’ Theorem doesn’t care. It works for updating predictions on evidence, regardless of where uncertainty comes from.
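As a minimal sketch (all numbers invented for illustration): whether your uncertainty about the die in the cup is quantum, chaotic, or plain ignorance, the update mechanics are identical.

```python
# Hypothetical question: is the die in the cup loaded toward sixes?
prior_loaded = 0.5    # P(loaded), assumed for illustration
p_six_loaded = 0.5    # P(six | loaded), assumed
p_six_fair = 1 / 6    # P(six | fair)

# Lift the cup and see a six; Bayes' theorem never asks WHY we were uncertain.
p_six = p_six_loaded * prior_loaded + p_six_fair * (1 - prior_loaded)
posterior_loaded = p_six_loaded * prior_loaded / p_six
print(f"P(loaded | six) = {posterior_loaded:.2f}")  # 0.75
```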
That’s roughly how I think of preferences. It’s absolutely possible (and, in fact, common) for humans to make choices based on things that have no perceptible existence. It’s harmless (but silly (note: I _LIKE_ silly, in part because it’s silly to do so)) to have such preferences, and usually harmless to act on them.
In the context of the OP, and world-value comparisons across distinguishable segments of universes, there is simply no impact from unrealized/undetectable preferences across those universe-segments that don’t contain any variation on that preference.
I like this direction of thought. Note that for all of these traps, success is more often a matter of improvement rather than binary change or “escape from trap”. And persistence plays a large role—very few improvements come from a single attempt.
I will admit that I find the concept of preferences over indistinguishable / imaginary universes or differences in hypothetical universes to be incoherent. One can have a preference for invisible pink unicorns, but that preference is neither more nor less satisfied by any actual-world time segment.
If you have a pointer to any literature about utility impact of irrelevant preferences, I’d like to take a look. All I’ve seen in the past is about how preferences irrelevant to a decision should not impact an aggregation result.
Your satisfaction of that preference has nothing to do with their confidence; it’s all about whether you actually find out. You could get into philosophy about what “true” even means for something you have no evidence for or against, but that’s not necessary to talk about the impact on your utility. Without some perceptible difference, your utility cannot be different.
Well, yes. Once all evidence (including any impact or detectable difference in the state of the universe) is gone, it CANNOT have a further adverse effect on utility.
Of course it can—the value of that preference is determined by what (counter)evidence is discovered when.
“take over” is a human, fuzzy concept, the exact application of which is context-dependent. And it’s still useful. Any of “determining the direction and speed the horse goes next”, “deciding whether to feed or starve the horse”, “locking the horse in a barn to prevent it being ridden” or lots of other activities can be put under the heading “taking over”.
If the details matter, you probably need to use more words.
I don’t disagree with any of this, except the implication that policy is “ours” or that it makes sense on any level. IMO, drug (and criminal) policy is a weird mishmash of moralizing, bad social-causality theory, top-down control intent, and profiteering. Logical arguments about what a good policy might be are rather irrelevant.