This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate: a prior of exactly 1 or 0 can never be updated by evidence, so holding one in a universe that then presents you with counterevidence means you’re not rational.
Which I suppose could be termed “infinitely confused”, but that feels like a mixing of levels. You’re not confused about a given probability, you’re confused about how probability works.
In practice, when a well-calibrated person says 100% or 0%, they’re rounding off from some unspecified-precision estimate like 99.9% or 10^-12.
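To make the first point concrete, here’s a minimal sketch (plain Python; the function and numbers are mine, purely illustrative) of why exactly 1 and 0 are special under Bayes’ rule:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: returns P(H|E) given P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Any prior strictly between 0 and 1 moves in response to evidence:
print(bayes_update(0.999, 0.01, 0.99))  # ~0.91 after strong counterevidence
# A prior of exactly 1 or 0 is immovable, no matter the evidence:
print(bayes_update(1.0, 0.01, 0.99))    # still 1.0
print(bayes_update(0.0, 0.99, 0.01))    # still 0.0
```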
This mapping does not match any actual decisions in blackmail. First, it’s not a simultaneous choice; it’s a branching multi-turn decision tree. Second, there are more than two actions available at various stages. Either of these alone would make prisoner’s-dilemma analysis suspect; together, they make the situation much more like multi-street, multi-bet poker than like PD.
The “victim” first makes choices (or is born into a situation) that leave them susceptible to blackmail. The blackmailer learns of this and has at least three choices: publish the information, threaten to publish, or bury the information. In the threaten-to-publish (blackmail) case, the “victim” offers incentives (which may or may not match the requested fee) to bury rather than publish, and the blackmailer chooses which action to take. Even leaving out true defection cases (accepting the money and publishing anyway, or killing the blackmailer), this is a fairly complex payout tree, and the correct choices are specific to the situation. In fact, since parts of the payout tree are unknown to one or both players, it’s likely that mixed strategies come into play, to prevent exploitation of the unknowns.
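A toy version of that tree (Python; every payoff number is invented, purely to show the branching structure rather than any claim about real stakes):

```python
# Toy payout tree for the blackmail interaction. Leaves are
# (victim_payoff, blackmailer_payoff); all numbers are made up.
payout_tree = {
    "bury": (0, 0),        # blackmailer stays quiet unprompted
    "publish": (-100, 5),  # gossip/spite value only, no payment involved
    "threaten": {          # blackmail proper: now the victim moves
        "refuse": {"publish": (-100, 5), "bury": (0, -1)},
        "pay": {
            "bury": (-20, 20),      # the intended trade
            "publish": (-120, 25),  # the take-the-money-and-publish defection
        },
    },
}
```

Even this toy version has sequential moves, hidden information, and more than two actions at several nodes, which is what pushes the analysis toward game trees and mixed strategies rather than a 2×2 matrix.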
On the “a day in hell cannot be outweighed” question, do you have any analysis of that intuition? Are you assuming that you’ll remember that day and be broken by it, or is there some other negative value you’re putting on it? Do you evaluate “a day in hell, 1000 years in heaven, then termination” differently from “1000 years in heaven, then a day in hell, then termination”? How about “a day in hell, mind-reset to prior state, then 1000 years in heaven”?
The reason I ask is that I’m trying to understand what’s being evaluated. Are you comparing value of instantaneous experience integrated over time, or are you comparing value of effect that experiences have on your identity?
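One way to formalize the two options (my notation, nothing from your comment): a pure integral-of-experience view scores a life as

$$V = \int_0^T v(e(t))\,dt,$$

which is invariant to reordering, so hell-then-heaven and heaven-then-hell come out identical and a mind-reset changes nothing. An identity-effect view scores something like

$$V = u(s(T)), \qquad \dot{s}(t) = f(s(t), e(t)),$$

where the self-state $s$ is updated by each experience; there, ordering, memory, and resets can all change the total.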
Correct. I suspect the category is too broad, and too imprecisely defined (or at least too different from things commonly measured), to have much basis OTHER than analogies and anecdotes. Fortunately, it’s also irrelevant to most of us: no action is proposed or expected as part of this discussion/debate.
[ edit: I’m embarrassed to say that until now I hadn’t even looked up the legality of it. In the US, blackmail is specifically about demanding payment for not reporting a violation of US law, not for any other topics of gossip. It’s also a relatively minor offense. Many jurisdictions just treat it as a special case of extortion, which is severe, but it’s unclear whether that’s because extortion is often violent or because blackmail is bad on its own.]
(Are trigger warnings still a thing? It occurs to me that this topic may interact badly with suicidal thoughts. Please take it only as the silly exploration of imagination-space that it is.)
I don’t give a lot of weight to the basilisk possibility—that was something of a throwaway comment.
What I meant is that if the truth is that the simulation controller is specifically interested in you as an experience-haver within the simulation, then there is no possibility of intentionally influencing the simulation. Your perceptions and your cognition will be manipulated to make you believe whatever the simulator thinks furthers their goals. Your universe of perception and possible actions simply won’t contain things that counter their goals.
And one amusing (perhaps only to me) possibility of motivation for such a personal simulation is that my current life is the worst that the attacker can imagine within the constraints that I must believe it’s real. Maybe I’m being tortured as punishment for some outside-universe crime. I find it amusing because I assign positive value to this moment of experience (which is the only thing I can be sure of), so the basilisk is being thwarted in its mission of punishing me. And also because it seems ludicrously unlikely.
[note: this subthread is far afield from the article; LW is about publication, not private thoughts (unless there’s a section I don’t know about where only specifically invited people can see things). And LW karma is far from the sanctions under discussion in the rest of the post.]
Have you considered ways to reduce the asymmetric impact of up- and down-votes? Cap karma value at −5? Use downvotes as a divisor for upvotes (say, score = upvotes / (1 + 0.25 * downvotes)) rather than simple subtraction?
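As a concrete sketch of the divisor idea (Python; the 0.25 weight is just the example value from above, not a recommendation):

```python
def karma_score(upvotes: int, downvotes: int) -> float:
    """Downvotes discount the total instead of cancelling upvotes outright."""
    return upvotes / (1 + 0.25 * downvotes)

# With simple subtraction, 10 up / 8 down gives 2.
# With division, it gives 10 / (1 + 2) = 3.33, so a pile-on of
# downvotes has bounded impact rather than driving the score negative.
print(karma_score(10, 8))
```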
In terms of informational simplicity, there’s a measurement bound at the limits of your memory of perceptions. You can’t distinguish a simulation running quantum or atomic (or even simple mechanical) rules from a tiny simulation of a point-in-time set of memories and experiences. You literally cannot know the difference (and there may be no difference; see the philosophical-zombies debate) between an NPC and a “real person”.
If it turns out that this is your personal Basilisk punishment, and you’re being tortured for not bringing forth the creator in another universe, there is no theoretical or practical way to know that.
Do you directly care about the goals of the simulator(s)? In the absence of a creator, how do you answer the same question about evolution (a goal-less optimization process)? Are your goals dependent on a creator, or independent of one?
You might not share any goals, but still care because you fear interference or termination if the simulation no longer furthers the creator’s goals. I argue that this fear is unfounded. Whether you’re a side-effect or a primary reason for the simulation, you’re still part of the environment being simulated. If the creators are bothering to simulate this level of detail, then they think this level of detail is important to their goals.
Any detectable change the creators make in the simulation (like creating or altering the path of an asteroid, tweaking human behavior, communicating with some or all subjects, etc.) reduces its value as a simulation.
It’s possible they _WANT_ to simulate a universe with no intelligent interstellar life. If so, they’d build the filters into the simulation, rather than noticing a problem and changing the code. They might notice a problem, change the code, and terminate/restart the simulation, but I can’t imagine any way to guess the things that will accelerate or prevent this from happening. obDouglasAdams: “There is another theory which states that this has already happened.”
I don’t know what my price would be, and I hope it’s too high to ever come into play. I like to think there’s no possible situation in which I’d turn someone in, and that I’d favor the individual over the mob at any cost to myself. But that’s not true for the vast majority of humans, and probably not for me either.
But we’re not talking about heroes (even if I hope I would qualify and fear I wouldn’t). We’re talking about the range of human behavior and motivation. It’s clear that social pressure _is_ enough for some people to turn others in. Medals, rewards, etc. likely increase that a little. Blackmail probably decreases it a little, as the data-holders can now get paid for keeping secrets rather than doing it in spite of incentives.
“compared to what?” should always be part of the analysis. In the examples you give (unjust persecution if private information is published), I believe you’d prefer blackmail to publication, and prefer unpaid silence to blackmail. It’s unclear what intuitions you have if there’s a social or monetary reward for turning them in. Is blackmail acceptable if it’s no more than the value of the foregone reward?
Clarified—the hypocrisy is that blackmail is prohibited but not enforced against. We claim that it’s bad, but allow it most of the time. I could argue that hypocrisy itself falls into this category (we complain about it, but don’t actually punish it) as well, but I didn’t intend to.
Many laws incorporate scaling in terms of a damage threshold or the magnitude of a single incident. We have very few laws that are explicit about scale in terms of overall frequency or the number of participants across multiple incidents. City zoning may be one example of success in this area: only allowing so many residents in an area, without specifying which ones.
There are very few criminal laws such that something is legal only when a few people are doing it, and becomes illegal if it’s too popular. Much more common to just outlaw it and allow prosecutors/judges leeway in enforcing it. I’d argue that this choice gets exercised in ways that are harmful, but it does get the job (permitting low-level incidence while preventing large-scale infractions) done.
There are some groups with which I enjoy discussing politics, and some with which I believe it is effective to do so, as it leads to decisions I make about where to live, what donations to make, how to vote (though that’s pretty small impact), whom to publicly support, etc.
In all cases, they’re relatively small groups, with enough face-to-face contact that I can estimate the levels of rationality and knowledge, and tailor the discussions to what I can learn, more than how I can convince them of something (note: many times there _is_ an adversarial tone to the truth-seeking. I think this works well in person, and very badly online).
My main resonance with those threads is that politics (at all levels, from family to office to town to nation to world) is baked into humanity, and cannot be ignored. And simultaneously, the topics can’t be abstracted or generalized enough to actually discuss dispassionately in almost any group situation. Politics is prisoner’s dilemma with participants (including myself) whose motives are unknown and inconsistent, known to be at best partially-rational. When I can play with a consistent group of identified participants, I can learn which mechanisms work for each. When I play simultaneously with a large group, which statistically will include some always-defect, I fall into the defect-for-defense pattern.
I do, in fact, get mind-killed. It’s often triggered by what I perceive as my correspondents’ mind-killing, so internally it feels like I’m playing tit-for-tat, but it’s not clear that there’s enough bandwidth to ever get back to a good equilibrium.
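For what it’s worth, the dynamic I mean is the textbook one (toy Python below; my framing, not anything from the thread): tit-for-tat opens with cooperation and then mirrors the other side’s last move, so once each side reads the other as defecting, nothing in the rule itself restores cooperation.

```python
# Toy tit-for-tat: cooperate first, then copy the opponent's previous move.
def tit_for_tat(opponent_history: list[str]) -> str:
    return "cooperate" if not opponent_history else opponent_history[-1]

print(tit_for_tat([]))                       # 'cooperate' (opening move)
print(tit_for_tat(["cooperate", "defect"]))  # 'defect' (the retaliation step)
# Two tit-for-tat players who misread each other even once will echo
# that defection back and forth, with no built-in path back to cooperation.
```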
So, please do put individual thought into it, and if you have the right groups of people, discuss it in those groups. But not here, and probably not anywhere online in large groups. In those places, just recognize that you’re there (as are most participants) to win people to your side, not to learn.
So many (including Robin) are mixing up arguments about blackmail (threat of revealing true information) with arguments about non-blackmail-motivated investigating, revealing, or concealing information (gossip) in the absence of threat and payment.
I suspect there are few or no examples of societies with the mix of personal freedoms, nonviolent dispute resolution, and economic sophistication (you need some form of liquidity for trade) in which blackmail is as significant as it is today.
In any case, hypocrisy has been around longer than history; today’s situation, where it (edit: blackmail) is prohibited but the prohibition is rarely enforced, is likely the common case.
Some things are acceptable in small quantities but unacceptable in large ones. You don’t want to incentivize those things.
This takes some unpacking. For things that are acceptable on small scales and not large ones, should we prohibit the scale rather than the act? The status quo is that blackmail is frowned upon, but not enforced unless particularly noteworthy. That bugs me from a rule-implementation standpoint, but may be ideal in a practical sense.
There’s another largely-unaddressed element to the debate: underlying freedoms of transaction and of information-handling. All of the arguments about blackmail are about it as an incentive for something—why are we not debating the things themselves? Arguments against gossip and investigation are not necessarily arguments against blackmail.
Before addressing the incentives, you should seek clarity/agreement on what behaviors you’re trying to encourage and prevent. I still have heard very few examples of things that are acceptable without money involved (investigating and publishing someone out of spite or for social one-upmanship) and that become unacceptable only because of the blackmail.
Leaving aside the question of why you believe that your preferences don’t reduce to hedonism (when considering the possibility of a preference to identify as someone whose preferences don’t reduce to hedonism)...
One partial solution is to recognize that I am not atomic. Parts of my mind have goals and knowledge that differ from other parts—it’s not a crisp separation, but it’s not a uniform belief-mass.
Which opens the path to an analogy with standard ML practice: separating your data into independent training and test sets produces models that generalize better (or at least lets you detect when they don’t) than putting everything into training, even though less data reaches the actual model. I think this does give some insight into the preference for initial ignorance in games and entertainment/practice mysteries. I don’t think it resolves all aspects of the question, of course.
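A minimal sketch of the practice being analogized (Python with scikit-learn; the dataset here is synthetic, purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data, standing in for whatever you actually care about.
X, y = make_classification(n_samples=500, random_state=0)

# Hold out 20% that the model never sees during fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # measures generalization, not memorization
```

The held-out score is trustworthy precisely because the model was “ignorant” of that data while learning, which is the property the analogy leans on.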
I think I’d say “arbitrageur” rather than “privateer”: they’re not combatants authorized to prey on an opposing state’s commerce; they’re just noticing and fixing (by taking a cut of) an information-value asymmetry. In fact, much of the debate is similar to other arbitrage prohibitions; people hate “price gouging”, “scalping”, “speculation”, and many other similar things.
These are perfectly legitimate in theory, but are based on underlying coordination failures that cause bad feelings, and they tend to cluster with not-OK behaviors (lying, artificial manipulation, interference with competitors, unsanctioned violence, etc.). It’s perfectly reasonable to look at the cluster of behaviors and decide to prohibit the lot, even though it catches some things that are theoretically acceptable.
The hypocrisy angle is interesting: many people seem to prefer that it be “prohibited, but tolerated at small scale”. I suspect we’ll face a lot of these issues as humanity becomes more densely packed and visibly interconnected; there are a LOT of freedoms and private choices that our intuition says should be allowed, but which we recognize would cause massive harm if scaled up. Currently, they’re mostly handled by hypocrisy: nominally disallowing them, then enforcing only against egregious cases. I wonder if there are better ways.
In many contexts, the primary benefit of the summary is brevity and simplicity, more even than information. If you have more time/bandwidth/attention, then certainly including more information is better, and even then you should prioritize information by importance.
In any case, I appreciate the reminder that this is the wrong forum for politically-charged discussions. I’m bowing out—I’ll read any further comments, but won’t respond.
I think I was taking “coordination” in the narrow sense of incenting people to take actions toward a relatively straightforward goal that they may or may not share. In that view, nuance is the enemy of coordination, and most of the work is simplifying the instructions, so it’s OK that not much information gets transmitted. If the goal is communication rather than near-term action, you can’t avoid the necessity of detail.