# ProofOfLogic

Karma: 121
• Not exactly.

(1) What is the family of calibration curves you’re updating on? These are functions from stated probabilities to ‘true’ probabilities, so the class of possible functions is quite large. Do we want a parametric family? A non-parametric family? We would like something which is mathematically convenient, looks as much like typical calibration curves as possible, but which has a good ability to fit anomalous curves as well when those come up.

(2) What is the prior over this family of curves? It may not matter too much if we plan on using a lot of data, but if we want to estimate people’s calibration quickly, it would be nice to have a decent prior. This suggests a hierarchical Bayesian approach (where we estimate a good prior distribution via a higher-order prior).

(3) As mentioned by cousin_it, we would actually want to estimate different calibration curves for different topics. This suggests adding at least one more level to the hierarchical Bayesian model, so that we can simultaneously estimate the general distribution of calibration curves in the population, the all-subject calibration curve for an individual, and the single-subject calibration curve for an individual. At this point, one might prefer to shut one’s eyes and ignore the complexity of the problem.
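To make (1) and (2) concrete, here is a minimal sketch of the one-level version (ignoring the hierarchical structure in (3)). It assumes a two-parameter logistic family f(p) = sigmoid(a · logit(p) + b) and a uniform grid prior over (a, b) — the family, the grid, and the synthetic data are all illustrative assumptions, not a settled choice:

```python
import math
import random

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def calibration_curve(p, a, b):
    """Map a stated probability p to a modeled 'true' probability.

    a < 1 corresponds to overconfidence (stated extremes are too extreme);
    a > 1 to underconfidence; b shifts the curve up or down.
    """
    return sigmoid(a * logit(p) + b)

def posterior_over_curves(data, a_grid, b_grid):
    """Uniform grid prior over (a, b); Bernoulli likelihood per prediction."""
    log_post = {}
    for a in a_grid:
        for b in b_grid:
            ll = 0.0
            for p, outcome in data:
                q = calibration_curve(p, a, b)
                ll += math.log(q if outcome else 1 - q)
            log_post[(a, b)] = ll
    # Normalize in a numerically stable way.
    m = max(log_post.values())
    weights = {k: math.exp(v - m) for k, v in log_post.items()}
    z = sum(weights.values())
    return {k: w / z for k, w in weights.items()}

# Simulate an overconfident forecaster (true a = 0.5, b = 0).
random.seed(0)
true_a, true_b = 0.5, 0.0
stated = [0.6, 0.7, 0.8, 0.9]
data = [(p, random.random() < calibration_curve(p, true_a, true_b))
        for p in stated * 100]

a_grid = [0.25 * i for i in range(1, 9)]   # 0.25 .. 2.0
b_grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
post = posterior_over_curves(data, a_grid, b_grid)
a_mean = sum(a * w for (a, b), w in post.items())
```

With enough predictions, the posterior mean of a should land well below 1, flagging the forecaster as overconfident. The hierarchical version in (2) and (3) would replace the uniform grid prior with one estimated across people and topics.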

• First I don’t think conflating blame and “bad person” is necessarily helpful.

OK, yeah, your view of blame as social incentive (skin-in-the-game) seems superior.

The most common case is what is traditionally called “being tempted by sin”, e.g., someone procrastinating and not doing what he was supposed to do.

I agree that imposing social costs can be a useful way of reducing this, but I think we would probably have disagreements about how often and in what cases. I think a lot of cases where people blame other people for their failings are more harmful than helpful, and push people away from each other in the long term.

And don’t get me started on situations where most of the participants are only there for a paycheck, a.k.a., the real world.

It sounds like we both agree that this is a nightmare scenario in terms of creating effective teams and good environments for people, albeit common.

However, even when the primary motive is money, there’s some social glue holding things together. I recommend the book The Moral Economy, which discusses how capitalist societies rely to a large extent on the goodwill of the populace. As mutual trust decreases, transaction costs increase. The most direct effect is the cost of security; shops in different neighborhoods require different amounts of it. This is often cited as the reason the diamond industry is dominated by Hasidic Jews; they save on security costs due to the high level of trust they can have as part of a community. Some of this trust comes from imposing social costs, but some of it also comes from common goals of the community members.

The Moral Economy argues that the lesson of the impossibility theorems of mechanism design is that it would not be possible to run a society on properly aligned incentives alone. There is no way to impose the right costs to get a society of selfish agents to behave. Instead, a mechanism designer in the real world has to recognize, utilize, and foster people’s altruistic and otherwise pro-social tendencies. It has also been shown empirically that designing incentives as if people were selfish tends to make people act more selfishly in many cases.

So, I will try to watch out for blame being a useful social mechanism in the way you describe. I’m probably underestimating the number of cases where imposed social costs are useful precisely because they don’t end up being applied (i.e., implicit threats). At present I still think it would be better if people were both less quick to employ blame, and less concerned about other people blaming them (making more room for self-motivation).

• Well, yes, and I think that’s mostly unfortunate. The model of interaction in which people seek to blame each other seems worse—that is, less effective for meeting the needs and achieving the goals of those involved—than the one where constructive criticism is employed.

The blame model goes something like this. There are strong social norms which reliably distinguish good actions from bad actions, in a way which almost everyone involved can agree on. These norms are assumed to be understood. When someone violates these norms, the appropriate response is some form of social punishment, ranging from mild reprimand to deciding that they’re a bad person and ostracizing them.

The constructive criticism model, on the other hand, assumes that there are some common group goals and norms, but different individuals may have different individual goals and preferences, and these might not be fully known, and the group norms might not be fully understood by everyone. When someone does something you don’t like, it could be because they don’t know about your preferences, they don’t know about a group norm, they don’t understand the situation as well as you and so fail to see a consequence of an action which you see, etc. Since we assume that people do have somewhat common goals, we don’t have to enforce norm violations with punishment—by default, we assume people already care about each other enough that they would have respected each other’s wishes in an ideal situation. Perhaps they made a mistake because they lacked a skill (which is where the constructive feedback comes in), or didn’t understand the situation, your preferences, or the existing norms. Or, perhaps, they have an overriding reason for doing what they did. Social punishment (even the mild social punishment associated with most cases of blame) often doesn’t fix anything and may make things worse by escalating the conflict or creating hard feelings.

If you discuss the problem and find that they didn’t misunderstand or lack a necessary skill or have an overriding reason that you can agree with, and aren’t interested in doing differently in the future, then perhaps you don’t have enough commonality in your goals to interact. This is still different from the blame model, where sufficiently bad violations mark someone as a “bad person” to be avoided. You may still wish them the best; you simply don’t expect fruitful interactions with them.

That being said, there are cases where you might really judge someone to be a “bad person” in the more common sense, or where you really do want to impose social costs on some actions. Sociopaths exist, and may need to be truly avoided and outed as a “bad person” (although pro-social psychopaths also exist; being a sociopath doesn’t automatically make you a bad person). However, it seems to me as if most people have overactive bad-person detectors in this regard, which harm other interactions. I don’t think this is because easily-tripped bad-person detectors are on the optimal setting given the high cost of failing to detect sociopaths. I think it’s because the concept of blame conflates the very different concepts involved in cheater-detection/​sociopath-detection and situations where less adversarial responses are more appropriate.

(Response also posted back to the blog.)

• Edited to “You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame.” to try to make the point clearer.

# Chaos and Consequentialism

24 Apr 2017 20:43 UTC
3 points
(weird.solar)
• Noticing the things one could be noticing. Reconstructing the field of mnemonics from personal experience. Applied phenomenology. Working toward an understanding of what one’s brain is actually doing.

(Commenting in noun phrases. Conveying associations without making assertions.)

• These signals could be used outside of automoderation. I didn’t focus on the moderation aspect. Automoderation itself really does seem like a moderation system, though. It is an alternate way to address the concerns which would normally be addressed by a moderator.

• True, I didn’t think about the added burden. This is especially important for a group with frequent newcomers.

I try hard to communicate these distinctions, and distinctions about amount and type of evidence, in conversation. However, it does seem like something more concrete could help propagate norms of making these sorts of distinctions.

And, you make a good point about these distinctions not always indicating the evidence difference that I claimed. I’ll edit to add a note about that.

# Thoughts on Automoderation

12 Apr 2017 21:29 UTC
4 points
(medium.com)
• Very cool! I wonder if something like this could be added to a standard productivity/​todo tool (thinking of Complice here).

I think the step “how can you prevent this from happening” should perhaps add something like “or how can you work around this” instead—perhaps you cannot prevent the problem directly, but can come up with alternate routes to success.

I found it surprising that the script ended after a “yes” to “Are you surprised?”. Mere surprise seems like too low a bar. I expected the next question to be “Are you so surprised that it doesn’t seem worth planning for this eventuality?”.

Also, I accidentally typed “done.” rather than “done”, and it was entered as a step in the plan. I think it would be good if variations like that were treated as the same. And, it would be nice to be able to go back one step rather than resetting entirely.
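For what it’s worth, treating variations like “done.” and “done” as the same could be as simple as normalizing input before comparison — a hypothetical sketch (I don’t know how the script is actually structured):

```python
import string

def normalize(command):
    """Lowercase and strip surrounding whitespace and punctuation,
    so 'Done.' and ' done ' both match 'done'."""
    return command.strip().strip(string.punctuation + " ").lower()

# Example: recognizing the 'done' command despite stray punctuation.
is_done = normalize("done.") == "done"
```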

# The Mon­key and the Machine

23 Feb 2017 21:38 UTC
17 points
(sideways-view.com)
• I find this and the smoker’s lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent’s decision-making. We can perhaps suppose that (in both cases) the agent’s preferences are what is affected (by the genes, or by the physics). But then, shouldn’t the agent be able to observe this (the “tickle defense”), at least indirectly through behavior? And won’t this make it act as CDT would act?

But: I find the blackmail letter to be a totally compelling case against EDT.

• It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general but yet very informative features of advanced states of the supposed relevant kind.

Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn’t assume a good outcome. But perhaps you’re saying that we should at least have a vision of a good outcome in mind to steer toward.

And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain.

Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth) -- it means you’re not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.

Or, even more basic. Optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can’t find any solutions which pass our threshold. It’s then useful to remember that the threshold was arbitrary to begin with, and the real question is which action we prefer; what’s relatively best?
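The contrast can be made concrete with a toy example — the lotteries and the threshold below are made up for illustration. Threshold thinking can leave you with an empty “acceptable” set, while relative comparison always yields an answer:

```python
# Each lottery is a list of (probability, utility) pairs.
lotteries = {
    "A": [(0.5, 10), (0.5, 0)],   # expected utility 5.0
    "B": [(0.9, 4), (0.1, 2)],    # expected utility 3.8
    "C": [(1.0, 3)],              # expected utility 3.0
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

# Threshold thinking: nothing clears the arbitrary cutoff, so we're stuck.
threshold = 6
acceptable = {name for name, lot in lotteries.items()
              if expected_utility(lot) >= threshold}

# Relative comparison: pick the best option regardless of the cutoff.
best = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
```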

Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it’s just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it’s ignoring trade-offs by the fallacy of absolute thinking.

• I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/​knowledge ladder is steep enough that you never really feel like you’ve “made it”; there are always smarter people to make you feel dumb. So at any level, you’d better get used to asking stupid questions.

And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. Also why I’m asking.

I think it would be nice if someone wrote a post on “visceral comparative advantage” giving tips on how to intuitively connect “the best thing I could be doing” with comparative advantage rather than absolute notions. I’m not quite sure how to do it myself. The inability to be satisfied by a small niche is something that made a lot more sense when humans lived in small tribes and there was a decent chance to climb to the top.

I don’t think many people on the “front lines” as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don’t know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn’t think of now.

• The people that in the end tested lucid dreaming were the lucid dreamers themselves.

Ah, right. I agree that invalidates my argument there.

Yes, that makes sense. I don’t think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.

Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)