>> Why take the risk of this escalating into a seriously negative sum outcome?
>> Because if someone does that to you (walking up to you and insulting you to your face, apropos of nothing), then the value to them of the situation’s outcome no longer matters to you—or shouldn’t, anyway; this person does not deserve that, not by a long shot.
I think I’m with Wei_Dai on this one—insulting me to my face, apropos of nothing, doesn’t change my valuation of them very much. I don’t know the reasons for such behavior, but I presume it’s rooted in fear or pain, and I deeply sympathize with those reasons for unpleasant, unreasoning actions. Part of my reaction is that it’s VERY DIFFICULT to insult me in any way I won’t just laugh off as absurd, unless you actually know me and are targeting my personal insecurities.
Only if it’s _NOT_ random and apropos of nothing am I likely to feel that there are strategic advantages to taking a risk now to prevent future occurrences (per your Schelling reference).
Note also that displayed totals are misleading—this is definitely not one vote per reader. A vote can be worth anywhere from 1 to over 10 points, depending on the karma total of the voter and whether it’s a “strong” vote. For totals below 30 or so, it’s mostly noise rather than signal—this is 6-8 votes out of possibly hundreds of readers.
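To make the arithmetic concrete, here’s a toy sketch (the specific weights are made up, not the site’s actual table):

```python
# Toy illustration with hypothetical vote weights: a displayed total near 30
# can come from just a handful of voters once votes are weighted.
votes = [10, 8, 5, 3, 2, 1, 1]  # e.g., two strong votes from high-karma users
print(f"{sum(votes)} karma from {len(votes)} voters")  # -> 30 karma from 7 voters
```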
One point in favor of biasing toward non-exceptions (I still won’t say “none at all”) is that some parts of me are adversarial with the parts who are identifying rules. They can be very persuasive that this is a time for an exception, so it makes sense to have a pretty high bar (mostly: it contradicts another rule in the same ring) for making exceptions.
I don’t think I can pass an ITT for strict lawful thinking. I’m absolutely supportive of discovering and creating summaries of future decision intent, and of being somewhat rigorous in doing so. But I can’t ignore the fundamental complexity of the real world, and the fact that these are ONLY extremely compressed expressions of a set of beliefs.
I may be stuck in toolbox thinking, though I’ll definitely use lawful models as some of my tools. Or I may simply not be smart enough to identify and make legible the incredible variety of decisions I face over time. Rules (and habits, which are basically unconscious rules) make this tolerable, as I can spend very little energy on most of them. But there are daily choices where I see conflicts among rules and have to choose among rules that might apply, and also among the meta-rules to pick the right rule, and meta-meta-rules to weigh across different meta-rules, etc.
I kind of wonder if we have actually different felt experiences on the topic. I can only think of stated rules as porous and directional, and I feel good when I violate one for a good purpose. Take that, over-simplistic, condescending non-agent worldview! I also feel good when I recognize a new context in which a rule applies and find that the rule is stronger than I previously thought, so I’m not anti-rule in general; it’s just that I think rules are a convenience rather than a truth.
I’ve talked with other people who are horrified when they find a case that an accepted rule interferes with doing the best thing, and work hard to reconcile the situation with patches or meta-rules (and get angry when I use the word “rationalization”). They seem to feel near-physical pain from violating (some) rules without a lot of justification. I have sometimes been guilty of thinking they just need to find the right Manic Pixie Dream Person (and even worse, sometimes think it should be me) to break them out of the bonds of propriety, but I also wonder if there’s something deeper in the way the world actually feels day-to-day to them and to me.
Alternate approach: recognize that rules (as opposed to physical laws) are always and only guidelines, or defaults, or lossy summaries of one’s intent. There’s no such thing as a complete and consistent ruleset, and even if we could get close, it wouldn’t fit in our brains.
Rules are like models: none are true (none are fully binding or complete descriptions of desired behavior), many are useful (in that they can give good defaults and heuristics for common cases, where deeper computation is undesirable or infeasible).
There are no real rules. Exceptions may be fiction, but that’s because rules are fiction in the first place. Rules don’t exist in the territory, they’re just fuzzy areas on maps.
It’s not a perfect example for the topic, as the actual reason for the advice is to avoid ANY ongoing monetary commitment for as long as possible. No payroll, no vendor contracts, nothing that creates a monthly “nut” that drains your capital before you have actual revenue streams.
It’s also the case that employees come with agency alignment problems, but that’s secondary.
Yes, I did take this a bit too harshly. Maybe not an attack, but a criticism/objection that I feared would be discouraging more than helpful. I may have over-indexed on it a bit because it’s something I am working on for myself, and I may have projected it onto you. My natural tendency is to focus on complexity and difficulty that will have to be resolved, rather than supporting and reinforcing the good parts, and I apologize if I misattributed your suggestion.
I doubly apologize that I did exactly the thing to you that I feared you were doing to OP—your suggestion is good that consistency might be important for the effect, and the warning is also good that losing the randomization may let more outside correlations creep in.
There are a lot of mechanisms one could hypothesize and test if there is, indeed, any significant effect to explain/refine. Doing an initial simple experiment to show that something is here and it’s worth diving deeper (to figure out timing, dosage, multi-day effects, etc.) is the right thing, and I applaud the poster for writing down their plan before starting.
Can you give some pointers to “philosophy” as a community? It feels like a type mismatch to compare a bunch of message boards and blogs (‘rationalist community’) to an academic pursuit (‘philosophy’).
I wonder if this post and thread are conflating multiple meanings of “truth-seeking” in a way that causes confusion. My version of rationality is about truth-seeking in terms of my beliefs about the world, and my processes (including hidden ones) for selecting the best model for any given decision. Influence over future experiences is the truth I’m seeking.
Academics (including scientists and philosophers) are “truth-seeking” in a much more theoretical sense, looking for consistent descriptions of parts of the world (or sometimes other imagined worlds), and in getting agreement (or at least publication references) on such.
Each observes and learns from the other, of course, but they’re not really all that similar. I think of rationality as engineering more than science.
This is presented in such a way that I suspect something deeper than the question (which I tried to answer straightforwardly) is bugging you about comment threads for a specific type or topic (or perhaps author or style) of post.
You imply that people are upset or think something’s wrong, but I don’t actually know whether you agree, nor what information you’re really looking for with this question. This may be one of those (surprisingly common, and anathema to typical rationalist-wannabe nerds like myself) cases where you really can’t start with general solutions to specific problems, even if the problems look very similar and ought to have a shared cause and repeatable solution. You have to address an actual real instance of the problem half a dozen to a few thousand times before those useful generalizations can be made.
For me, accumulated karma is mostly an indicator of how long someone’s been here and how much they’ve participated. Common usage seems to be mostly upvotes; downvotes aren’t rare, but a pretty neutral comment is likely to get 2-10 karma, and only a pretty bad one gets into the negative range. And posters who routinely get downvoted (for whatever reasons) likely either change or leave, so there’s a strong selection toward an expectation of more upvotes than downvotes.
I find the karma changes on a comment I make somewhat useful—mostly they indicate how popular the post I’m commenting on is, but secondarily they give me a sense of whether I’m commenting on the points that most readers find salient in the post.
I’ll admit that votes carry more emotional weight than I want them to—I know they’re meaningless internet points, and a rather noisy signal of popularity, but it still feels nice when something gets more upvotes than normal, and hurts a bit when I’m downvoted.
Be careful with unstated assumptions about belief aggregation. “the discourse” doesn’t have beliefs. People have beliefs, and discourse is one of the mechanisms for sharing and aligning those beliefs. It helps a lot to give names to people you’re worried about, to make it super-clear whether you’re talking about your beliefs, your current conversational partner’s beliefs, or beliefs of other people who hear a summary from one of you.
If Alice discourages Bob from saying X, then Charlie might go on believing not-X. This is a very different concern from Bob being worried about believing a false not-X if not allowed to discuss the possibility. Both concerns are valid, IMO, but they have different thresholds of importance and different trade-offs to make in resolution.
I’m not sure we have much evidence on whether actual prediction markets reliably benefit from an influx of new participants. I suspect it’s as complicated as other endeavors: it’ll depend on the selection and expectations of those new people, and how much training and/or accommodation they’ll need.
In my company, we often talk about a “maximum team onboarding rate”: how quickly we can bring new team members up to productivity while retaining our team goals and culture. We do pretty reliably grow in scope, but not unboundedly, and not without quite a bit of care in the selection and grooming of new members.
Ehn. “excessive” is doing a LOT of work here, and needs to be expanded for this to make sense. I’m with you if you mean “apologies more intricate than the crime”, and disagree if you mean “apologizing often”. Learning to make (and to receive) a good apology is a useful skill in interpersonal relationships.
Apologies communicate knowledge of harmful behavior, ideally in a way that lets the victim understand and get closure on the incident. They help in reducing attribution bias (where people assume you’re a jerk, rather than a fallible human). They make it clear that it’s a behavior you’d rather not have people copy.
Especially if it’s uncomfortable to admit your imperfections, you will be biased against making apologies unless you see clear benefit, rather than making apologies unless you see harm. It’s FAR too easy to be over-cautious instead of under-cautious in this. And even worse when status games start playing into it (apologize upward, ignore downward to reinforce a position rather than to communicate knowledge of harm and future intent-not-to-harm).
There are certainly apology-like behaviors that I’ll recommend against—passive-aggressive “I’m sorry my legitimate behavior is unpleasant for you” and defensive affirmation-seeking “I’m sorry! Please tell me it’s OK and you like me!”. But these objections are about the integrity of the apology, not excess quantity.
Put me down as “yay genuine apologies!”
I think I missed the indirection required to use Löb’s theorem (which I thought was about not being able to prove that a statement is unprovable, not about proving false things—that is, for our formal systems, we accept incompleteness but not incorrectness).
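(For reference, here’s my compressed statement of the theorem, in standard provability-logic notation; nothing here is specific to the post:)

```latex
% Löb's theorem, where \Box P abbreviates "PA proves P":
%   if  PA \vdash \Box P \to P,  then  PA \vdash P
% or, as a single schema inside the logic:
\Box(\Box P \to P) \to \Box P
```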
Mainly, I don’t see where the actual proposition is set up in your “proof”—the knowledge that you’re choosing between $5 and $10, not between $5 and “something else, which we are allowed to assume is $0”. If you don’t know (or your program ignores) that you will be offered $10 iff you reject the $5, you’ll OF COURSE take the $5. You can test this in real humans: go offer someone $5 and don’t say anything else. If they turn it down, congratulate them on their decision-making and give them $10. If they take the $5, tell them they fell victim to the 5-10 problem and laugh like a donkey.
FactorialCode’s comment _is_ an application that shows the problem—it’s clearly “among things I can prove, pick the best”, not “evaluate the decision of passing up the $5”. I’d argue that it’s _ALSO_ missing the important part of the setup—you can prove that taking $10 gives you $10 as easily as that taking $5 gives you $5, and the “look for a proof” step is handwaved (better than absent) without showing why the proof search surfaces one lemma rather than the other.
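To illustrate what I mean about the proof search doing all the work, here’s a toy sketch (my construction, not from the post or from FactorialCode; the “prover” is stubbed out as a list of already-found lemmas):

```python
# "Among things I can prove, pick the best": the agent only compares actions
# whose payoffs appear among proven lemmas, so its choice is hostage to
# which lemmas the proof search happens to surface.

def naive_agent(proved_lemmas):
    """proved_lemmas: (action, proven_payoff) pairs. Unproven actions are invisible."""
    return max(proved_lemmas, key=lambda lemma: lemma[1])[0]

# If the search proves both honest lemmas, the agent behaves sensibly:
print(naive_agent([("take_5", 5), ("take_10", 10)]))  # -> take_10

# If it instead surfaces the spurious Löbian lemma "take_10 -> $0"
# before the honest one, $5 wins:
print(naive_agent([("take_5", 5), ("take_10", 0)]))   # -> take_5
```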
Not always—every once in a while you’ll find a solution that is a Pareto improvement: better on all dimensions than the next-best alternative. Those are good days!
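(In case it’s useful, a minimal check of the definition; my illustration, with made-up scores:)

```python
# A Pareto improvement: at least as good on every dimension, strictly better
# on at least one. Each option is a tuple of scores, higher is better.

def pareto_dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

print(pareto_dominates((3, 5, 2), (2, 5, 1)))  # True: a good day, no tradeoff
print(pareto_dominates((3, 5, 2), (2, 6, 1)))  # False: a genuine tradeoff
```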
One reason that most of your actual decisions involve tradeoffs is that easy no-tradeoff decisions get made quickly and don’t take up much of your time, so you don’t notice them. Many of the clear wins with no real downside are already baked into the context of the choices you’re making. For the vast majority of topics you’ll face, you’re late in the idea-evolution process, and the trivial wins are already in there.
Agreed, and I’d be more specific with the modifications:
“Maybe I’m the wrong one” → “Maybe my approach is not the optimal one” → “Maybe there are dimensions of optimization (like solution search costs or budget justification) that I’m weighting differently from my boss.”
“I trust my partner to be cooperating with me” → “I trust my partner (and am willing myself) to spend a bit of effort in finding the causes of disagreement, not just arguing the result”
And it goes both directions—be honest with yourself about which dimensions you’re weighting more heavily than others for the decision, and which optimization outcomes might be different for you than for your customers and boss. A clear discussion about the various impacts and their relative importance can be very effective (true in some companies/teams, not in others; in some places, you have to either trust that the higher-ups are having these discussions, convince yourself that the decisions aren’t intolerable, or seek work elsewhere).
On the object level, I write software that supports lots of IoT devices, some using Linux, some Android, some FreeRTOS, some Windows-ish, and a whole lot of “other”—microcontrollers with no real OS at all, just a baked-together event loop in firmware built very specifically for the device. There are very good reasons to choose any of them, depending on actual needs, and it’s simply incorrect to say that Android is terrible for IoT. Very specifically, if you want a decent built-in update mechanism, want to support a lot of off-the-shelf I/O and touchscreens, and/or need some kinds of local connectivity (Bluetooth audio, for instance), Android’s a very solid choice over Linux.
I think this is strawmanning the appeal-to-consequences argument, by mixing up private beliefs and public statements, and by ending with a pretty superficial agreement on rule-consequentialism without exploring how to pick which rule (among one for improving private beliefs, one for sharing relevant true information, and one for suppressing harmful information) applies.
The participants never actually attempt to resolve the truth about puppies saved per dollar, calling the whole thing into question—both whether their agreement is real and whether it’s the right thing. Many of these discussions should include a recitation of [ https://wiki.lesswrong.com/wiki/Litany_of_Tarski ], and a direct exploration of whether it’s beliefs (private) or publication (impacting presumed-less-rational agents) that is at issue.
In any case, appeals to consequences at the meta/rule level still HAVE to be grounded in appeals to consequences at the actual object level. A rule with so many exceptions that it’s mostly wrong is actively harmful. My objection to the objection to “appeal to consequences” is that the REAL objection is to bad epistemology of consequence prediction, not to the desire to predict consequences.
In a completely separate direction, consequences of speech acts in public/group settings are WAY more complicated than epistemic consequences of a truth-seeking discussion among a small group of fairly close rationalist-inclined friends. Both different rules/defaults/norms apply, and different calculations of consequences of specific speech actions are made.
All that said, I prefer norms that lean toward truth-telling and truth-seeking, and it makes me suspicious when that is at odds with the consequences of speech acts. I demand a higher standard of evidence from my consequence predictions to justify lying than to justify withholding relevant facts, and a higher standard for withholding than for plain truth-telling.
“Paying utility” in this kind of analysis means to undertake negative-utility behaviors outside the game we’re analyzing, in order to achieve better (higher-utility) outcomes in the area we’re discussing. The valuation / bargaining question is about how to identify how important the game is relative to other things.
For simple games, it’s often framed in dollars: “how much would you pay to play a game where you can win X or lose Y with this distribution?”, where the amount you’d pay is the value of the game (and it’s assumed, but not stated nearly often enough, that the range of outcomes is small enough to be roughly linear in utility for you).
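A worked toy example (the numbers are mine, purely illustrative): a game that wins $100 with probability 0.3 and loses $20 otherwise, priced at expected value under the roughly-linear-utility assumption above:

```python
# Risk-neutral price of a simple game = its expected dollar value.
p_win, win, lose = 0.3, 100, 20
value = p_win * win - (1 - p_win) * lose  # 0.3*100 - 0.7*20 = 16.0
print(f"Value of the game: ${value:.2f}")  # -> $16.00
```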
I think this writeup gets a little confusing in not being very explicit about when it’s talking about an agent’s overall utility function, and when it’s talking about a subset of a utility function for a given game. There is never a “willingness to pay” anything that reduces overall utility. The question is willingness to pay in one domain to influence another. This willingness is obviously based entirely on maximizing overall utility.
A useful technique in this (whether formally double-cruxing or just in trying to get agreement on big group decisions) is to narrow the scope of the disagreement, so you can stay with concrete outcomes of the discussion. Don’t try to resolve whether minimal presentation or high information density is better as a paradigm in general. Do try to resolve, for the anticipated common (and uncommon but important) uses of our product, what range of cognitive expectations we should cater to, and how we can meet the needs of the entire (or at least the demand-weighted bulk of the) audience.