Dagon
I don’t think this analysis is comparing the right things. The honor system for vaccinations is worse than enforcement, as you note, but it’s probably not worse than nothing at all, which is what you actually chose.
The honor system has other bad effects, like turning some people into the honor police, who go further in their investigating than others would like, and implying that disagreement or inability to comply is “dishonorable”. But in pure effectiveness terms, it’s above zero.
Part of my response is “this is very context-dependent”, and that is overwhelmingly true for a group house or book club. Alice can, of course, leave either one if she feels Bob is ruining her experience. She may or may not convince others to kick Bob out if he doesn’t shape up, depending on the style of group and charter for formal ownership of the house.
She’d be far better off, in either case, being specific about what she wants Bob to do differently, rather than just saying “work harder”.
[note: I don’t consider myself Utilitarian and sometimes apply No True Scotsman to argue that no human can be, but that’s mostly trolling and not my intent here. I’m not an EA in any but the most big-tent form (I try to be effective in things I do, and I am somewhat altruistic in many of my preferences).]
I think Alice is confused about how status and group participation work. Which is fine, we all are—it’s insanely complicated. But she’s not even aware of how confused she is, and she’s committing a huge typical-mind fallacy in telling Bob that he can’t use her preferred label “Utilitarian”.
I think she’s also VERY confused about the sizes and structures of organizations. Neither “the Effective Altruist movement” nor the “rationalist community” is a coherent structure in the sense she’s talking about. Different sites, group homes, companies, and other specific groups CAN make decisions about who is invited and what behaviors are encouraged or discouraged. If she’d said “Bob, I won’t hire you for my lab working on X because you don’t seem to be serious about Y”, there would be ZERO controversy. That is useful, clear communication. When she says “I don’t think you should call yourself Utilitarian”, she’s just showing herself to be insecure and controlling.
Honestly, the most effective people (note: distinct from “hardest working”) in sane organizations do have the most respect and influence. But that’s not binary, and it’s not what everyone is capable of or seeks. MOST actual humans are members of multiple groups and have many terms in their imputed utility function. How much of one’s effort to devote to any given part of life falls on a pretty wide continuum.
I did a lot of interviewing and interview training for a former large employer, and an important rule (handed down through an oral tradition because it can’t really be written down and made legible) was “don’t hire jerks”. I’d rather work with Bob than Alice, and I’m sad that Alice probably won’t understand why.
Interesting thought, but use of randomness in adversarial games is a very old idea, and applies to CDT just as well as other decision theories. It IS part of strategies to defeat prediction, but it’s not Newcomb-like.
I apologize for speculating about the motte-and-bailey usage of this framing. It pattern matches to other things I’ve seen where someone tries to generalize a relationship between evidence and truth in unclear ways, and then applies it to political topics where the actual propositional truth is far removed from the debate over framing and preferences. I have no reason to believe that your intent was anything but good.
My discomfort remains, in that you don’t make clear what types of “truth” you’re talking about, nor acknowledge that different experiences do lead to different predictions of future experiences, with a lot of truths being not objective, or at least not resolvable by individual humans.
The 6 vs 9 cartoon seems like the obvious case where the participants don’t have access to the cartoonist or the person/process which created the figure on the ground. They could acknowledge that the shared truth is only that there’s a pattern visible in that shape, and that it could be interpreted as a 6 or a 9 depending on context. They CANNOT state that there is any objective truth to “what it is” or “what it is supposed to be”.
And that generalizes—it’s not clear that there exists any “objective truth” at human perception levels. All of human experience is so far abstracted and modeled by one’s brain that the underlying quantum field interactions are averaged out and imperceptible. A lot of these sums and averages can be quite confident—the likelihood that tomorrow will contain a set of particles forming a lumpy sphere spinning in a very similar way as today’s Earth is pretty close to 1. But not exactly 1 - there’s always an epsilon. Maybe the simulation ends. Maybe we’ve missed something in our model of physics. Maybe something perfectly normal but very unlikely happens (like a collision with a very large, very fast object coming from outside the solar system). All vanishingly unlikely, but not actually impossible.
“Cogito ergo sum” is tautological, so absolutely true. But it’s not objective—it doesn’t prove anything (or anyone) else.
I appreciate the effort, but I’m not sure this is a good fit for LessWrong. It seems to be using “truth” and “belief” in ways that aren’t formally defined, and doesn’t seem to be aware of Bayes’ Rule or direct math treatments of evidence and uncertainty.
I can’t tell, but it feels a bit like a prelude to some motte-and-bailey about “truth” being applied to models and generalizations, which are neither true nor false in any rigorous sense, only applicable or useful in some cases.
For a fixed, known, negative-expectation game, no betting strategy can change the mean outcome. It CAN change the distribution of outcomes, including the median and modal outcome. Well, depending on how you model the overall game (multiple iterations). One common analysis is “play until you win X or lose Y”, sometimes “or Z iterations”, but that makes generalization much harder, and is irrelevant most of the time.
In this approach, repeated Martingale bets of less than X are simply worse than a Martingale starting at X, because your “bet handle” (the total money put at risk, including re-bet winnings) is higher.
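A quick illustrative simulation of the “mean stays negative, distribution reshaped” point (a minimal sketch, assuming an even-money game with a roulette-style edge; all parameters are arbitrary): the median session comes out ahead because the rare blow-ups carry nearly all of the loss.

```python
import random
import statistics

def martingale_session(p_win=18/38, base=1, max_rounds=200, bankroll=1000):
    """Play an even-money game with win probability p_win, doubling the
    stake after each loss (classic Martingale), until the round limit or
    the bankroll runs out. Returns the session's net profit."""
    money = bankroll
    stake = base
    for _ in range(max_rounds):
        stake = min(stake, money)   # can't bet more than we have left
        if stake <= 0:
            break
        if random.random() < p_win:
            money += stake
            stake = base            # reset to the base stake after a win
        else:
            money -= stake
            stake *= 2              # double the stake after a loss
    return money - bankroll

results = [martingale_session() for _ in range(20_000)]
print("mean  :", statistics.mean(results))    # negative, as the house edge dictates
print("median:", statistics.median(results))  # typically positive: many small wins, rare big losses
```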
Much more interesting are positive-value bets, where you get to use logarithms. There’s a fair bit of LW discussion under the tag https://www.lesswrong.com/tag/kelly-criterion .
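For reference, the Kelly fraction itself is a one-liner; this is just my own illustrative sketch, not anything taken from that tag:

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that wins with
    probability p and pays b-to-1 (the stake is lost otherwise).
    A non-positive result means there's no edge: don't bet."""
    return p - (1 - p) / b

print(kelly_fraction(0.60, 1.0))   #  0.2 -> stake 20% of bankroll on a 60% even-money edge
print(kelly_fraction(0.45, 1.0))   # -0.1 -> negative expectation, so the optimal bet is zero
```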
Thanks for discussing the topic and showing how it could work. I remain skeptical, but very much look forward to seeing reports of success or real-world transactions that this tool enabled.
I’m going to bow out at this point—feel free to respond or further explain, and I’ll gratefully read and learn, but probably won’t reply further.
Yeah, part of it was the selection for recency of vote, even on old comments—a positive-total comment from the past got some new downvotes, and that triggered the throttle.
That’s probably a flaw that shouldn’t result in rate-limiting (which reduces NEW posts, not old ones, obviously), but my main point is that the imperfect implementation is still pretty good.
I’m glad you’re continuing to refine it, but I don’t want it removed entirely or reworked from the ground up.
Well, NOW I’m confused.
No trust in the other party is required to use this tool,
It requires trust that they’ll honor the results of the tool and boycott any renegotiation or further contact if outside-tool negotiations are attempted. It’s not clear (to me) how much that trust differs from the trust needed to negotiate “normally”.
I got rate-limited a few weeks ago for a small number of strong downvotes on a single comment. I blame the over-indexing on strong-votes, and still overall support the system. It DOES have some false-positives, but there is a real problem with otherwise-valuable posters getting caught in a high-volume useless back-and-forth, making the entire post hard to think about.
Rate throttling is a transparent, minimally-harmful, time-limited mechanism to limit that harm. It makes mistakes, and it’s annoying when one disagrees with votes. But I don’t know of a better option.
So this strategy fails against people that keep their word.
I guess. But I don’t know of any real-world transactions where it’s expected that people keep their word on something like this. And if there IS that enforcement/trust available, then the MUCH simpler “both write down our best acceptable price, we’ll split the difference if a deal is possible” seems to be just as effective.
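That simpler mechanism fits in a few lines; a hypothetical sketch (the function name and numbers are mine, purely for illustration):

```python
def split_the_difference(buyer_max, seller_min):
    """Both sides privately write down a reservation price. If the buyer's
    maximum meets or exceeds the seller's minimum, the deal closes at the
    midpoint; otherwise there's no deal."""
    if buyer_max >= seller_min:
        return (buyer_max + seller_min) / 2
    return None

print(split_the_difference(120, 100))  # 110.0
print(split_the_difference(90, 100))   # None: no overlap, no deal
```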
I know of no cases of one-shot negotiations that fit this model. The obvious exploit is to lie and then negotiate “normally” if the tool fails to make a deal in your favor.
I suspect the “fair price reporting” mechanism depends on exactly the same social judgement that would make standard negotiations work.
That said, I appreciate the effort and thought put into identifying which elements of dealing are painful, and I hope to be proved wrong.
Indeed! The primary reason to have an auction rather than a fixed-price first-come-first-served offer is to FIND the buyer who values it highest. This selection effect guarantees the price will be higher than others are willing to pay.
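A toy illustration of that selection effect, assuming (purely for the example) that buyers’ valuations are independent uniform draws:

```python
import random
import statistics

random.seed(0)
valuations = [random.uniform(0, 100) for _ in range(20)]  # 20 potential buyers
print("average willingness to pay:", round(statistics.mean(valuations), 1))
print("highest willingness to pay:", round(max(valuations), 1))  # the buyer the auction finds
```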
Do you have any statistics on the impact of such laws (on the rate of intervention, or on the prosecution of bystanders)? I wonder if many of them are wishful thinking or historical preference, and not actively used in modern times. The Good Samaritan versions do have good effects, in making people aware that they won’t be punished for trying to help, but the penalties for failing to intervene seem likely to be toothless.
I wonder if the combination of duty-to-help and Good Samaritan laws makes “flip the switch” legally acceptable (or mandatory!) for the trolley problem.
Likewise WA State. It has well-publicized Good Samaritan laws (including mention in first aid classes and some PSAs), but the offense for failing to render aid applies only to police officers. There is a “failure to summon aid” offense that seems to apply to all citizens, but I’d NEVER heard of it until now, even with fairly large news stories about exactly this.
[Didn’t downvote—it’s already negative. But I’d like to explain why I think it should be negative.]
I don’t think “stupid” is a useful descriptor in this context, and this post does nothing to explain or understand what elements of decision or intent we should be looking at. I can’t tell what is being said, nor what definitions would become apparent if you taboo’d “stupid” and “intelligent”.
There are certainly good ways to ask such a question with reasonable motivation to get it right. You could include it in a 5-question quiz, for instance, and say “you get paid only (or more) if you get 3 correct”. And then vary and permute the questions so nobody can use 20 accounts to do the same task without separately answering the questions.
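A rough sketch of that scheme (the question bank, threshold, and payout are all hypothetical):

```python
import random

QUESTION_BANK = [f"q{i}" for i in range(20)]   # stand-ins for 20 vetted questions

def build_quiz(n_questions=5):
    """Draw a fresh, randomly ordered quiz per respondent, so one answer key
    can't simply be reused across many accounts."""
    return random.sample(QUESTION_BANK, n_questions)

def payout(num_correct, threshold=3, reward=5.00):
    """Pay only if enough answers are right, giving a motive to actually try."""
    return reward if num_correct >= threshold else 0.0

print(build_quiz())
print(payout(4))   # 5.0
print(payout(2))   # 0.0
```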
But that’s expensive and time-consuming, and unless the paper specifies it, one should assume they did the simpler/cheaper option of just paying people to answer.
[separate comment, so it can get downvoted separately if needed]
Regardless of reliability or accuracy of the distribution of results, it’s clear that many, perhaps most, living humans are not rationally competent, most of the time. A lot of us have some expertise or topics where we have instincts and reflective capacity to get good outcomes, but surprisingly few have the general capability of modeling and optimizing their behaviors and experiences.
I’d expect that this would be front-and-center of debates about long-term human flourishing, and what “alignment” even means. The fact that it’s mostly ignored is a puzzle to me.
I don’t think you’ve shown that “mandatory + not checked” is worse than “optional” for this case. Presumably there’s a nonzero positive impact from explicitly stating that members are attending under the expectation that everyone is vaxxed, even if it’s not verified.