There are two distinct parts to this.
1) does competitive debate do a good job of truth-seeking (and/or can it be reformed to do so)? I’m with many commentators in suspecting the answer is no. The format is just not suited to it.
2) do some of the skills of competitive debate aid in truth-seeking outside of such debates. Probably, but I suspect those skills come along with habits and attitudes that make them less effective in truth-seeking than if they were learned elsewhere.
Good point. Note that gambling has the added difficulty in that it’s emotionally adversarial—the casino/bookie is setting the game and environment to confuse the player’s estimates of success probability and magnitude. Interviewing probably has a little of this in that many interviewers are more interested in making themselves feel smart than in hiring the candidates that will contribute most.
In any case, focusing on someone’s motivation and their perception of the distribution of successes and failures should be secondary to analysis of the real possible outcomes. For most people, there exist jobs they shouldn’t interview for. A blanket “keep trying” is unhelpful without specific analysis of expected value.
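To make “specific analysis of expected value” concrete, here is a toy comparison of two hypothetical job applications. All numbers are made up for illustration; the point is only that the same advice (“keep trying”) can be right for one opportunity and wrong for another.

```python
# Toy expected-value comparison for deciding whether to interview.
# All numbers are hypothetical illustrations, not real estimates.

def expected_value(p_success, payoff_success, cost_of_trying):
    """EV of an attempt: probability-weighted payoff minus the cost of trying."""
    return p_success * payoff_success - cost_of_trying

# Job A: long shot with a big payoff; Job B: likely, with a modest payoff.
ev_a = expected_value(p_success=0.02, payoff_success=50_000, cost_of_trying=2_000)
ev_b = expected_value(p_success=0.50, payoff_success=5_000, cost_of_trying=500)

print(ev_a)  # -1000.0 -- negative EV: "keep trying" is bad advice here
print(ev_b)  # 2000.0 -- positive EV: worth interviewing
```

The interesting work, of course, is in estimating `p_success` and the costs honestly, which is exactly where motivated reasoning creeps in.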
Hmm, I worry that motivation is only part of the picture. There’s also idiosyncrasy between agents in terms of ability and acceptance of outcome.
By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty,
True enough, though you can replace “would not” with “will not” or “did not”.
without assigning odds in the vicinity of 10x what I started out assigning that the project would work
This is not the counterfactual I’d assign the most weight to. “Without being Eliezer” is probably too specific, but “without having Eliezer’s history of rewarded iconoclasm” wouldn’t be a stretch. It’s extremely likely that you _ARE_ orders of magnitude more likely to succeed at these endeavors than the majority of people who say they’re interested in the topic.
“value” has connotations about identity and long-term worldview. “priority” could include long-term tradeoff choices, but is more often about current circumstances.
Difficulty of such predictions is why it’s called “the singularity”. Our unstated intuitions of what “value” is will break down when a sufficient number of humans become unambiguously superfluous (meaning not only is there no job for them, there’s no likely value they can provide to the AIs and human elite which justifies their CO2 and waste output).
In fact, I suspect we experience very different things in our work and social life. I do recognize that there are situations where rank is more important than value, but I have trouble imagining functioning that way for very long. As a result I forget the diversity of human experience and that many people DO experience that.
If you replace “status” with “relative social rank” in the OP do you disagree with it?
I disagree that “ordinal social rank” is a thing which matters in almost any situation. Value, esteem, and respect are great determiners of promotions, choice work assignments, etc. They are not, however, strictly a relative measure against other coworkers. It’s more a relative measure against the universe of possible employees. Which makes it more absolute than relative.
Heuristics are good. Heuristics are very good. You don’t even know how many times your heuristics have saved you. You possibly have no idea what they are saving you from.
Agreed. I want the t-shirt.
Status is, within a fixed group, a zero-sum game. People in the workplace are constantly attempting to improve their position on the ladder at the expense of others.
Oops, wrong. Status is a complex mix of different types of esteem and judgement that people apply to one another in different contexts. It’s nowhere near zero-sum or even linear.
Which is _WHY_ heuristics are good—they are learned simple responses for complex inputs. They’re not optimal for all (or perhaps any) possible applications, but they’re a likely-good-enough approach for common situations when you don’t know all the relevant details and haven’t built a more complete model.
There’s a very real question of intent and timing baked into this discussion, which we should bring forth. Entering into a contract that you don’t believe is enforceable and don’t intend to honor, while telling your counterparty that you think it’s valid, is fraud. Entering into a contract that you intend to honor, and accepting the contractual penalties even if unenforced, is best behavior. The middle ground of “found out later that I don’t accept the effects of my agreement, and it turns out to be unenforceable” is in question—my intuition is that it’s blameworthy, but there’s room for debate.
The more interesting part of this is the availability of unenforceable contracts. Where surrogacy is not enforced, it doesn’t happen nearly as often. Some may find this good (fewer people in the world!) and some may not (we need more people!), but it’s clear that Posner’s correct that disallowing it removes the supply rather than protecting the suppliers.
(It’s true that people find other ways to do it, like going to another country or lending money rather than direct payments, and then forgiving the debt as part of adoption proceedings. That’s not relevant to the question of whether to prevent some kinds of voluntary and optional contract.)
Ok, then I’m very confused. “punching” is intentional harm or intimidation, typically to establish hierarchy or enforce compliance. If you meant something else, you should use different words.
Specifically, if you meant Pigouvian taxes or Coasean redress (both of which are not punitive, but rather fee-for-costs-imposed), rather than censure and retribution, then most of my disagreement evaporates.
Then you confuse the idea of legality with the idea of enforceability. It’s not illegal to write a contract that’s not legally enforceable.
Writing a contract that you don’t plan to honor is fraud, isn’t it?
I can’t disagree that policy (aka threat of violence for non-conformers) changes things faster than reasoned thought.
I _DO_ disagree that it improves the world more than reason does. Recycling is a great example—the status quo had massive subsidies for undifferentiated garbage collection, so consumers could not see how much savings, if any, there would be from separating their recyclables. Moral suasion that directly contradicts daily experience (in paying my garbage bill) is going to have very low effect. So we passed laws, causing people to go to significant effort and _STILL_ see no benefit. Now we have a common pattern of separating and rinsing our recyclables, and a bad feeling that it all goes to the same place, and no evidence of actual savings or reuse.
So we’ve spent a bunch of energy in changing laws and in wearying consumers on the topic, but have not achieved the effects we might have if we’d just been transparent about the costs we are concerned about (by charging sufficiently for landfill and charging less for clean recyclables).
Sure, it’s fun to discuss what’s right in bizarre situations, but that’s very different from the decisions philh is talking about. I strongly doubt that your group house has decided “We like you, and that act was right for that situation, but we’re going to punish you so others won’t try it”.
I totally buy the argument _IN GROUPS LARGE ENOUGH TO BE IMPERSONAL_ that you punish deviance from the norm, even when that deviance is correct and necessary. More hero they, who suffer for their necessary actions. Stanislav Petrov was a hero to disobey orders, and the Soviet government was correct to reprimand him.
I do not think this is true in groups smaller than some multiple of Dunbar’s number. If you can discuss the specifics with a significant percentage of members, then you can do the right thing contextually, rather than blindly enforcing the rules (which, even for complex unwritten norms, are too simple for reality).
Peterson is correct (IMO) that “right” is a shorthand for behavioral constraints on people, not something innate. He’s also right that animals cannot (as far as we know) think abstractly enough to respect or exercise rights. I disagree that rights are based on agreements—they’re more about commonly-held expectations and social reinforcements, but that’s not relevant to the question at hand.
So I don’t think animals have rights (and honestly, I don’t think humans do either, in a universal sense; rights are always contextual). But I also don’t think “rights” is the best filter for how to treat other entities. You should be asking “which animal experiences have moral weight, and how does that compare to the weight of various human desires?”
For me, the answer is “nonzero, but much much lower than humans”. And I don’t know how to answer existential questions like “would it be better for a being to never exist, or to live in well-fed captivity for some time, then die painfully?”
To your object-level recommendation that we “bestow” (I’d prefer “recognize” as a verb that applies better to the concept of rights) animals’ rights, I say no. They have the right to remain tasty (or, in the case of pets, “entertaining/useful/comforting”) for humans. If they choose to give up that right, they won’t be brought into being in the first place.
A better split than abstract-specific (unless you’re honestly trying to objectively describe best actions, without having any application in mind) is facts-evaluation-action.
First, get agreement on what Bob did or is doing. Bob may agree that it’s happening, or you may have to provide evidence to convince people. Then get agreement that the behavior is not acceptable and needs to be stopped. Then, separately, propose and agree on what actions you collectively (meaning you and the people you’re trying to convince) will take to achieve this change in Bob’s behavior or remove his ability to harm you. And finally (often combined with the previous), decide whether Bob owes any recompense for past harms.
In most situations (unless you’re a lawmaker or judge, or water-king of a post-apocalyptic tribe, or maybe a parent of the offender), you should not discuss or consider punishment AS punishment. Behavioral changes, recompense for damage caused, or exclusion are really the only considerations.
I don’t think it’s possible to decouple the arguments very completely, and attempting to do so is likely to backfire when everyone notices that you published your abstract punching justification pretty much so you could get support when you punch Bob. I also think there’s a real risk of accidental coordination problems—reasons for you to punch Bob will be easy to overgeneralize, and then EVERYONE punches Bob, which is far too severe for whatever justification you thought you had.
I know this is supposed to be allegorical, but I think this applies to many related questions: A better policy is not punching people, ever. And not accepting or justifying that it’s OK sometimes. Even as a response to Bob’s punches, the proper response is to escape and then to address the behavior. This is almost NEVER effective if you start by repeating the behavior you want to prevent. Pepper-spray if needed in the moment, then intervention (if you care about Bob and think it may work) or arrest (if Bob’s a stranger). Yes, this is escalation. Yes, this is the way to address Bob’s unacceptable actions.
If you really want to explore this, figuring out the difference between other allowed and disallowed trades in various philosophies would be a good start. Prostitution, child labor, payment for (one’s own) organs, and payment for keeping secrets (blackmail) are all things that on the face of it seem like private transactions with not much justification for preventing. But all are prohibited in some or all cultures.
I’m also curious if you think we should enforce the return of payment and reimbursement of expenses for fraudulent contracts. Accepting payment and valuable services, and then not performing according to the contract, seems like a crime in itself.
You could also go down the identity route—are the parties who (voluntarily, we presume) made the agreement actually the same moral entities as those who seek to nullify the contract? Is a mother-to-be the same person as a new mother? She may not understand her future-instance’s emotions or preferences enough to be able to make binding decisions.
But don’t take that too far, or you’ll realize that the fiction of agreements and contracts isn’t a moral issue, but a practical legal issue—things fall apart if you don’t pretend that people are more responsible for their situation than they actually are. “Consider the equilibrium” is basically Posner’s argument, and it’s pretty strong.
Ehn, no system can force someone to be a good parent, but some systems might nudge people in one direction or another. Barriers (permeable, and surmountable when the drive is sufficient) to less-commonly-good situations might serve a purpose.
Walking the line between encouraging responsibility and best-for-the-child and allowing those who choose/need to do otherwise (where best-for-everyone is distance from the child) to do so is not easy, in theory or practice.
Voting is the worst mechanism for this decision, but I vote for #2. LW is for reflections on and advice for having true beliefs (being less wrong), and POSSIBLY for raising the sanity waterline, in helping others to have true beliefs.
It’s a place to be rational, not to support rationalism.
The scaling question is a good one. There are a _LOT_ of businesses where marginal cost is much less than average cost, due to capital and fixed expenses, and transaction costs that are more about setup and initial exploration rather than scaling with supply.
To the extent that healthcare suppliers are in this category, it’s quite believable that they’d negotiate steep discounts for “excess” business, which they can undertake without needing additional capital or headcount. But they cannot give those discounts to a majority of their customers or they’ll go out of business.
Effectively, these discounts may be “leftover capacity” from the more expensive insurance-paid traffic that these suppliers get.
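The arithmetic behind this is worth making explicit. Here is a toy model with entirely hypothetical figures: any cash price above marginal cost is profitable on leftover capacity, but that same price applied to all volume would fail to cover fixed costs.

```python
# Toy model of why a provider can discount "excess" capacity but not all
# of its traffic. All figures are hypothetical illustrations.

fixed_costs = 900_000     # capital, staff, overhead per year
marginal_cost = 100       # incremental cost of one extra procedure
insured_volume = 10_000   # procedures paid at the insurance rate
insured_price = 200

# The insured price must cover average cost (marginal plus a share of fixed):
average_cost = marginal_cost + fixed_costs / insured_volume  # 190 < 200, solvent

# A steeply discounted cash price still beats marginal cost, so each
# extra procedure on leftover capacity is pure gain...
cash_price = 120
extra_profit_per_procedure = cash_price - marginal_cost  # 20

# ...but that price applied to ALL volume would not cover fixed costs:
revenue_if_all_cash = cash_price * insured_volume           # 1,200,000
total_cost = fixed_costs + marginal_cost * insured_volume   # 1,900,000
print(revenue_if_all_cash < total_cost)  # True: the discount can't scale
```

The gap between `average_cost` and `marginal_cost` is exactly the room in which these “leftover capacity” discounts can exist.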