Of course, there are situations that raise the prostitution issue even more directly.
“So, honey, just how much of a headache do you have?”
Are current global temperatures optimized for human welfare?
It does seem extremely unlikely that global temperatures are optimized for human welfare, but it is not as hard to believe that human welfare is optimized for current global temperatures.
Someone Is Wrong On The Internet can be a surprisingly powerful force.
Taking the word “afford” literally, if you can only afford $50 on the first auction and you lose the auction, then you’ll have an extra $50 on the next auction, and will be able to afford $100. If you lose that auction, you’ll be able to afford $200 on the auction after that. I think the concept you’re reaching for is not so much “afford” as marginal utility cost. For someone with a yearly income of $200,000, a marginal util is going to cost a lot more than for someone with a yearly income of $40,000. Thus, the richer person may be willing to bid more, because each util is worth more dollars to that person. It is therefore more efficient (that is, a Pareto improvement) for the richer person to win the auction and give the poorer person money that the poorer person can use to buy utils elsewhere.

And I really wonder who decides what goes in the Google Chrome spell-checker dictionary, because apparently “util” is in it, but “externality” is not.
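As a minimal sketch of the “cost of a util” point, assuming logarithmic utility of income (the utility function and step size are my illustrative assumptions, not anything from the original comment):

```python
import math

def utility(income):
    """Illustrative assumption: log utility of income, u(m) = ln(m)."""
    return math.log(income)

def dollars_per_util(income, step=1.0):
    """Approximate dollars needed to buy one more util at this income level."""
    marginal_utility = (utility(income + step) - utility(income)) / step
    return 1.0 / marginal_utility

for income in (40_000, 200_000):
    print(f"${income:,}/year: one util costs about ${dollars_per_util(income):,.0f}")

# Under this assumption the $200k earner pays about five times as many
# dollars per util, so a higher dollar bid can still represent fewer
# forgone utils than the poorer bidder's smaller bid.
```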
And if this is presented as some sort of “competition” to see whether LW is less susceptible than the general populace, then anyone who has fallen for it may be further discouraged from reporting it. A lot of this exploits the banking system’s lack of transparency as to just how “final” a transaction is; for instance, if you deposit a check, your account may be credited even if the check hasn’t actually cleared. So scammers take advantage of the fact that most people aren’t familiar with all the intricacies of banking, and think that once their account has been credited, it’s safe to send money back.
As a meta-example, I found this post title rather uninformative as to what the post is about, which made me reluctant to take the time to read something by someone who appeared not to be taking the time to tell me what the post was about. I figured, though, that if this was a worthless post it would get downvoted, so the Karma system has shown some use in providing attentional capital. As we move further into an information society, having good attentional-capital systems will become more important.
From the Wikipedia article on rejection therapy:
“At the time of rejection, the player, not the respondent, should be in a position of vulnerability. The player should be sensitive to the feelings of the person being asked.”
How does one implement this? One of my barriers to social interaction is the ethical aspect of it; I feel uncomfortable imposing on others or making them uncomfortable. Using other people for one’s own therapy seems a bit questionable. Does anyone have anything to share about how to deal with guilt-type feelings and avoid imposing on others while doing rejection therapy?
But the point of voting is for you to be a provider of information, not a consumer of information. If your vote simply reflects the information already available from the other votes, what have you added? Put in LW terms, your vote should be entangled with information unique to you. If the only information it is entangled with is other votes, then you’re just perpetuating an information cascade (see the sketch below).

This isn’t Hollywood Squares. The point of voting isn’t to “win”. It’s not to pat yourself on the back for upvoting useful posts. It’s to provide people with useful information about whether the post is useful. When you’re deciding whether to upvote, you shouldn’t be asking “Do I think this post is useful?”, but “Is this post, given the current vote total, more likely to be useful than other posts with the same vote total?”
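As a minimal sketch of that cascade dynamic, here is a toy simulation; the herding rule, the accuracy number, and everything else in it are illustrative assumptions, not anything from the original comment:

```python
import random

def vote_sequence(n_voters=20, post_is_good=True, signal_accuracy=0.7, seed=1):
    """Each voter gets a noisy private signal but herds on any existing majority."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_voters):
        signal = post_is_good if rng.random() < signal_accuracy else not post_is_good
        score = sum(1 if v else -1 for v in votes)
        if score != 0:
            votes.append(score > 0)   # herd: copy the current majority
        else:
            votes.append(signal)      # no majority yet: use own signal
    return votes

# After the first vote the score never returns to zero, so every later
# vote just echoes vote #1: the tally embodies one voter's worth of
# information no matter how large it grows.
print(vote_sequence())
```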
Perverse incentives.
I realize that no analogy is perfect, but I don’t think your sleeper-cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it’s just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty in the case of AI that does not exist in your hypothetical.
The downvote option does make it easier to be negative, but it also gives people an option in between ignoring trolls (and risking people thinking that other people don’t have a problem with them) and engaging them (and rewarding them). It also lets people express disapproval of a post without wasting another post saying nothing but “I disagree”. Of course, people still do downvote and then also post a comment that has no semantic content other than “I don’t like you”.
At a party, there are all sorts of feedback: smiles, laughs, nods, frowns, awkward silences, glares, etc. The upvote/downvote is a rough analog of that.
Can people PLEASE stop editing their posts in response to other posts, and not mentioning the edit in the original post? It’s rather irritating to read an exchange along these lines:
Person A: blah blah blah
Person B: I don’t think you should say “yadda yadda yadda”
Person A: You’re right. I’ve edited my post.
Now I have no idea to what extent pjeby’s criticism was directed at the post that I actually read, versus at the original post.
But whatever that process is, there must be a first action, and one cannot ask for consent for that first action, because then asking for consent would come before that first action, and thus it would not be the first action.
Furthermore, the idea that one is giving humans a chance to “signal” consent is rather problematic. Telling Martians that they should give humans the chance to “signal” consent, rather than asking for consent, because asking for consent is rude, and then complaining about Martians not getting clear consent, is rather bizarre. Can Martians really be blamed for feeling frustrated at the idea that they should prioritize consent, but that it’s rude to outright ask for it? If communication of consent doesn’t occur through explicit statements but through “signals”, aren’t there going to be Martians who think they’re getting consent when they’re not, and who think they’re not when they are? Basically, this system rewards the Martians with the lowest threshold for believing that they have gotten consent.
Whether ratios multiply is a mathematical question, not a psychological one, so your comment doesn’t make sense. The only relevance human rationality has is whether ratios are an accurate model for humans. Furthermore, the OP said “If a microeconomist had a list of 120 ratios between each value, she could describe a great deal of a rational agent’s behavior in a wide new variety of contexts.” so humans are being modeled as rational beings, both explicitly (the OP outright says it) and implicitly (the use of ratios implies a certain level of rationality, no pun intended).
Once you choose a model, you can only model systems as being consistent with your model. If you model people as having a well-defined utility function, then you are modeling them as having preference ratios that multiply. If the ratio between the utilitons that A gives you and the utilitons B gives you is X, and the ratio between the utilitons that B gives you and the utilitons C gives you is Y, then the ratio between the utilitons that A gives you and the utilitons C gives you must be XY: if U(A)/U(B) = X and U(B)/U(C) = Y, then U(A)/U(C) = XY. Saying “Well, people are irrational, so their utility function might not make sense” isn’t a valid response to this. U(A) is a real number, so it must follow the rules of real numbers. The fact that the domain of U is a set of things that doesn’t have to follow those rules doesn’t change the fact that its codomain does have to follow them.
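Written out, the multiplication claim is just cancellation in the reals; the only assumptions are the ones the comment already makes (U is real-valued), plus the implicit one that U(B) and U(C) are nonzero so the ratios are defined:

```latex
\[
  \frac{U(A)}{U(C)}
  = \frac{U(A)}{U(B)} \cdot \frac{U(B)}{U(C)}
  = X \cdot Y,
  \qquad U(B),\, U(C) \neq 0 .
\]
```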
If you want to propose some other model, perhaps one built on a mathematical structure that doesn’t follow these rules, you are welcome to try. But you should be clear in properly formalizing that model, and not put it in terms of things that have to follow rules your model doesn’t.
If the off-diagonals are impossible, then it’s not the Prisoners’ Dilemma, it’s just Cake or Death. If you’re facing an identical copy of yourself, then it’s really the Newcomb Paradox. Open code (or, at least, unilateral open code) only works in the Ultimatum Game. One study found that knowing how the other person chose actually increases defection: the probability that Player Two defects given that Player One defects > the probability given that Player One cooperates > the probability given that Player One’s action is unknown. Furthermore, open code only makes sense if you have a way of committing to following the code, and part of the Prisoners’ Dilemma is that there is no way for either player to commit to a course of action. Another part of the whole concept of the one-off Prisoners’ Dilemma is that there is no way to retaliate. If the players can sue each other for reneging on agreements, then it’s basically an iterated Prisoners’ Dilemma (and if there’s a defined endpoint, then you have the Backwards Induction Paradox).
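As a minimal sketch of the off-diagonal point (the 3/0/5/1 payoffs are the conventional textbook values, an illustrative assumption rather than anything from the comment):

```python
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    """In the full game, defection dominates whatever the other player does."""
    return max("CD", key=lambda my: PAYOFFS[(my, their_move)])

assert best_response("C") == "D" and best_response("D") == "D"

# If the off-diagonals (C,D) and (D,C) are impossible -- say, an identical
# copy is guaranteed to mirror your move -- the only reachable outcomes are
# (C,C) = 3 and (D,D) = 1, and "choosing" collapses to picking the better
# diagonal: cake or death, not a dilemma.
mirrored_payoff = {move: PAYOFFS[(move, move)] for move in "CD"}
assert max(mirrored_payoff, key=mirrored_payoff.get) == "C"
```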
I suppose this might be a better place to ask than trying to resurrect a previous thread:
What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in the data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don’t necessarily expect to get those exact answers, but it would be good to have some data, presented in a manner that is at least partially resistant to cherry-picking, massaging, etc.) Basically, what sort of evidence E does Signal have to offer such that I should update towards its being effective, given that both E and “E has been selected by Signal, and Signal has an interest in choosing E to be as flattering, rather than as informative, as possible” are true?
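As a minimal sketch of why that caveat matters (all the numbers are illustrative assumptions, not claims about Signal):

```python
def posterior(prior, p_e_if_effective, p_e_if_not):
    """Bayes' rule: P(effective | E) for a given evidential setup."""
    joint_true = prior * p_e_if_effective
    joint_false = (1 - prior) * p_e_if_not
    return joint_true / (joint_true + joint_false)

prior = 0.5
# If flattering evidence E would rarely exist for an ineffective program:
print(posterior(prior, p_e_if_effective=0.9, p_e_if_not=0.1))  # 0.90
# If cherry-picking would surface some flattering E even for an
# ineffective program:
print(posterior(prior, p_e_if_effective=0.9, p_e_if_not=0.6))  # 0.60

# Presentation that resists cherry-picking pins down P(E | not effective),
# which is exactly what makes E informative.
```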
Also, the last I heard, there was a deposit requirement. What’s the refund policy on that?
If we’re talking about Langford-type basilisks, that’s a reasonable position. But if you’re claiming that no idea can cause disutility, I find that idea to be ridiculous. And your arguing against an idea on the basis that it would be insulting to humanity is rather … ironic.
It’s hard to compare them. Harvard includes work-study, but Berkeley apparently includes both work-study and loans (and doesn’t disaggregate them).
It is interesting that Harvard seems to be acting as if it were engaging in price discrimination, even though the traditional conditions don’t apply (the supply curve is inelastic, and there are partial substitutes available).
“…are families that make i.e. $200k/year and have $300k in saving.”
I’m having trouble understanding what was intended by “i.e.”.
“people who don’t even know about Bayesian updates, let alone the existence of akrasia...”
I think you’re confusing the map and the territory. There are people who are unaware of the word “akrasia”, but I don’t think there are many people who are unaware of akrasia itself. Same thing, to a lesser extent, with Bayesian updates.
Why is it more likely that the followup experiment was flawed, rather than the original? Are we giving a prior of > 50% to every hypothesis that a social scientist comes up with?
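As a minimal sketch of the point (the symmetric-reliability model and the 0.8 number are illustrative assumptions, not anything from the original comment):

```python
def posterior_after_split_results(prior, p_support_if_true=0.8):
    """P(H | one supporting and one contradicting experiment), assuming
    two independent, equally reliable experiments."""
    p_support_if_false = 1 - p_support_if_true
    like_h = p_support_if_true * (1 - p_support_if_true)          # support, then contradict
    like_not_h = p_support_if_false * (1 - p_support_if_false)
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

for prior in (0.1, 0.5, 0.9):
    print(prior, posterior_after_split_results(prior))

# The conflicting results cancel and the posterior equals the prior,
# so favoring the original over the followup requires either a quality
# asymmetry between the experiments or a prior already above 50%.
```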