Yes, always reserve your penises for when they actually fit really well.
Oh, I’m totally getting downvoted to hell for this.
I took it.
For the P(Warming) question, you might get people answering different versions of the question. For example, my personal evaluation of the probability that warming is occurring and that humans are a major cause is very, very high, but my evaluation of the probability that humans are the primary cause is much lower.
“pre-1980” = “pre-lukeprog”, and thus, the ancient days
(kidding)
In large formal groups: Robert’s Rules of Order.
Large organizations, and organizations which have to remain unified despite bitter disagreements, developed social technologies such as RRoO. These typically feature meetings that have formal, pre-specified agendas plus a chairperson who is responsible for making sure each person has a chance to speak in an orderly fashion. Of course, RRoO are overkill for a small group with plenty of goodwill toward each other.
In small formal groups: Nonce agendas and rotating speakers
The best-organized small meetings I’ve ever attended were organized by the local anarchists. They were an independently-minded and fierce-willed bunch who did not much agree but who had common interests, which to my mind suggests that the method they used might be effectively adapted for use in LW meetups. They used the following method, sometimes with variations appropriate to the circumstances:
Before and after the formal part of the meeting is informal social time.
Call the meeting to order. Make any reminders the group needs and any explanatory announcements that newcomers would want to know, such as these rules.
Pass around a clipboard for people to write agenda items down. All that is needed are a few words identifying the topic. (People can add to the agenda later, too, if they think of something belatedly.)
Start with the first agenda item. Discuss it (see below) until people are done with it, then move on to the next agenda item. In discussing an agenda item, start with whoever added it to the agenda, then proceed around the circle, giving everyone a chance to talk.
Whoever’s turn it is not only gets to speak but also acts as temporary chairperson. If it helps, they can hold a “talking stick” or “hot potato” or some other physical object reminding everyone that it’s their turn. They can ask questions for others to answer without giving up the talking stick. Anyone who wants to interrupt the speaker can raise a hand, and the speaker can call on them without giving up the talking stick.
Any other necessary interruptions are handled by someone saying “point of order”, briefly stating what they want, and the group votes on whether to do it.
In small informal groups: Natural leaders
Sometimes people have an aversion to groups that are structured in any manner they aren’t already familiar and comfortable with. There’s nothing wrong with that. You can approximate the above structure by having the more vocal members facilitate the conversation:
Within a conversation on a topic, deliberately ask people who aren’t as talkative what they think about the topic.
When the conversation winds down on a topic, deliberately ask someone what’s on their mind. Be sure to let everyone have a chance.
Tactfully interrupt people who are too fond of their own voices, and attempt to pass the speaker-role to someone else.
Yes. Page 287 of the paper affirms your interpretation: “REMORSE does not exploit suckers, i.e. AllC players, whereas PAVLOV does.”
The OP has a mistake:
Remorse is more aggressive; unlike cTFT, it can attack cooperators
Neither Remorse nor cTFT will attack cooperators.
On the “all arguments are soldiers” metaphorical battlefield, I often find myself in a repetition of a particular fight. One person whom I like, generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept the arguments. And in the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject the arguments. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.
And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconscious assignment. So the moment I notice being stuck in Ally-mode or Enemy-mode, it’s possible to switch to the other. I don’t seem to have a neutral mode. YMMV! I’d be interested in hearing whether it works the same way for other people or not.
For best understanding of a topic, I suspect it might help to read an argument twice, once in Ally-mode to find its strengths and once in Enemy-mode to find its weaknesses.
You didn’t actually dissolve the problem of qualia; you just rationalized it away. The goal we like to aim for here in “dissolving” problems is not just to show that the question was wrongheaded, but to thoroughly explain why we were motivated to ask the question in the first place.
If qualia don’t exist for anyone, what causes so many people to believe they exist and to describe them in such similar ways? Why does virtually everyone with a philosophical bent rediscover the “hard problem”?
Another friction is the stickiness of nominal wages. People seem very unwilling to accept a nominal pay cut, taking this as an attack on their status.
Salary negotiation is indeed a complicated signalling process. I’m currently an unemployed bioengineer, and have been for far longer than I would like, so I would be willing and eager to offer my services to an employer at a cut rate to prove my worth to them, and then later request substantial raises. But this is impossible, because salary negotiations only occur after the company has decided that I am their favorite candidate out of however many hundreds apply.
Worse, if I made the first move and openly (e.g. on my resume or cover letter) informed the company of my willingness to work on the cheap, they would assume I was signalling that I am a very low-quality engineer, which is far from the case.
Unemployment does very much seem to be an information trap.
nit to pick: Rod and cone cells don’t send action potentials.
I’m tempted to send the “with controls” graphs to the newspaper and suggest the headline: HAPPINESS CAUSES CHILDREN.
This suggests a joke solution: Tell people about the box, then ask them for a loan which you will repay with proceeds from the box. Then you can live off the loan and let your creditors worry about solving the unsolvable.
Problem 2 reminds me strongly of playing GOPS.
For those who aren’t familiar with it, here’s a description of the game. Each player receives a complete suit of standard playing cards, ranked Ace low through King high. Another complete suit, the diamonds, is shuffled (or not, if you want a game of complete information) and put face down on the table; these diamonds have point values Ace=1 through King=13. In each trick, one diamond is flipped face-up. Each player then chooses one card from their own hand to bid for the face-up diamonds, and all bids are revealed simultaneously. Whoever bids highest wins the face-up diamonds, but if there is a tie for the highest bid (even when other players did not tie), then no one wins them and they remain on the table to be won along with the next trick. All bids are discarded after every trick.
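For concreteness, here is a minimal Python sketch of the trick-resolution rule described above. The function name and data layout are my own inventions, not from any standard implementation:

```python
def resolve_trick(bids, pot):
    """bids: dict of player -> bid rank (1=Ace .. 13=King).
    pot: list of diamond point values currently on the table.
    Returns (winner or None, remaining pot)."""
    top = max(bids.values())
    winners = [p for p, b in bids.items() if b == top]
    if len(winners) == 1:
        return winners[0], []  # unique highest bid takes the whole pot
    return None, pot           # tie for highest: pot carries over

# Example: three players bid for the face-up King (13 points).
pot = [13]
winner, pot = resolve_trick({"A": 13, "B": 13, "C": 2}, pot)
# A and B tie with Kings, so no one wins; the 13 stays on the table.
pot.append(7)  # the next diamond (a 7) is flipped and added to the pot
winner, pot = resolve_trick({"A": 5, "B": 4, "C": 1}, pot)
# Now A's 5 is the unique highest bid, winning 13 + 7 = 20 points.
```

Note that the tie rule is what makes carryover pots (and the King standoff described next) possible.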
Especially when the King comes up early, you can see everyone looking at each other trying to figure out how many levels deep to evaluate “What will the other players do?”.
(1) Play my King to be likely to win. (2) Everyone else is likely to do (1) also, which will waste their Kings. So instead play low while they throw away their Kings. (3) If the players are paying attention, they might all realize they should (2), in which case I should play highest low card—the Queen. (4+) The 4th+ levels could repeat (2) and (3) mutatis mutandis until every card has been the optimal choice at some level. In practice, players immediately recognize the futility of that line of thought and instead shift to the question: How far down the chain of reasoning are the other players likely to go? And that tends to depend on knowing the people involved and the social context of the game.
Maybe playing GOPS should be added to the repertoire of difficult decision theory puzzles alongside the prisoner’s dilemma, Newcomb’s problem, Pascal’s mugging, and the rest of that whole intriguing panoply. We’ve had a Prisoner’s Dilemma competition here before—would anyone like to host a GOPS competition?
On the Freakonomics blog, Steven Pinker had this to say:
There are many statistical predictors of violence that we choose not to use in our decision-making for moral and political reasons, because the ideal of fairness trumps the ideal of cost-effectiveness. A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher. Thankfully, this rational policy would be seen as a moral abomination.
I’ve seen a common theme on LW that is more or less “if the consequences are awful, the reasoning probably wasn’t rational”. Where do you think Pinker’s analysis went wrong, if it did go wrong?
One possibility is that the utility function to be optimized in Pinker’s example amounts to “convict the guilty and acquit the innocent”, whereas we probably want to give weight to another consideration as well, such as “promote the kind of society I’d wish to live in”.
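To make the Bayesian mechanics of Pinker’s example concrete, here is a small sketch in odds form. The priors and the 20:1 likelihood ratio are made-up numbers, purely illustrative:

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability via Bayes' theorem in odds form."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Same evidence (likelihood ratio 20:1), different base rates:
print(posterior(0.01, 20))  # low prior: posterior ~0.17
print(posterior(0.05, 20))  # higher prior: posterior ~0.51
# A decision rule keyed only to the posterior would therefore convict
# on weaker evidence whenever the prior is higher, which is the policy
# Pinker calls rational but morally abominable.
```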
Extremely temporary friendships. I suspect, without demonstrable evidence beyond stories from friends and myself, that location-based networking applications have led us to develop better skills for managing temporary friendships between travelers and locals. CouchSurfing, AirBnB, Grindr, etc., started out fairly awkward for all involved several years ago, but now it seems to me that people are comfortable and adept with the norms.
This will be done unblinded, because Kurzweil’s predictions are so well known that it would be infeasible to find large numbers of people who are technologically aware but ignorant of them.
Is this true? It could be, or alternatively it could simply appear true from your perspective of familiarity. I’m only vaguely aware of Kurzweil and have never heard any mention of him among my group of largely grad student / geek friends.
I’m hereby anti-mugging you all. If any of you give in to a Pascal’s Mugging scenario, I’ll do something much worse than whatever the mugger threatened. Consider yourself warned!
Consider this ugly ASCII version of the expression for AIXI found in this paper by Marcus Hutter,
a_k := arg max[a_k, SUM[o_k*r_k … max[a_m, SUM[o_m*r_m, (r_k +...+ r_m) SUM[q:U(q,a_1...a_m) = o_1*r_1..o_m*r_m, 2^-l(q)] ]]...]] .
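For readability, here is a LaTeX transcription of the same expression; this is my own rendering of the ASCII above, so it is worth checking against the paper:

```latex
\dot{a}_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  (r_k + \cdots + r_m)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```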
What I was thinking was to replace the inner sum for the Solomonoff prior, SUM[q:..., 2^-l(q)], with a repeat of the interleaved maxes and SUMs.
SUM[q:U(q,a_1...a_m)=o_1*r_1..o_m*r_m, max[a_k, SUM[o_k*r_k … max[a_m, SUM[o_m*r_m, (r_k + … + r_m)]]...]] ] .
Now that I write it out explicitly, I see that, while it isn’t circular, it’s definitely double-counting. I’m not sure that’s a problem, though. First, for each deterministic program q that models the environment, it calculates the expected reward assuming that q, one at a time. Then it weights all q by those rewards and acts to maximize the expected reward under that weighted combination of all q.
Socialists averaged having read 47% of the sequences. If you include communists it goes down very slightly.
Non-socialists averaged having read 52% of the sequences.
The difference is not statistically significant at the customary alpha=0.05 level, but it’s very close.
The article’s conclusion is that “people decide they want to convert for emotional reasons, but some can’t believe it at first, so they use apologetics as a tool to get themselves to believe what they’ve decided they want to believe.”
So we expect apologetic literature and speakers as a market niche wherever there are emotionally manipulative (claimed) rewards and punishments attendant on belief. Some rewards and punishments are quite real, like social status, praise, and condemnation. Others are fictional, like afterlives and the deep satisfaction of living according to divine law.
Similarly to mainstream religion, there is plentiful apologetic literature, speakers, and films for political ideologies. The social rewards of being in a political group are real; the future consequences that are promised if only enough elections can be won may or may not be real.
Given religions where beliefs are not rewarded or punished, we’d expect little or no consumption of apologetics. Shinto, neopaganism, and Unitarian Universalism fit that. However, there is certainly plenty of apologetic literature for secular humanist atheism, which also lacks the rewards/punishments. That looks to almost entirely undermine the hypothesis.
There is also basically no apologetic literature for believing in the greatness of particular sports teams, despite the large social rewards of being in a fanbase and the promised vicarious glory of psyching your team up for a win by your fervent support. OK, so to me the hypothesis is dead. Something more is going on than simple market response to rewarded/punished belief.
Any ideas what?
There were three times in my life when I consumed apologetics. First was when I was an evangelical Protestant and it was a tool for the religious imperative of winning converts. Second was when I could no longer believe my childhood religion, but still believed in God and the importance of Jesus, and so I read the apologetics of other religions to see which was most likely true, and I ended up converting to Catholicism for a while. Third was when I became infatuated with the principled style of libertarian political ideology and needed the apologetics to “understand” why nothing fit.
Based on my own anecdotal experience, then, my next hypothesis would be that apologetic argument and literature are demanded when people are (1) committed to a theory (for any reason, good or bad), and (2) also committed to acknowledging the facts, and (3) the facts don’t fit the theory in a straightforward way, and (4) complex fits of facts to theory are tolerated.
Religions that propose explanations would then be expected to have apologetics, and religions that don’t propose explanations would not. All political ideologies would be expected to have apologetics, because it’s an unfortunate fact of life that the consequences of politics are very complicated. Secular humanist atheists, insofar as they propose explanations for life, the universe, and everything, similarly end up occasionally faced with bizarre and extraordinary scenarios that defy simple explanation, and so they have apologetics. Some sports fans may, after a loss, blame the coach, the refs, the weather, and other factors, but at least in my experience most are willing to believe the other team played better. Oddly, we even end up with pro-science apologetics sometimes; at least I remember my physics and chemistry professors spending inordinate time mis-explaining phenomena when they were committed to the phenomena being explainable primarily by that week’s lesson.
It seems to fit. And it suggests that the process leading to apologetics can be interrupted at two places, as described elsewhere by Eliezer. First, don’t be committed to a theory. Don’t make a belief part of your identity. Let your beliefs be faithless and blown about by the winds of evidence. Second, count facts that require detailed explanations as contrary evidence even if the explanation is adequate. (This is not strictly Bayesianly correct but it seems like a good approximation.)
(summary)
Correlation does not imply causation,
but
causation implies correlation,
and therefore
no correlation implies no causation
...which permits the falsification of some causal theories based on the absence of certain correlations.
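As a sketch of that falsification pattern, the following compares a causally linked pair of variables with an unrelated pair. One caveat: plain Pearson correlation only catches linear dependence, so a near-zero result counts only against theories that predict that particular correlation:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(10_000)]
caused = [2 * x + random.gauss(0, 1) for x in xs]  # X causes Y
unrelated = [random.gauss(0, 1) for _ in xs]       # no causal link

print(pearson(xs, caused))     # large: consistent with causation
print(pearson(xs, unrelated))  # near zero: evidence against "X causes Y"
```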