There’s nothing stopping us from combining positive and negative reinforcement. I think it would be a pretty easy sell to propose adding the random, small no-speeding rewards without removing the existing laws and fines.
DSimon
“Something other than a comparison of the available evidence must be invoked to explain this discrepancy.”
Yes, but what leads you to think that a status-based explanation is helpful here? The two predictions you listed are both perfectly compatible with many non-status-related hypotheses. For example, it’s quite plausible that one meme happened to get a head start.
Hypothesis: Driving-while-drunk is a much bigger deal now than it was several decades ago, and it maintains its prominence primarily because of momentum: people keep talking about it because “everyone’s talking about it”. In order for driving-while-sleep-deprived to hold the same position, it would have to climb up that same hill, and the necessary PR work and/or meme pool churn hasn’t caused that to happen yet.
One common theme I see in the other comments is that the justice system is just way too slow and underequipped to handle every case properly. It might be fruitful to focus on some way of making the justice system more efficient while maintaining roughly the same output as before.
I’m not a lawyer, but one possibility occurs to me: how about a time limit on each section of a trial? Similar to some debate formats: prosecution gets X minutes to introduce the case, defense gets Y minutes to respond, and so on, up to a limited number of segments which can only be extended at the judge’s discretion due to new evidence coming to light.
This would mean that complex and/or high profile cases would receive much much less time than they otherwise would, but that might be worth it if this system can be shown to nearly always arrive at the same conclusion (or, given the ideals of the US justice system, if any error tends to be in favor of the defendant).
Anyone have any data on the distribution of trial lengths? That would tell us if it would be better overall to optimize short trials to be a little bit shorter or to optimize very long trials to be short. In other words, would it be more effective overall to reduce the duration of OJ Simpson style trials to only a few days long, or to reduce breaking-and-entering trial durations by 20%?
I like the written questions thing, but I don’t think the jury should have the ability to grant or deny additional time, for two reasons:
Since we’re talking about an inexpert jury here (right? I know somebody proposed an expert jury below), they’ll need to be learning about the law as they go. If they also have to learn which conditions should or should not warrant granting additional time, that’s taking time and effort away that they could be using to learn about the evidence and the relevant laws.
More importantly, it would reintroduce the problem of lawyers being able to delay cases for as long as they can (which they will of course do if things look like they aren’t going their way). In a short time, it’s probably easier to convince a jury that you need more time than it is to convince them that the evidence points one way or another. I can imagine the system becoming one where extra time is granted nearly always, and the “Do you think the trial should continue?” question to the jury being seen as just a formality.
I agree that the differential treatment between professions is stronger evidence of status having an effect.
However, if I understand what you’re saying correctly, I don’t think this statement makes sense:
I honestly can’t find a better explanation than the associations people have with each behavior, where status considerations play an important role in shaping their response.
Alone, not being able to find a better explanation for something than X isn’t good support for X being the explanation. There needs to be significant positive evidence in favor of X, or there’s no reason to choose it over “Ayedunno” (aka the null hypothesis).
This strikes me as being roughly similar to people’s opinions of the value of having children who outlive them. As the last paragraph of the OP points out, it doesn’t really matter if it’s a copy of me or not, just that it’s a new person whose basic moral motivations I support, but whom I cannot interact with.
Having their child hold to moral motivations they agree with is a major goal of most parents. Having their child outlive them is another (assuming they don’t predict a major advance in lifespan-extending technology soon), and that’s where the non-interactivity comes in.
The post-death value of the child’s existence is their total value minus the value of the experiences I share with that child, or more generally the effects of the child’s existence that I can interact with.
In this sense, the question of the poll can (I think) be rephrased as: what, to you, would the post-your-death value of a child that you raise well be?
Well, they’ll make more copies if they’re a copy of you from before you put in the year’s work.
I object a great deal! Once we’re all carrying around wearable cameras, the political possibility of making it illegal to rip out the wires would seem much less extreme than a proposal today to introduce both the cameras and the anti-tampering laws. Introducing these cameras would be greasing a slippery slope.
I’d rather keep the future probability for total Orwellian surveillance low, thanks.
If my preferences were such that I valued eating babies then it would be rational for me to eat babies. Rational is not nice, good, altruistic or self sacrificial. It just is.
Well, you’re right that rationality is just a system for achieving a goal; it is the same process regardless of whether that goal is making the world a better place or turning it into a desert wasteland.
But, the OP is asking us to use rationality in a practical way and report back. That means we have to pick a goal, or there’s nothing to point our rationality at. Making the world a better place for the people living in it (or to use a more utilitarian phrasing, reducing the net amount of potential and actual suffering in the world) seems like a pretty good one. It matches my own personal goals, at any rate.
Therefore: if you don’t think the specific steps outlined in the OP are optimal for achieving that goal, please describe your alternative! I’m not being sarcastic; to use the chant, if the OP’s steps are effective, I want to believe they’re effective, and if they’re not, I want to believe they’re not.
But, please don’t confuse that practical matter with the issue of choosing a goal; that argument is outside the bounds of rationality (except for the specific case of trying to justify one value as a sub-goal of another one).
I think one possible strategy is to get people to start being rational about being in favor of things they already support (or being against things that they already disagree with). For example, if someone is anti-alt-med, but for political reasons rather than evidence-based reasons, get them to start listening to The Skeptic’s Guide to the Universe or something similar.
Once they see that rationality can bolster things they already support, they may be more likely to see it as trustworthy, and a valid motivation to “update” when it later conflicts with some of their other beliefs.
My point is that none of the actions listed is an effective way of achieving anything. Neither of the two purposes of altruistic action is served (those being signalling and actually changing the world to match altruistic preferences).
(For this response I’m going to focus on the goal of improving the world, not on signalling.)
One of the options was to give blood, which contributes directly to the reduction of suffering. I admit that I haven’t personally looked into the effectiveness of the blood donation system, but as a basic medical technology it’s quite sound, right? Why do you feel that donating blood is ineffective?
Two of the options were about donating to charities; one to a specific charity that seeks to defend a college student falsely accused of murder, and another a more general request to donate to any “reputable charity”. I can understand that you might reasonably default to the null hypothesis on evaluating the effectiveness of any particular charity, particularly a minor one with little reputation like the Amanda Knox Defense charity… but it’s a much stronger statement that reputable charities in general are not “an effective way of achieving anything”! Could you describe in more detail what leads you to that conclusion?
Finally, the remaining two options were about letter-writing or otherwise contacting people with political power in the hopes of influencing their actions. In terms of cost vs. benefit, this strikes me as being very hard to attack. Communication is cheap and easy, and public approval is a major factor in most political systems. By telling politicians explicitly what will earn your approval or disapproval, you’re taking advantage of this system. I like the description here of this idea.
Do you disagree and feel that communicating with politicians is an ineffective way of influencing their decisions? If so, do you have a more effective alternative to propose?
Or alternately, force virality onto memes that promote rationality, or at least dis irrationality.
“Where’s mah Bayesian update?”
“Invisible evidence!”
How can you use your brain to test if a sensation your brain is experiencing cannot be faked by your brain?
Seems reasonable to me; if there’s the expected amount of crime in an area, then it’s not too worthy of special attention. If there’s a higher than usual amount of crime, then it’s clearly worthy of special attention.
However, if there’s a lower than usual amount of crime, then it’s also worthy of special attention, because that indicates that something odd is happening there (or, it indicates that something has genuinely reduced the amount of crime and not just the metric, which is worth investigating and hopefully replicating).
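The “both tails deserve attention” idea above can be sketched as a simple two-sided check. This is my own illustrative model, not anything from the original discussion: I assume crime counts are roughly Poisson-distributed around an expected rate, so deviations in *either* direction are scored symmetrically.

```python
import math

def attention_score(observed, expected):
    """Two-sided check: score how far an area's crime count deviates
    from the expected rate in EITHER direction.

    Assumes a hypothetical Poisson model, where the variance of the
    count equals its mean, so we can use a simple z-score.
    Higher score = more worthy of special attention.
    """
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return abs(observed - expected) / math.sqrt(expected)

# An area with the expected amount of crime draws little attention...
print(attention_score(100, 100))  # 0.0
# ...while unusually HIGH and unusually LOW counts both stand out,
# and stand out equally:
print(attention_score(140, 100))  # 4.0
print(attention_score(60, 100))   # 4.0
```

The point of the symmetric `abs()` is exactly the argument above: an anomalously quiet area is just as much a signal (of gamed metrics, or of a genuinely effective intervention worth replicating) as an anomalously noisy one.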
I wrote out a long response involving an analogy to a CPU self-test program, but at the end I realised that I had arrived at the same conclusion you stated. :-) So I’m voting you up and wish to extend you an Internet high-five.
However, on this topic, it seems like there’s no good approach for handling the scenario where your brain messes with your internal tests in such a way as to point them invariably at a false positive, i.e. anosognosia.
I agree that a good self-test of the sort you describe would reduce the probability for most kinds of anything-goes insanity, but what sort of test could be used to check against the not-insignificant subset of insanity that specifically acts against self-tests and forces them to return false positive at the highest level?
When I was 14, my father was stationed in Japan. I went rock climbing with this kid from school. He fell and got injured, and I had to bring him to the hospital. We came in through the wrong entrance, and passed this guy in the hall. He was a janitor. My friend came down with an infection, and the doctors didn’t know what to do. So they brought in the janitor. He was a doctor. And a Buraku—one of Japan’s untouchables. His ancestors had been slaughterers, gravediggers. And this guy knew that he wasn’t accepted by the staff, didn’t even try. He didn’t dress well. He didn’t pretend to be one of them. People around that place didn’t think he had anything they wanted, except when they needed him—because he was right, which meant that nothing else mattered. And they had to listen to him.
-- Dr. Greg House
Speaking in terms of real pop-up boxes, you might be surprised at how easy it is for people to ignore the content of even the most blaring, attention-grabbing error messages.
A typical computer user’s reaction to a pop-up box is to immediately click whatever they think will make it go away, because a pop-up box is not a message to be understood but a distraction from what they’re actually trying to accomplish. A more obnoxious pop-up box just increases the user’s agitation to get rid of it.
As rationalists, we try hard to avoid falling into traps like these (I’m not sure if there’s a name for the fallacy of ignoring information because it’s annoying, but it’s not exactly a high-utility strategy), but part of the way we should do that is to design systems that encourage good habits automatically.
I like Firefox’s approach; when it wants you to choose between Yes or No on an important question (“Really install this unsigned plugin?”), it actually disables the buttons on the pop-up for the first 3 seconds. You see the pop-up box, your well-honed killer instinct kicks in and attempts to destroy it by mindlessly clicking on Yes so you can get back to work already… but that doesn’t work, you’re surprised, and that jolt out of complacency inspires you to actually read the message.
I suspect a “Hey, have you noticed that something has penetrated the skin of your left foot?” warning might benefit from having the same mechanism.
Type 1: Implicit reasoning
Type 2: Explicit reasoning
Oh, also: the OP refers to Type 1 as being “autonomous” and Type 2 as being “algorithmic”, so another option would be to just stick with those words.
(Hi everyone; this is my first time posting here.)
If someone delivered that 100%-applause-light paragraph to me in a speech, my first impulse would be to interpret it as an honest attempt to remind the audience of obvious but not necessarily currently-in-context ideas. For example, this statement from the middle:
“To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm.”
Taken literally as a set of assertions, this really is quite empty of novel or unexpected content. However, directed at an audience of humans, aware of but still vulnerable to cognitive bias, the statement above implies another statement which is more useful: “We should be careful to not act like those who, despite intending not to, panicked rather than thinking productively. We should also be careful to not act like those whose enthusiasm overwhelmed their necessary sense of caution, even though they knew the value of that caution.”
People who agree with the part of the 1st virtue that says “A burning itch to know is higher than a solemn vow to pursue truth” may still sometimes need to be reminded to check themselves and make sure they’re doing the former rather than the latter.