Suppose I program a robot to enumerate many possible courses of action, determine how each one will affect every person involved, take a weighted average and then carry out the course of action which produces the most overall happiness. But I deliberately select a formula which will decide to kill you. Suppose the robot is sophisticated enough to suffer. Is it right to make the robot suffer? Is it right to make me suffer? Does it make a difference whether the key to the formula is a weird weighting or an integer underflow?
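The thought experiment above can be made concrete. Here is a hypothetical sketch (all names and numbers are mine, not from the original) of a utilitarian action-chooser whose happiness totals are wrapped to 16-bit signed integers, the way fixed-width arithmetic would wrap them, so that an integer underflow rather than a weird weighting flips the decision:

```python
# Hypothetical sketch: a utilitarian action-chooser whose happiness sums
# can be wrapped to 16-bit signed integers, simulating fixed-width
# integer overflow/underflow.

def wrap_int16(x):
    """Wrap x into the signed 16-bit range [-32768, 32767]."""
    return (x + 2**15) % 2**16 - 2**15

def choose_action(actions, happiness_effects, weights, wrap=False):
    """Pick the action with the highest weighted total happiness.
    happiness_effects[action] maps each person to a happiness delta."""
    def total(action):
        s = 0
        for person, delta in happiness_effects[action].items():
            s += weights[person] * delta
            if wrap:
                s = wrap_int16(s)  # the deliberately broken formula
        return s
    return max(actions, key=total)

# Two people, two actions; "spare" is clearly better for everyone.
effects = {
    "spare": {"you": 20000, "me": 20000},  # big gains for both
    "kill":  {"you": -100,  "me": 5},      # bad for you, trivial for me
}
weights = {"you": 1, "me": 1}

print(choose_action(["spare", "kill"], effects, weights))             # -> spare
print(choose_action(["spare", "kill"], effects, weights, wrap=True))  # -> kill
```

With honest arithmetic "spare" totals 40000 and wins; with wrapping, 40000 underflows to -25536, so "kill" (total -95) wins. Both failure modes produce the same outward behavior, which is part of what makes the moral question hard.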
dspeyer
The poverty may be partly illusory. It sounds like a lot of their economy is not money-mediated (inside the family or work done for social recognition). This means that their wealth is underreported by money-based statistics like median income. A common risk when comparing differently structured societies.
Do things become any clearer if you figure that some of what looks like time-discounting is actually risk-aversion with regard to future uncertainty? Ice cream now or more ice cream tomorrow? Well tomorrow I might have a stomach bug and I know I don’t now, so I’ll take it now. In this case, changing the discounting as information becomes available makes perfect sense.
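The ice-cream arithmetic can be worked out explicitly. A minimal sketch with made-up numbers: tomorrow's serving is worth more, but only arrives if no stomach bug does, so what looks like time-discounting is just the hazard probability:

```python
# Illustrative sketch (numbers are invented): apparent "discounting" that
# is really risk-aversion about future uncertainty. Ice cream now is worth
# 1.0 utils; tomorrow's larger serving is worth 1.5 utils, but only if no
# stomach bug arrives in the meantime.

def value_of_waiting(future_utils, p_bug):
    """Expected utility of waiting for the larger serving tomorrow."""
    return (1 - p_bug) * future_utils

now = 1.0
print(value_of_waiting(1.5, 0.10))  # 1.35 > 1.0: wait
print(value_of_waiting(1.5, 0.40))  # 0.90 < 1.0: eat it now
```

When information arrives (say, you learn you are healthy), `p_bug` drops and the implied discount rate changes with it, consistently rather than capriciously.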
Would it make a difference if, instead of simulation, they had gotten human DNA and were speed-growing clones to torture?
Are the six bad things you listed supposed to be swamps or alligators?
If they’re alligators, what is the swamp?
This does not match my experience doing things after studying them thoroughly. Unless your definition of “everything physical about color” includes neurology far beyond the state of the art.
Since a human mind can hold only an infinitesimal fraction of that information, Mary is now a mind quite unlike our own, and likely to have very different qualities.
I can’t be sure, but drinking cold water throughout might help.
Another approach is to contemplate the various virtues that people can have, and consider their relative importance. You might need to do this as a sort of regular meditation.
As an off-the-cuff exercise, how would you sort these by importance: rationality, creativity, knowledge, diligence, empathy^1, kindness, honor, and generosity^2? Does how you act correspond to how you answer? If not, make a practice of reminding yourself.
You may also find it useful to enumerate the virtues of the specific people who are annoying you. If you cannot think of any, stop associating with them. If the thought of not associating with them is unpleasant, examine that unpleasantness to discover their virtues.
1 = Empathy is a talent for understanding others, which may or may not result in being kinder to them.
2 = Generosity should be taken in the broadest sense: a determination to help others despite costs to oneself, which may or may not involve giving material possessions.
If so, let’s make sure to have signatures be visually distinctive so they don’t disrupt the flow of conversation. Maybe make them grey, if there’s a shade that’s distinct from black and readable against our backgrounds.
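Whether a grey is "distinct from black and readable" can actually be checked numerically. A sketch using the WCAG relative-luminance contrast formula, assuming a white background and the commonly cited grey #767676 (both assumptions mine):

```python
# Sketch: compute the WCAG contrast ratio of a candidate signature grey
# against a white background.

def channel(c):
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(r, g, b):
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(rgb1, rgb2):
    lighter, darker = sorted((luminance(*rgb1), luminance(*rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

white = (255, 255, 255)
grey = (118, 118, 118)  # #767676

print(round(contrast(grey, white), 2))  # ~4.54, just above WCAG's 4.5:1 minimum
```

Anything much lighter than #767676 would fall below the 4.5:1 threshold for body text on white, so that grey is roughly the lightest "still readable" choice.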
Minimum wage has the side effect of leaving unemployed those people who do not possess the requisite skill to command the minimum wage in the market.
That is indeed what economic theory predicts. Treating a theoretical prediction as a fact is generally a mistake, except with extremely well-tested theories (physics at Newtonian scales with a small number of objects is the only example I can think of). This is particularly true of economics, which shows worrisome heuristics as a field, and especially true of the minimum wage / unemployment link, which has been specifically looked for and not found.
James Thurber wrote an amusing one-page parody of The Tortoise and the Hare that serves as a nice explanation of publication bias.
This needs a safety hatch.
It is a recurring pattern in history for determined, well-intentioned people to seize power and then do damage. Certainly we’re different because we’re rational, but they were different because they were ${virtueTheyValueMost}. See also The Outside View and The Sorting Hat’s Warning.
A conspiracy of rationalists is even more disturbing because of how closely it resembles an AI. As individuals, we balance moral logic based on our admittedly underspecified terminal values against moral intuition. But our intuitions do not match, nor do we communicate them easily, so collectively moral logic dominates. Pure moral logic without really good terminal values… we’ve been over this.
I don’t know, but I’ll throw some ideas up. These aren’t all the possibilities and probably don’t include the best possibility.
Each step must be moral taken in isolation. No it’ll-be-worth-it-in-ten-years reasoning, since that can go especially horribly wrong.
Work honestly within the existing systems. This allows existing safeguards to apply. On the other hand, it assumes it’s possible to get anything done within existing systems by being honest.
Establish some mechanism to keep moral intuition in the loop, such as mandatory secret-ballot does-this-feel-right votes.
Divide into several conspiracies, which are forbidden to discuss issues with each other, preventing groupthink.
Have an oversight conspiracy, with the power to shut us down if they believe we’ve gone evil.
Sounds like a special case of “judging an argument by its appearance” (maybe somebody can make that snappier). It’s fairly similar to “it’s in Latin, therefore it must be profound”, “it’s 500 pages, therefore it must be carefully thought-out”, and “it’s in Helvetica, therefore it’s from a trustworthy source”.
Note that this is entirely separate from judging by the arguer’s appearance.
For non-lurking time, there’s no need to ask, is there? Just pull the signup dates from the user database for everyone who has posted recently.
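The query is straightforward given a hypothetical schema (the table and column names below are my invention, not the site's actual database):

```python
# Sketch: pull signup dates for everyone who has posted recently,
# assuming a hypothetical schema with users(id, signup_date) and
# posts(user_id, posted_at).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, signup_date TEXT);
    CREATE TABLE posts (user_id INTEGER, posted_at TEXT);
    INSERT INTO users VALUES (1, '2009-03-01'), (2, '2011-06-15');
    INSERT INTO posts VALUES (1, '2011-07-20'), (2, '2011-07-25');
""")

rows = conn.execute("""
    SELECT DISTINCT u.id, u.signup_date
    FROM users u
    JOIN posts p ON p.user_id = u.id
    WHERE p.posted_at >= '2011-07-01'
    ORDER BY u.signup_date
""").fetchall()

print(rows)  # signup dates of everyone who posted this month
```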
I think it would be more informative to ask people to take one specific online test, now, and report their score.
Are there any free, non-spam-causing, online IQ tests that produce reasonable results (i.e., correlate strongly with standard IQ tests)?
The greatest-risk question would benefit from a write-in option. I consider economic/political collapse a greater risk than those listed.
Then there’s the fallacy of shades of gray: that every space can be reasonably modeled as 1-dimensional.
Don’t take ideas seriously unless you can take uncertainty seriously.
Taking uncertainty seriously is hard. Pick a belief. How confident are you? How confident are you that you’re that confident?
The natural inclination is to guess way too high on both of those. Not taking ideas seriously acts as a countermeasure to this. It’s an over-broad countermeasure, but better than nothing if you need it.
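One concrete countermeasure is to keep score. A minimal calibration check on hypothetical data (the predictions below are invented for illustration): record each prediction with its stated confidence, then compare stated confidence against the actual hit rate per bucket.

```python
# Minimal calibration check on made-up data: bucket predictions by
# stated confidence, then compare to the actual fraction correct.
from collections import defaultdict

# (stated_confidence, was_correct)
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),
    (0.6, True), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for conf_stated, correct in predictions:
    buckets[conf_stated].append(correct)

for conf_stated, results in sorted(buckets.items()):
    hit_rate = sum(results) / len(results)
    print(f"stated {conf_stated:.0%}: actual {hit_rate:.0%} "
          f"over {len(results)} predictions")
```

In this invented sample, the 90%-confident predictions were right only half the time: exactly the "way too high" pattern the comment describes.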