A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).
Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).
Intelligent entities with values radically different from those of humans will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).
mattnewport
Even better would be an Amazon like recommendation system - ‘other people who benefited from this tip also benefited from...’
For instance, when I explained my change in life plans to people who are very familiar with me, I was able to use the phrasing “I’m dropping out of school to join a doomsday cult” because I knew this sounded so unlike me that none of them would take it at face value. Alicorn wouldn’t really join a doomsday cult; it must be something else! It elicited curiosity, but not contempt for my cult-joining behavior. To more distant acquaintances, I used the less loaded term “nonprofit”. I couldn’t countersignal my clever life choices to people who didn’t have enough knowledge of my clever life choices; so I had to rely on the connotation of “nonprofit” rather than playing with the word “cult” for my amusement.
I’m not sure this is a very good example. The reason that saying “I’m dropping out of school to join a doomsday cult” works is that people who are really joining a doomsday cult wouldn’t say that. Acknowledging that you are aware of the phenomenon of doomsday cults is an effective way of signalling that you are not in fact falling for the recruitment tactics of such a cult and does not require the person you are talking to to know anything much about you personally.
If anything this is a more effective tactic when used on relative strangers who do not know you very well and might think you actually are joining a doomsday cult if you just tried to describe what you are doing in a few short sentences. This seems more like an example of straightforward signalling that you are fully aware of the existence of doomsday cults and so people can assume that you have not in fact been seduced by one.
make the deadline for reports a curve instead of a cliff. Each day of delay costs some percentage of the grade.
We had this system for my second-year physics project at university. I hadn't started it when the deadline arrived, and decided the penalty rate was too steep to make starting worthwhile once the deadline had passed. Several weeks later I was summoned to explain why I hadn't handed the project in, and I explained that it hadn't seemed worth starting given how little it would be worth by the time I finished it (by this point the penalty had long since reduced the potential grade to ~0). They told me that if I completed it before the end of term they wouldn't apply the penalty.
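For concreteness, here's a minimal sketch of how a sliding penalty hollows out the incentive to start late. The 10%-per-day rate is purely illustrative, not the actual rule from the course:

```python
# Hypothetical sliding deadline: each day of lateness multiplies the
# achievable grade by (1 - p). The rate below is an assumed example.
p = 0.10  # illustrative 10% penalty per day late

for days_late in [0, 7, 21, 42]:
    max_grade = 100 * (1 - p) ** days_late
    print(f"{days_late:2d} days late: at most {max_grade:.1f}%")
```

After six weeks the ceiling is under 2% of the original grade, which matches the "reduced the potential grade to ~0" point above.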
How do you feel about the destruction of a partially bent piece of steel wire before it has been bent fully into paperclip shape?
The ‘war on drugs’ is the obvious example that sprang to mind and has already been mentioned. It has cropped up a number of times without objection.
I suspect that one of the reasons theism has ‘special status’ is that it requires little domain specific knowledge to recognize its irrationality. Basic scientific and historical knowledge and experience of the world are enough to throw up serious doubts for anyone who starts down the path of rationality. Other examples that spring to mind require a little more specialist knowledge.
An example: there is some overlap between the 'economist blogger' community and the OB readership. Economics bloggers have on occasion discussed the fact that there are certain uncontroversial truths accepted within economics that are not uncontroversial amongst non-economists. Examples are the benefits of free trade over protectionism, the ineffectiveness of price controls, the general efficiency benefits of markets, and the net benefits of relatively open immigration policies. I had to learn a bit about economics and be presented with the results of studies to be fully persuaded by some of these arguments—unlike atheism, it was not obvious to me from my direct experience that they were true. Perhaps someone more rational than myself could have deduced these truths from first principles and direct observation, but as a general rule I would not assume that someone with no more than a passing familiarity with economics would have found these truths self-evident.
Another, and perhaps more troubling, reason is that I suspect a certain amount of self-censorship is at work in order not to risk fragmenting the community with examples that while less controversial in the general population might be more controversial within the self-selecting subset of Less Wrong readers. The Larry Summers affair might be an example of the kind of belief that might be self-censored in a burgeoning rationalist community, despite the noticeable lack of representation of a certain demographic within that community.
I think a lot of the ‘irrational’ workplace behaviour you describe can also be seen as a rational response to bad incentives on the part of employees. It is relatively rare for jobs to consistently reward employees for performance that contributes directly to company profits so much employee behaviour is instead a response to what is actually rewarded by a perverse incentive structure.
One of the reasons small companies and startups can be successful despite lacking the resources or economies of scale of larger companies is that large companies have great difficulty maintaining a structure that rewards employees for productive activity.
The quality of the late papers is, on average, lower than the quality of the on-time papers. This makes sense; the more diligent students would tend to do better work and get it in on time.
Do you have a method for disentangling any negative bias you might have towards late papers (because they are ‘a pain in the ass’) from your quality judgements? I imagine the degree to which completely objective quality measurements are possible is a function of what subject you are teaching.
Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is −901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high − 0.99 to be exact.
If I can find another 99 people as confused as you I’ll be a rich man.
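Checking the arithmetic on the quoted gamble (a quick sketch, using the dollar amounts as stated above): the single-trial expectation for the person taking the gamble is −$901,000, so a counterparty offering it to 100 such takers expects to clear roughly $90M — which is the joke.

```python
# Single-trial gamble: win $100k with p = 0.99, lose $100M with p = 0.01.
p_win, gain = 0.99, 100_000
p_lose, loss = 0.01, 100_000_000

ev_taker = p_win * gain - p_lose * loss
print(ev_taker)          # expectation for the person taking the gamble

# Expected profit for the counterparty offering it to 100 such takers:
print(100 * -ev_taker)
```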
One of the reasons the seduction community has been a topic on less wrong is the application of rationality to success in everyday life. If there is any significant subset of desirable women who are not easily approached then someone in the seduction community will have tried to figure out a way to engineer an approach opportunity. If there are a lot of attractive single gardeners in the world then there is probably a blog somewhere that extols the virtues of garden centres as potentially fruitful pickup venues.
You can argue that the consensus judgement of the community as to what constitutes an attractive/desirable woman is flawed but to the extent that the ‘hard to reach’ women you describe are considered desirable, the likelihood is that someone will have tried to figure out how to reach them effectively.
Ask anybody who’s actually productive—especially those who make a lot of money by being productive, and nearly all of them will tell you that they love their work.
I have noticed this pattern but have always been a little skeptical because there seem to be obvious signalling reasons to make this claim irrespective of its truth. I’ve also considered the possibility that there are personality types who are telling the truth when they basically claim to be happy and motivated all the time. The third possibility I’ve considered is that people mean something different by ‘love my work’ than I understand by it—not that they are literally full of enjoyment and motivation all the time while working.
I don’t believe I’ve ever met anyone who I’ve had what felt like an honest conversation with about work who literally ‘loved their work’. They may enjoy some parts of it but much of it is still effortful and not the most enjoyable thing they could think of doing at any given moment.
Could you clarify exactly what you think productive people mean when they say they ‘love their work’ and explain what leads you to believe that it is literally true?
Games often fall into the trap of optimizing for addictiveness which is not quite the same thing as pleasure. Jonathan Blow has talked about this and I think there is a lot of merit in his arguments:
He clarified, “I’m not saying [rewards are] bad, I’m saying you can divide them into two categories – some are like foods that are naturally beneficial and can increase your life, but some are like drugs.”
Continued Blow, “As game designers, we don’t know how to make food, so we resort to drugs all the time. It shows in the discontent at the state of games – Radosh wanted food, but Halo 3 was just giving him cheap drugs.”
…
Blow believes that according to WoW, the game’s rules are its meaning of life. “The meaning of life in WoW is you’re some schmo that doesn’t have anything better to do than sit around pressing a button and killing imaginary monsters,” he explained. “It doesn’t matter if you’re smart or how adept you are, it’s just how much time you sink in. You don’t need to do anything exceptional, you just need to run the treadmill like everyone else.”
I work in the games industry and I see this pattern at work a lot from many designers.
Here’s my alternative explanation for your triads which, while obviously a caricature, is no more so than yours and I think is more accurate: un-educated / academic / educated non-academic.
Essentially your ‘contrarian’ positions are the mainstream positions you are more or less required to hold to build a successful academic (or media) career. Some academics can get away with deviation in some areas (at some cost to their career prospects) but relatively few are willing to risk it. Intelligent, educated individuals who have not been subject to excessive exposure to academic groupthink are more likely to take your meta-contrarian positions.
See also Moldbug’s thoughts on the University.
Your visual system is not evolved to be a colorimeter because that is not actually very useful for the kinds of things we use our visual system for. Thinking that your brain ‘should’ identify the same RGB values as the same ‘colors’ in a different context reflects a confusion about what invariant properties of the world the visual system represents as ‘color’.
Our conscious experience of color is related to the spectral composition of light that reaches our retinas but the RGB value of a pixel is not sufficient to describe the more complex qualia we label ‘colors’. If there is any ‘failure’ captured by this illusion it is a failure to understand what a good job the brain does of extracting useful information from the complex pattern of light that falls onto our retinas rather than a failure of the visual system. A colorimeter is a relatively simple $90 device. Matching the human visual system’s performance on the inverse rendering problem is an unsolved hard AI problem.
The anchoring phenomenon which can result in poor choices in certain circumstances on the other hand does reflect a ‘failure’ in the sense that a generally useful heuristic may lead us to make poor judgements. I’d say it is an example of misapplying a heuristic to a problem it is ill suited for. I think comparing it to the colour constancy phenomenon is misleading and inapt.
Indeed. Maybe start with Rationality is Systematized Winning. Giving a few examples of people failing at rationality is not an effective criticism of rationality.
Taleb’s books are interesting and he makes quite a few good points but the man has a lot of flaws. Eric Falkenstein sums up the case against him quite well.
I would call the police, who would track you down and verify that you were bluffing.
And you’d probably be cited for wasting police time. This is the most ridiculous statement I’ve seen on here in a while.
Was that actually his claim or was he saying that it doesn't necessarily reduce the frequency at which people do it? Clearly the frequency of drug use has gone up since drugs were made illegal. Now perhaps it would have gone up faster if drug use had not been made illegal, but that's rather hard to demonstrate. It's at least plausible that some of the popularity of drugs stems from their illegality, as it makes them a more effective symbol of rebellion against authority for teenagers seeking to signal rebelliousness.
Claiming that criminalizing can’t possibly reduce the frequency at which people do something would be a pretty ridiculous claim. Claiming that it hasn’t in fact done so for drugs is quite defensible.
It sounds interesting but I'm a little wary of your one-line dismissal, without references, of any potential side effects. To the best of my knowledge the function of sleep is still not completely understood and the long term effects of reduced sleep are not known. A suggestion to take any kind of supplement every day for the rest of your life places a fairly high bar on safety. Taking melatonin to overcome jet-lag seems very likely to be safe but I'm more wary of using it on an ongoing daily basis.
Do you have any references to support the claim that there are no long term side effects of daily use?
Do any of the studies on hyperbolic discounting attempt to show that it is not just a consequence of combining uncertainty with something like a standard exponential discounting function? That’s always seemed the most plausible explanation of hyperbolic discounting to me and it meshes with what seems to be going on when I introspect on these kinds of choices.
Most of the discussions of hyperbolic discounting I see don’t even consider how increasing uncertainty for more distant rewards should factor into preferences. Ignoring uncertainty seems like it would be a sub-optimal strategy for agents making decisions in the real world.
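One way to see how this could work, as a minimal sketch: suppose an agent discounts exponentially but is uncertain about the discount (or hazard) rate r. Averaging exp(-r*t) over that uncertainty produces a hyperbolic-shaped curve even though every individual rate implies exponential discounting. The Gamma prior and its parameters below are purely illustrative assumptions:

```python
import math
import random

random.seed(0)

# Assume (illustratively) the unknown rate r ~ Gamma(shape=a, scale=s).
# Then E[exp(-r*t)] = (1 + s*t) ** -a, a generalized hyperbolic discount
# function, even though each fixed rate gives exponential discounting.
a, s = 2.0, 0.05
rates = [random.gammavariate(a, s) for _ in range(200_000)]

for t in [1, 5, 20]:
    mc = sum(math.exp(-r * t) for r in rates) / len(rates)  # Monte Carlo mean
    hyperbolic = (1 + s * t) ** -a                          # closed form
    print(t, round(mc, 3), round(hyperbolic, 3))
```

The Monte Carlo average tracks the hyperbolic closed form at every horizon, while no single exponential curve exp(-r*t) can match it at all three — which is the sense in which rate uncertainty alone can mimic hyperbolic discounting.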
Perhaps, but my problem was more that I mistook a theoretical interest in physics for an interest in theoretical physics.