Taken.
Randaly
This is a combative comment which fails to back up its claims.
how surely only noble and good people ever sue over libel
if you really believe lawsuits are so awesome and wonderful
He did not say this. This is not reasonable for you to write.
you can count on one hand the sort of libel lawsuit which follows this beautiful fantasy.
This is not true. This is obviously not true. A successful and important libel case (against Giuliani) was literally headline news this week. You can count more than five such cases just by looking at similar ones: Dominion v Fox; Smartmatic v Fox; Coomer v Newsmax; Khalil v Fox; Andrews v D’Souza; and Weisenbach v Project Veritas. This is extremely unreasonable for you to say.
They are cynically used to burn money based on the fact that rich people have a lot more money than poor people
Nonlinear certainly doesn’t have more money than the EA community. Nonlinear plausibly (?) doesn’t have more money than Lighthouse; at a minimum, it’s not a significant difference.
which they would generally win, BTW
<argument needed>
It’s very unclear to me whether Lighthouse would win; your confidence here seems unreasonable; but more importantly, “no, that’s not true” is just not a useful thing to say here. (You’re responding to a post that did have many good citations of cases; seems like most people think it’s plausible they’d lose.)
And a lawsuit is a way to destroy someone, not counter-argue them.
In the most blandly literal sense possible, lawsuits are arguments.
what goes on inside a court has only a questionable relationship to counterargument to begin with, which is why a decent chunk of rationality is about explaining why legal norms are so inappropriate for rational thinking
You have again not given any argument for this.
The rules under which lawsuits proceed are deliberately set up in an attempt to get at the truth. Specific requirements- from the prohibition on hearsay, to the requirement of a neutral and unbiased jury, to the requirement that both sides be able to examine and respond to evidence and arguments- are both truthseeking and not generally followed outside of the court system.
“My ingroup’s internet discussions are so great that they’re not only better than the outside society’s way of determining contested questions, they invalidate its use” is a dangerously culty belief. I think it is particularly bad in this context, since the initial post had specific failures that the legal system would have handled correctly. (eg not giving Nonlinear time to respond; it’s possible that I’ll feel like the eventual outcome here is reasonable, IDK, but the initial post had clear issues.) But at a minimum, if you’re saying that people should be “shunned, demonized, and criticised” (!), you really ought to say specifically why/how the courts would be unreliable in this case.
“Any sufficiently analyzed magic is indistinguishable from SCIENCE!”
~Girl Genius
“Acausal” is used as a contrast to Causal Decision Theory (CDT). CDT states that decisions should be evaluated with respect to their causal consequences; ie if there’s no way for a decision to have a causal impact on something, then it is ignored. (More precisely, in terms of Pearl’s Causality, CDT is equivalent to having your decision conduct a counterfactual surgery on a Directed Acyclic Graph that represents the world, with the directions representing causality, then updating nodes affected by the decision.) However, there is a class of decisions for which your decision literally does have an acausal impact. The classic example is Newcomb’s Problem, in which another agent uses a simulation of your decision to decide whether or not to put money in a box; however, the simulation took place before your actual decision, and so the money is already in the box or not by the time you’re making your decision.
“Acausal” refers to anything falling into this category: decisions whose impacts do not result causally from your decisions or actions. One example is, as above, Newcomb’s Problem; other examples include:
Acausal romance: romances where interaction is impossible
The Prisoner’s Dilemma, or any other symmetrical game, when played against the same algorithm you are running. You know that the other player will make the same choice as you, but your choice has no causal impact on their choice.
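A minimal sketch of that last example (the payoff numbers are just the textbook Prisoner’s Dilemma values, chosen for illustration): both “players” are literally the same function, so the outcome is perfectly correlated even though neither call causally affects the other.

```python
# Prisoner's Dilemma against a copy of your own algorithm.
# Standard illustrative payoffs: mutual cooperation -> 3 each;
# mutual defection -> 1 each; defecting on a cooperator -> 5 vs 0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_algorithm():
    """Whatever this returns, the opponent -- running the same code --
    returns the same thing. Changing 'C' to 'D' here changes *both*
    players' moves, even though neither call causes the other."""
    return "C"

my_move = my_algorithm()     # your instantiation of the algorithm
their_move = my_algorithm()  # the opponent's instantiation of the same algorithm
print(PAYOFFS[(my_move, their_move)])  # (3, 3); editing the return to "D" yields (1, 1)
```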
There are a number of acausal decision theories: Evidential Decision Theory (EDT), Updateless Decision Theory (UDT), Timeless Decision Theory (TDT), and Ambient Decision Theory (ADT).
In EDT, which originates in academia, causality is completely ignored, and only correlations are used. This leads to the correct answer on Newcomb’s Problem, but fails on others- for example, the Smoking Lesion. UDT is essentially EDT, but with an agent that has access to its own code. (There’s a video and transcript explaining this in more detail here).
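To make the contrast concrete, here’s a minimal sketch of the two calculations on Newcomb’s Problem. (The 0.99 predictor accuracy is my assumption for illustration; the $1,000/$1,000,000 amounts are the usual ones from the problem statement.)

```python
# Newcomb's Problem: box A always holds $1,000; box B holds $1,000,000
# iff the predictor foresaw you one-boxing. Assumed predictor accuracy: 0.99.
ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

# EDT conditions on your action: choosing to one-box is strong evidence
# that the predictor filled box B.
edt_one_box = ACCURACY * BIG                # P(B full | one-box) * $1M
edt_two_box = (1 - ACCURACY) * BIG + SMALL  # P(B full | two-box) * $1M + $1k

# CDT treats box B's contents as causally fixed by decision time:
# whatever P(B full) is, two-boxing adds $1,000 on top.
p_full = 0.5  # any prior works; CDT's ranking doesn't depend on it
cdt_one_box = p_full * BIG
cdt_two_box = p_full * BIG + SMALL

print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")  # EDT one-boxes
print(f"CDT: one-box {cdt_one_box:,.0f} vs two-box {cdt_two_box:,.0f}")  # CDT two-boxes
```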
TDT, like CDT, relies on causality instead of correlation; however, instead of having agents choose a decision that is then implemented, it has agents first choose a platonic computation that is instantiated in, among other things, the actual decision maker; it is also instantiated in every other algorithm that is acausally equal to the decision maker’s algorithm, including simulations, other agents, etc. Given all of these instantiations, the agent then chooses the utility-maximizing algorithm.
ADT...I don’t really know, although the wiki says that it is a “variant of updateless decision theory that uses first order logic instead of mathematical intuition module (MIM), emphasizing the way an agent can control which mathematical structure a fixed definition defines, an aspect of UDT separate from its own emphasis on not making the mistake of updating away things one can still acausally control.”
See here.
MIRI’s journal publications:
Carl Shulman and Nick Bostrom (2012). How Hard Is Artificial Intelligence? Evolutionary Arguments and Selection Effects. Journal of Consciousness Studies 19 (7–8): 103–130.
Kaj Sotala (2012). Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1): 275-291.
Kaj Sotala and Harri Valpola (2012). Coalescing Minds: Brain Uploading-Related Group Mind Scenarios. International Journal of Machine Consciousness 4 (1): 293–312.
(Bostrom and Shulman both work for FHI, and Bostrom doesn’t work for MIRI. I’m not sure how mainstream the International Journal of Machine Consciousness is. ETA: It was one of the original journals Luke mentioned as targets, so I assume it qualifies.)
MIRI also has a larger number of CS conference papers, which are (it’s claimed) higher status in CS than journal publications; Luke was presumably biased towards journals because he had less of a background in CS.
You’re taking a very inside-view approach to analyzing something that you have no direct experience with. (Assuming you don’t.) This isn’t a winning approach. Outside view predicts that 90% of startups will fail.
Startups’ high reward is associated with high risk. But most people are risk averse, and insurance schemes create moral hazard.
Would you play a lottery with no stated odds?
Imagine another thought experiment—you’re asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?
Of course, you don’t know, because you’re not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead on average. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at winning $100, since you’d expect to win $50 on average, coming out ahead by $48 per ticket in expectation. With a sufficiently high reward, even a one-in-a-million chance is worth it. Pay $2 for a 1/1M chance of winning $1B, and you’d expect to come out ahead by $998 in expectation.
But $2 for the chance to win $100, without knowing what the chance is? Even if you had some sort of bounds, say you knew the odds had to be at least 1/150 and at most 1/10 (though you could be off by a little), would you accept that bet?
Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.
The reason not to play a lottery is that it is a zero-sum game in which the rules are set by the other agent; since you know that the other player’s goal is to make a profit, you should expect the rules to be set up to ensure that you lose money. Obviously, reality is not playing a zero-sum game with humanity; if one chooses a different expected payout structure- say, you have no idea what the specific odds are, but you know that your crazy uncle Bill Gates is giving away potentially all his money to family members in a lottery with $2 tickets- then obviously it makes sense to play.
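For concreteness, here’s a minimal sketch of the expected-value arithmetic, with the numbers taken directly from the quoted example above:

```python
def expected_gain(ticket_price, win_prob, prize):
    """Average profit per ticket: expected winnings minus the price."""
    return win_prob * prize - ticket_price

print(expected_gain(2, 0.5, 100))    # 48.0   -- the coin-flip lottery
print(expected_gain(2, 1e-6, 1e9))   # 998.0  -- the 1-in-a-million shot at $1B
print(expected_gain(2, 1/150, 100))  # ~-1.33 -- at the pessimistic bound: don't play
print(expected_gain(2, 1/10, 100))   # 8.0    -- at the optimistic bound: play
```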
I agree- the answer given in the FAQ isn’t a complete and valid response to the critics of the Singularity. But it was never meant to be; it was meant to be “short answers to common questions.” The SI’s longer responses to critics of the Singularity are mostly in peer-reviewed research; for example, in:
Luke Muehlhauser and Anna Salamon (2012). Intelligence Explosion: Evidence and Import. In The Singularity Hypothesis, Springer. (http://singularity.org/files/IE-EI.pdf)
Carl Shulman and Nick Bostrom (2012). How Hard Is Artificial Intelligence? In Journal of Consciousness Studies, Imprint Academic. (http://www.nickbostrom.com/aievolution.pdf)
Chalmers, D. (2010). “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17:7-65. (http://consc.net/papers/singularity.pdf)
Sotala, Kaj (2012) Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1), 275-291. ( http://kajsotala.fi/Papers/DigitalAdvantages.pdf )
Of course, now I feel pretty bad for linking you to several hundred pages of arguments- which are often overlapping and repetitive, and which still don’t represent everything the SI has written on the subject (or even a majority, I think). If you have any specific criticisms of the SI’s ideas, it might be faster for you to post them here.
I feel even (slightly) worse, because none of the SI’s arguments have reached the level of evidence you’re comparing it to- eg Givewell’s analyses, and the disproofs of spoon-bending and new reading methods. But the SI isn’t capable of providing evidence that strong, because whether or not its claims were accurate, they would still be predictions of an abrupt future change, as opposed to claims about the efficacy of past actions. I do think that, where possible, the SI has tried to be very transparent- for example, in my opinion the SI’s last yearly progress report was around as thorough as Givewell’s last yearly progress reports- part 1, part 2, part 3, part 4.
(On a side note, it might interest you that Holden Karnofsky, co-founder of Givewell, also analyzed the SI and came to a very negative conclusion- posted here. His post is currently the most upvoted post of all time on LessWrong.)
The idea of programming as a gear is still controversial, but the specific hypothesized gear is that people who can build a consistent model of a language will be successful at programming, whereas those who can’t won’t be. This was tested by giving students a test on Java before they had been taught Java; their answers were checked, not for correctness, but for consistency. See “The Camel Has Two Humps.” Even then the test is far from perfectly predictive- ~28% of the consistent group failed, ~19% of the inconsistent group passed, and students’ membership in the groups, as assigned by the test, shifted over time. If you do want to test this, you can reuse the original test.
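For illustration, here’s a toy version of that grading idea (my own construction, not the paper’s actual rubric): tag each answer with the mental models of assignment it’s compatible with, and call a student “consistent” if a single model explains every answer.

```python
# Toy consistency check (illustrative only): answer_models[i] is the set
# of mental models of assignment compatible with the student's i-th answer.
def is_consistent(answer_models):
    # A student is "consistent" if at least one model explains every answer.
    return bool(set.intersection(*answer_models))

# Every answer fits the "copy right-to-left" model:
print(is_consistent([{"copy_r2l"}, {"copy_r2l", "swap"}, {"copy_r2l"}]))  # True
# No single model fits all answers:
print(is_consistent([{"copy_r2l"}, {"swap"}, {"copy_l2r"}]))              # False
```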
However, there have been numerous attempted replications, none of which succeeded- though none found a negative result either. They were generally either confounded by the presence of experienced programmers, set up poorly, or not statistically significant. To quote the original authors:
When we began this work we had high hopes that we had found a test that could be used as an admissions filter to reduce the regrettable waste of human effort and enthusiasm caused by high failure rates in universities’ first programming courses. We can see from the experiments reported above that our test doesn’t work if the intake is already experienced, and in experiment 3 didn’t work at all. We cannot claim to be separating the programming goats from the non-programming sheep: experiment 3 demolishes the notion that consistent subjects will for the most part learn well, and others for the most part won’t. And even in the most encouraging of our results, we find a 50% success rate in those who don’t score C0 or CM2 [ie those who were inconsistent]. None the less, some of our results indicate that there may be something going on with consistency.
HT Gwern
I recommend that you change Eliezer’s profile to first mention that he is a Research Fellow at the SI, as writing fan fiction and a blog are not high-status.
The US bombing escalation in Vietnam.
Prior to the escalation in bombing in the Vietnam War, the Americans wargamed potential North Vietnamese responses in the Sigma I and II wargames. Regional experts were able to almost exactly predict the North Vietnamese response, and working-level officers from the State and Defense departments, and the CIA, predicted the actual outcome. William Bundy, the guy running the games, thought the conclusion was “too harsh,” and the wargames never influenced actual policymakers. (See H. R. McMaster’s Dereliction of Duty.)
(On a related note, something similar occurred with the Millennium Challenge 2002: the Red team used unconventional tactics to pull off unexpected early victories against the simulated US forces, so the general running the war game ‘refloated’ the sunk ships, then forced both sides to use prescripted plans of action, ignoring the unexpected initial events.)
The State Department’s Policy Planning Council published a separate study in 1964 which essentially also concluded that bombing wouldn’t work. Walt Rostow, its chairman, disagreed, so he worked to suppress the study; it did eventually influence policymakers, but only after the war had escalated, and even then its conclusions had to be bootlegged out of the council. (See David Halberstam’s The Best and the Brightest.)
There’d been some discussion of why HPMOR!Hogwarts was founded around 1200, as opposed to canon Hogwarts, which was “established around the 9th or 10th century.” This chapter seems to make the reason clear: the founders were near-contemporaries of the Peverells, who kept their canon birthdates. Godric Gryffindor in particular seems likely to have been involved.
silently, making less noise than the dead leaves slithering along the pavement...
This is a quote from canon, from a scene where Harry is nearly possessed by Voldemort; it’s Voldemort’s memory of the night he died. It’s italicized, as with Harry’s internal conversations, suggesting that this is the part of Voldemort in Harry remembering the night he died. (?)
I think this paper is relevant; on the whole, opinion polling indicated that, aside from a brief period after the moon landing, NASA was never popular, and majorities almost always supported a smaller NASA. Meanwhile, there’s still strong public support for some things- eg the Space Shuttle or a space station.
“Test Your God.… Test[s] cannot harm a God of Truth, but will destroy fakes. Fake gods refuse test[s].”
~ Dr. Gene Ray
His claim was:
(a) Everybody knew that different ethnicities had different brain sizes
(b) It was an uncomfortable fact, so nobody talked about it
(c) Now nobody knows that different ethnicities have different brain sizes
Pearl also put up the entire first edition of the book online, here.
From another generator:
“I’m going to solve metaethics.” “I’m going, you’re going to found the Society for infanticide.”
“”Snow is white” is failing to solve psychology.”
“Wait, wait, “this is white” is a more technical explanation?”
“My utility function includes a semantic stopsign.”
“If keeping my current job has little XML tags on it that say the Least Convenient Possible World...”
“Sure, I’d take over the sanity waterline.”
“I’ll be the symbol with ice cream trees.”
“So after we take over the alternative universe that is the Least Convenient Possible World...”
“I want to tile the sanity waterline with the unit of a thing.”
Hey Eliezer- if you’re planning to upload your Author’s Notes to the LW wiki, it might be helpful to post that intention to your profile on Fanfiction.net. I know of at least 3 groups independently trying to collect all of the AN’s themselves:
Your discussion of Skunk Works is significantly wrong throughout. (I am not familiar with the other examples.)
The P-80 was introduced in 1945; the US almost immediately decided to replace it with the F-86, introduced in 1949. The phrase “operationally used by the air force for 40 years” is only technically true because, rather than scrap the existing P-80s, the Air Force modified them slightly and used them as training aircraft.
This is wrong. Their stealth ship wasn’t able to “blast out of the sky a sizable soviet attack force”, or to do literally anything else; it was just a testbed for exploring automation and stealth hulls, incapable of performing any operational role. Skunk Works didn’t actually successfully build anything here! (The stealth design was later used on the Zumwalt class of destroyers, which had unrelated issues.)
Not sure where he got the 300 crew figure from? Even beyond the fact that the Sea Shadow wasn’t actually designed to do anything (and so would need a larger, more specialized crew to do so), the Sea Shadow was only a tenth of the size of the frigates it’s being compared with. (The Navy has since tried to use similar automation to reduce the crew of newer ships; the Gerald R. Ford class of aircraft carriers represents the realistically achievable reduction in crew via automation: 3,200 → 2,600 or so, i.e. ~20%.) (Note that this also trivially falsifies the claim that the Navy rejects automation to reduce crew sizes?)
“The Navy rejected our ship design because it was totally too good, you just gotta believe me, even though we’ve never ever successfully produced ships” is an insane thing for you to accept with zero evidence.
This is totally wrong. You are again putting forth the insane claim that people rejected Skunk Works’s technology because of how good it was, with zero actual evidence of why the SR-71 wasn’t mass-produced.
The SR-71 (Mach 3.3, 85,000 feet) wasn’t significantly better than planned contemporary planes like the B-70 (Mach 3.1, 77,350 feet) or the F-108 (Mach 3, 80,100 feet). Both of those planes were cancelled, because the development of missiles meant that flying higher and faster was no longer a viable strategy; since then, military planes like the F-18 (Mach 1.8, 50,000 feet) and the F-35 (Mach 1.6, 50,000 feet) have often been slower and lower-flying. This is a deliberate choice: unmanned, one-way missiles can always go faster than a manned plane. Most of the SR-71’s advantages come not from it being inherently better than any possible missile, but from it being faster and higher than the planes early SAMs were intended to target; mass production and usage in other roles would inherently make this go away.
(To be clear, Skunk Works was successful and built many things; it’s specifically your claims and examples that are wrong. In particular, you left out most of their successful planes, like the F-117.)