Just because you two are arguing, doesn’t mean one of you is right.
Maurog: http://forums.xkcd.com/viewtopic.php?f=9&t=14222
I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.
Paul Dirac
Two very different attitudes toward the technical workings of mathematics are found in the literature. Already in 1761, Leonhard Euler complained about isolated results which “are not based on a systematic method” and therefore whose “inner grounds seem to be hidden.” Yet in the 20th century, writers as diverse in viewpoint as Feller and de Finetti are agreed in considering computation of a result by direct application of the systematic rules of probability theory as dull and unimaginative, and revel in the finding of some isolated clever trick by which one can see the answer to a problem without any calculation.
[...]
Feller’s perception was so keen that in virtually every problem he was able to see a clever trick; and then gave only the clever trick. So his readers get the impression that:
Probability theory has no systematic methods; it is a collection of isolated, unrelated clever tricks, each of which works on one problem but not on the next one.
Feller was possessed of superhuman cleverness.
Only a person with such cleverness can hope to find new useful results in probability theory.
Indeed, clever tricks do have an aesthetic quality that we all appreciate at once. But we doubt whether Feller, or anyone else, was able to see those tricks on first looking at the problem. We solve a problem for the first time by that (perhaps dull to some) direct calculation applying our systematic rules. After seeing the solution, we may contemplate it and see a clever trick that would have led us to the answer much more quickly. Then, of course, we have the opportunity for gamesmanship by showing others only the clever trick, scorning to mention the base means by which we first found it.
E. T. Jaynes, “Probability Theory: The Logic of Science”
Then there is the famous fly puzzle. Two bicyclists start twenty miles apart and head toward each other, each going at a steady rate of 10 m.p.h. At the same time a fly that travels at a steady 15 m.p.h. starts from the front wheel of the southbound bicycle and flies to the front wheel of the northbound one, then turns around and flies to the front wheel of the southbound one again, and continues in this manner till he is crushed between the two front wheels. Question: what total distance did the fly cover?
The slow way to find the answer is to calculate what distance the fly covers on the first, northbound, leg of the trip, then on the second, southbound, leg, then on the third, etc., etc., and, finally, to sum the infinite series so obtained. The quick way is to observe that the bicycles meet exactly one hour after their start, so that the fly had just an hour for his travels; the answer must therefore be 15 miles.
When the question was put to von Neumann, he solved it in an instant, and thereby disappointed the questioner: “Oh, you must have heard the trick before!”
“What trick?” asked von Neumann; “all I did was sum the infinite series.”
An anecdote concerning von Neumann, here told by Halmos.
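Both approaches are easy to check numerically. Here is a minimal sketch (Python; the function and parameter names are mine) that sums the fly's zig-zag legs directly and recovers the 15-mile answer:

```python
# Numerical check of the fly puzzle: sum the legs of the fly's zig-zag
# flight and compare with the quick answer (15 mph for 1 hour = 15 miles).
def fly_distance(gap=20.0, bike_speed=10.0, fly_speed=15.0, legs=60):
    a, b = 0.0, gap          # positions of the two bicycles
    fly, direction = a, 1    # fly starts on bicycle a; +1 means "toward b"
    total = 0.0
    for _ in range(legs):
        target = b if direction == 1 else a
        t = abs(target - fly) / (fly_speed + bike_speed)  # time to meet
        total += fly_speed * t
        a += bike_speed * t  # both bicycles keep converging meanwhile
        b -= bike_speed * t
        fly = b if direction == 1 else a
        direction = -direction
    return total

print(fly_distance())  # 15.0, agreeing with the one-line method
```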
If you think something’s supposed to hurt, you’re less likely to notice if you’re doing it wrong.
Paul Graham
A paradox arises when two seemingly airtight arguments lead to contradictory conclusions—conclusions that cannot possibly both be true. It’s similar to adding a set of numbers in a two-dimensional array and getting different answers depending on whether you sum up the rows first or the columns. Since the correct total must be the same either way, the difference shows that an error must have been made in at least one of the two sets of calculations. But it remains to discover at which step (or steps) an erroneous calculation occurred in either or both of the running sums. There are two ways to rebut an argument. We might call them countering and invalidating.
- To counter an argument is to provide another argument that establishes the opposite conclusion.
- To invalidate an argument, we show that there is some step in that argument that simply does not follow from what precedes it (or we show that the argument’s premises—the initial steps—are themselves false).
If an argument starts with true premises, and if every step in the argument does follow, then the argument’s conclusion must be true. However, invalidating an argument—identifying an incorrect step somewhere—does not show that the argument’s conclusion must be false. Rather, the invalidation merely removes that argument itself as a reason to think the conclusion true; the conclusion might still be true for other reasons. Therefore, to firmly rebut an argument whose conclusion is false, we must both invalidate the argument and also present a counterargument for the opposite conclusion.
In the case of a paradox, invalidating is especially important. Whichever of the contradictory conclusions is incorrect, we’ve already got an argument to counter it—that’s what makes the matter a paradox in the first place! Piling on additional counterarguments may (or may not) lead to helpful insights, but the counterarguments themselves cannot suffice to resolve the paradox. What we must also do is invalidate the argument for the false conclusion—that is, we must show how that argument contains one or more steps that do not follow.
Failing to recognize the need for invalidation can lead to frustratingly circular exchanges between proponents of the conflicting positions. One side responds to the other’s argument with a counterargument, thinking it a sufficient rebuttal. The other side responds with a counter-counterargument—perhaps even a repetition of the original argument—thinking it an adequate rebuttal of the rebuttal. This cycle may persist indefinitely. With due attention to the need to invalidate as well as counter, we can interrupt the cycle and achieve a more productive discussion.
Gary Drescher (Good and Real)
I don’t blame them; nor am I saying I wouldn’t similarly manipulate the truth if I thought it would save lives, but I don’t lie to myself. You keep two books, not no books. [Emphasis mine]
The Last Psychiatrist (http://thelastpsychiatrist.com/2010/10/how_not_to_prevent_military_su.html)
On a similar theme:
Fiction often mixes up logical with other concepts … For one thing, authors sometimes say “illogical” when they mean “counter-intuitive.” Correct logic is very often counter-intuitive, however, which is to be expected, as logic is meant to prevent errors caused by relying on intuition.
TV Tropes
See here https://conwaylife.com/forums/viewtopic.php?f=7&t=1234&sid=90a05fcce0f1573af805ab90e7aebdf1 and here https://discord.com/channels/357922255553953794/370570978188591105/834767056883941406 for discussion of this topic by Life hobbyists who have a good knowledge of what is and isn’t possible in Life.
What we agree on is that the large random region will quickly settle down into a field of ‘ash’: small stable or oscillating patterns arranged at random. We wouldn’t expect any competitor AIs to form in this region, since an area of 10^120 will only be likely to contain arbitrary patterns of sizes up to log(10^120), which almost certainly isn’t enough area to do anything smart.
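To put a number on that estimate (a back-of-envelope sketch; reading the log as base 2 is my assumption, on the grounds that each cell of a random field carries one bit):

```python
import math

# A specific k-cell pattern occurs at any given location of a random
# field with probability about 2**-k, so among N locations we expect
# copies of arbitrary patterns only up to roughly k = log2(N) cells.
N = 10**120
print(math.log2(N))  # ~398.6: a few hundred cells at most
```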
So the question is whether our AI will be able to cut into this ash and clear it up, leaving a blank canvas for it to create the target pattern. Nobody knows a way to do this, but it’s also not known to be impossible.
Recently I tried an experiment where I slowly fired gliders at a field of ash, along twenty adjacent lanes. My hope had been that each collision of a glider with the ash would on average destroy more ash than it created, thus carving a diagonal path of width 20 into the ash. Instead I found that the collisions created more ash, and so a stalagmite of ash grew towards the source at which I was creating the gliders.
EDIT: There’s been a development of new GoL tech that might be able to clear ash: https://www.conwaylife.com/forums/viewtopic.php?p=135539#p135539
I can’t find the comment of Eliezer’s that inspired this, but:
The “If-you-found-out-that-God-existed scale of ambition”.
1) “Well obviously if I found out God exists I’d become religious, go to church on Sundays etc.”
2) “Actually, most religious people don’t seem to really believe what their religion says. If I found out that God existed I’d have to become a fundamentalist, preaching to save as many people from hell as I could.”
3) “Just because God exists, doesn’t mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn’t give up until I’d thought about the problem and tried all possible courses of action.”
4) “God is massively powerful. Sure I’d kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God’s power and use it to do good.”
I’m confused because it was Eliezer who taught me this.
(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P.
EDIT: I’m now resisting the temptation to tell Eliezer to “read the sequences”.
...when you do have a deep understanding, you have solved the problem and it is time to do something else. This makes the total time you spend in life reveling in your mastery of something quite brief. One of the main skills of research scientists of any type is knowing how to work comfortably and productively in a state of confusion.
(emphasis mine)
Proposal: Make it look less spammy by making it look official. Something like this:
About the author:
(thumbnail picture linking to profile.) Kaj is a [blank] working at [blank]. (S)he is currently working on a [novel]. If you liked this post you can Flattr Kaj [here].
Where [stuff] denotes an appropriate link. If we create a standard template for such things then everybody (who wants to) can sign their posts in such a way. This will make the site look more professional, and increase flow to everyone’s other projects.
(See also Vladimir_M’s post)
See also here.
I feel it would be useful to develop a standard hour-long intro to LessWrong. People who have given talks could help by providing feedback on what went down well.
For some reason no one does the obvious cancellation to end up in m^2. This even has an intuitive meaning: it’s the cross-section that a line of fuel would need so that, as you travelled along it, you’d be “picking it up” at the same rate you were burning it.
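Here is the cancellation worked through (a minimal sketch; I'm assuming the usual litres-per-100-km unit, and the 8 L/100 km figure is just an example value of mine):

```python
# Fuel consumption is volume per distance, which has dimensions of area.
litres_per_100km = 8.0               # hypothetical example car
volume_m3 = litres_per_100km / 1000  # 1 litre = 0.001 m^3
distance_m = 100 * 1000              # 100 km in metres
area_m2 = volume_m3 / distance_m
print(area_m2)                       # ~8e-08 (m^2)
print(area_m2 * 1e6)                 # ~0.08 (mm^2): a thin thread of fuel
```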
Eliezer uses “Traditional Rationality” to mean something like “Rationality, as practised by scientists everywhere, especially the ones who read Feynman and Popper”. It refers to the rules that scientists follow.
A surely incomplete list of deficiencies:
The practitioners only use it within some small domain.
Maybe they even believe that one can only be rational in this domain.
Designed to work for groups, not for individuals. Telling someone to use Science to become smart is like telling them to use Capitalism to become rich.
It doesn’t tell you how to create hypotheses, only how to test them.
Imprecise understanding of probability and knowledge (which are the same thing).
Bizarre fetishisation of “falsification”.
Failure to concentrate on the important problems.
I knew this would happen. Now I need a charity to tell me which of GiveWell and AidGrade is most effective!
It doesn’t have any content. It’s just a news bulletin (which we would have all seen on TV anyway) with some emotions pinned on.
EDIT: Things rarely stay downvoted for long though. They tend to reach a minimum pretty quickly and then climb back up into the positive.
Nice idea! We can show directly that each term provides information about the next.
The density function of the distribution of the fractional part in the continued fraction algorithm converges to 1/[(1+x) ln(2)] (this limiting distribution is closely associated with, and sometimes also called, the Gauss-Kuzmin distribution). So we can directly calculate the probability of getting a coefficient of n by integrating this from 1/(n+1) to 1/n, which gives -lg(1-1/(n+1)^2), as you say above. But we can also calculate the probability of getting an n followed by an m, by integrating from 1/(n+1/m) to 1/(n+1/(m+1)), which gives -lg(1-1/((mn+1)(mn+m+n+2))). Dividing one by the other gives P(m|n) = lg(1-1/((mn+1)(mn+m+n+2)))/lg(1-1/(n+1)^2), which is rather ugly, but the point is that it does depend on n.
This turns out to be an anticorrelation: high numbers are more likely to be followed by low numbers, and vice versa. The probability of getting a 1 given you’ve just had a 1 is 36.6%, whereas if you’ve just had a very high number the probability of getting a 1 will be very close to 50% (since the distribution of the fractional part is tending to uniform).
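A quick numerical check of these formulas (a minimal sketch; the function names are mine):

```python
import math

def p_coeff(n):
    # P(coefficient = n) under the limiting density 1/((1+x) ln 2)
    return -math.log2(1 - 1/(n + 1)**2)

def p_next(m, n):
    # P(next coefficient = m | current coefficient = n):
    # the joint probability derived above, divided by P(n)
    joint = -math.log2(1 - 1/((m*n + 1)*(m*n + m + n + 2)))
    return joint / p_coeff(n)

print(p_coeff(1))       # ~0.415: unconditional chance of a 1
print(p_next(1, 1))     # ~0.366: a 1 is less likely right after a 1
print(p_next(1, 1000))  # ~0.4998: close to 1/2 after a huge coefficient
```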
Posts and comments containing predictions by Eliezer:
http://lesswrong.com/lw/7rc/particles_break_lightspeed_limit/4wmb
http://lesswrong.com/lw/383/the_trolley_problem_dodging_moral_questions/32lt
http://lesswrong.com/lw/1p5/outside_view_as_conversationhalter/1o6n
http://lesswrong.com/lw/1ss/babies_and_bunnies_a_caution_about_evopsych/1nwa
http://lesswrong.com/lw/1lx/reference_class_of_the_unclassreferenceable/1f3w
http://lesswrong.com/lw/1la/new_years_predictions_thread/1dqf
http://lesswrong.com/lw/1la/new_years_predictions_thread/1dvx
http://lesswrong.com/lw/1dt/open_thread_november_2009/17xb
http://lesswrong.com/lw/7o/on_dollars_utility_and_crack_cocaine/582
http://lesswrong.com/lw/bfo/harry_potter_and_the_methods_of_rationality/6b2x
http://lesswrong.com/lw/b9/welcome_to_less_wrong/5iw6
http://lesswrong.com/lw/7u2/edward_nelson_claims_proof_of_inconsistency_in/4xi5
http://lesswrong.com/lw/1ir/you_be_the_jury_survey_on_a_current_event/1beu
http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/piy
http://lesswrong.com/lw/r/no_really_ive_deceived_myself/vrw
http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2a0x
http://lesswrong.com/lw/1ud/rationality_quotes_march_2010/1p1p
http://lesswrong.com/lw/4sr/rationalist_lord_of_the_rings_fanfiction_newly/3pmc
http://lesswrong.com/lw/44n/convergence_theories_of_metaethics/3i10
http://lesswrong.com/lw/169/the_sword_of_good/3b6e
http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/386m
http://lesswrong.com/lw/2gg/room_for_rent_in_north_berkeley_house/2a8e
http://lesswrong.com/lw/9c/mandatory_secret_identities/6d8
http://lesswrong.com/lw/2l/closet_survey_1/1pp
Posts where Eliezer claims to have predicted something in advance:
http://lesswrong.com/lw/a60/quantified_health_prize_results_announced/5wnd
http://lesswrong.com/lw/37k/rationality_quotes_december_2010/34pq
http://lesswrong.com/lw/2mq/luminosity_twilight_fanfic_discussion_thread/2up1
http://lesswrong.com/lw/20/the_apologist_and_the_revolutionary/15o
Not all of these are testable. Other people can sort them though, because I’ve been looking at his comments for about five hours and my brain has turned to mush.