I’m from Bashkortostan, but now I live in Dolgoprudny, Moscow Oblast because, you know, I’m a student and my university is located there.
BT_Uytya
It reminded me of the elementary particles of monarchy (the “kingons”) from Terry Pratchett.
Since each kingdom can have one and only one king, upon the death of a king his heir becomes the new king instantly. So, if you carefully torture a king, you can use those particles to send a message faster than the speed of light.
Also, there is an article by Dawes, Faust and Meehl. Despite being published seven years before House of Cards, it contains some information not covered in chapter 3 of that book.
For example, the awesome result by Goldberg: linear models of human judges were more accurate than the judges themselves; in cases of disagreement, the models were more often correct than the very judges on whom they were based.
lesswrong.ru domain for translation project?
It is useful to me. Thank you!
Moscow meetup: Saturday 6 PM
Meetup : Moscow 11 February meetup
I believe Eliezer’s point wasn’t that every algorithm should be de-randomized. His point was that it is impossible to extract useful work from noise; useful work is lawful. It is more a philosophical statement than practical advice. To admire the beauty of entropy is to worship your own ignorance, and so on.
Putting aside computational complexity and space-time tradeoffs, you can easily see it: if something can, in principle, be done with the help of a random number generator, then the same result can be obtained without one.
Randomness is a void; optimization processes close that gap with ordered patterns. If you don’t know how to fill in that hole, it means that you don’t have enough information about the problem; it doesn’t mean that entropy is a magic key to it.
And yes, the cost of obtaining that information may render the derandomized algorithm useless in practice; for example, selecting the “true median” (a median can be found in O(n) time) as the pivot in quicksort will slow it down.
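A minimal sketch of that trade-off. Note that this is illustrative only: the “true median” is found here naively by sorting, rather than with the O(n) median-of-medians selection algorithm, so the overhead is exaggerated, but the point stands either way.

```python
import random

def quicksort_random(xs):
    # Randomized pivot: expected O(n log n), and no input is
    # adversarial against a random choice.
    if len(xs) <= 1:
        return xs
    p = random.choice(xs)
    return (quicksort_random([x for x in xs if x < p])
            + [x for x in xs if x == p]
            + quicksort_random([x for x in xs if x > p]))

def quicksort_median(xs):
    # Derandomized: a true-median pivot guarantees balanced splits,
    # but the cost of computing it (here naively, via sorting) makes
    # the randomized version faster in practice.
    if len(xs) <= 1:
        return xs
    p = sorted(xs)[len(xs) // 2]
    return (quicksort_median([x for x in xs if x < p])
            + [x for x in xs if x == p]
            + quicksort_median([x for x in xs if x > p]))
```

Both produce the same lawful result; the randomness buys speed, not correctness.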
“Philosophical statements” about AI algorithms are useful: not for the algorithms themselves, but for AI researchers.
An AI researcher shouldn’t be mistaken about the Mysterious Power Of The Entropy. The “We don’t understand human intelligence, hence if I wish to create an artificial one, I must not understand it either” attitude is wrong.
So if I were to summarize this post, the summary would be something like “Noise can’t be inherently good; if entropy magically solves your problem, it means that there is some more lawful, non-magical way to solve it.”
I think it is part of a more general principle: “Never leave behind parts that work when you haven’t the slightest idea why they work.”
Two days ago Scott Aaronson commented on this topic. At the moment, his answer has as many upvotes as Ron Maimon’s (formerly the most upvoted one).
Scott enjoyed the sequence and thinks that it is “exactly what you should and must do if your goal is to explain QM to an audience of non-physicists”. However, he offers two criticisms of Yudkowsky, both connected to Eliezer’s claim that the MWI vs. CI debate is completely one-sided.
Hello.
I think you aren’t aware of the already existing Russian translation project. You can view it here.
What about a line of retreat for the psychologists?
A presentation about Cox’s Theorem made for my English class
It’s an OpenDocument Presentation. I haven’t converted it to .ppt because LibreOffice doesn’t seem to be able to do the conversion correctly: for some reason it makes some of the pictures disappear, and the thought bubbles come out weird. I’ve restored the missing pictures, but I’m not sure that there are no more surprises.
But the text seems to be fine, so why not?
https://docs.google.com/file/d/0BwJocL_GupTsVlNJdGVUMzI1Q1U/edit
Fixed, thanks.
I’ve tried to do something similar with odds once, but the assumption that (AB|C) = F[(A|C), (B|AC)] made me give up.
Indeed, one can calculate O(AB|C) given O(A|C) and O(B|AC), but the formula isn’t pretty. I tried to derive that function directly but failed. It was not until I appealed to the fact that O(A) = P(A)/(1-P(A)) that I managed to infer this unnatural equation relating O(AB|C), O(A|C) and O(B|AC).
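For reference, the detour through probabilities goes like this (writing $a = O(A|C)$ and $b = O(B|AC)$):

```latex
P(A|C) = \frac{a}{1+a}, \qquad P(B|AC) = \frac{b}{1+b},
\qquad
P(AB|C) = P(A|C)\,P(B|AC) = \frac{ab}{(1+a)(1+b)},
\]
so
\[
O(AB|C) = \frac{P(AB|C)}{1 - P(AB|C)}
        = \frac{ab}{(1+a)(1+b) - ab}
        = \frac{ab}{1 + a + b}.
```

So the product rule for odds, $O(AB|C) = \dfrac{O(A|C)\,O(B|AC)}{1 + O(A|C) + O(B|AC)}$, exists, but it is far less natural than $P(AB|C) = P(A|C)\,P(B|AC)$.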
And this use of classical probabilities, of course, completely defeats the point of getting classical probabilities from the odds via Cox’s Theorem!
Did I miss something?
By the way, are there any other interesting natural rules of inference besides odds and log odds which are isomorphic to the rules of probability theory? (Judea Pearl mentioned something about the MYCIN certainty factor, but I was unable to find any details.)
EDIT: You can view the CF combination rules here, but I find them very difficult to digest. Also, what about the initial assignment of certainty?
EDIT2: Never mind, I found an adequate summary ( http://www.idi.ntnu.no/~ksys/NOTES/CF-model.html ) of the model and a PDF ( http://uai.sis.pitt.edu/papers/85/p9-heckerman.pdf ) on probabilistic interpretations of CFs. It seems to be an interesting example of a not-obviously-Bayesian system of inference, but it’s not exactly the example you would give to illustrate the point of Cox’s theorem.
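For what it’s worth, my reading of the EMYCIN combination rule for two certainty factors can be sketched like this (treat it as my interpretation of the linked notes, not an authoritative implementation):

```python
def cf_combine(x, y):
    # Combine two certainty factors x, y in [-1, 1] a la EMYCIN.
    # Undefined when one factor is +1 and the other is -1
    # (full certainty in both directions is a contradiction).
    if x >= 0 and y >= 0:
        return x + y * (1 - x)          # both confirm
    if x <= 0 and y <= 0:
        return x + y * (1 + x)          # both disconfirm
    return (x + y) / (1 - min(abs(x), abs(y)))  # mixed evidence
```

Note that, unlike the odds rules, this is commutative but has no clean probabilistic semantics, which is exactly why it makes a poor illustration of Cox’s theorem.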
I took the survey.
Guys, you seriously need to start using the metric system, or at least include the necessary numbers in meters. Going to Google twice in order to calculate the relevant numbers was… frustrating.
(By the way, I have never donated to any charity before, but I swore in a grand manner that it will be among the first five things I do with my PayPal account when I get one.)
Nate Silver will do an AMA on Reddit on Tuesday
Yes, but his Bayesian ideology makes him especially interesting for this community.
Hello, and good day.
My name is Victor, and I’m 19. I’m a computer science student from Russia (so my English is far from perfect, and there will probably be some missing articles; please excuse me).
There wasn’t any bright line between rationalist!Victor and ordinary!Victor. If I remember correctly, five years ago I was interested in paranormal phenomena like UFOs, parallel worlds and the Bermuda Triangle (I’m not sure I truly believed in them; probably I just had fun thinking about them, though I might have confessed the cached thought about scientists not knowing important things about the world), and at the same time I liked reading pop-science books. Then I realized that there is beauty, honesty and courage in the scientific worldview, and shortly thereafter I became a person from the Light Side: not because science was true, but because it was fun.
But at least I rejected the Bermuda Triangle. I was too honest to leave inconsistencies in my pool of beliefs; so long, pseudoscience!
Maybe around the same time I discovered the concept of the utility function and the blog of a psychologist arguing that there is nothing wrong with egoism. Something clicked in my mind; the explanation of human behaviour was beautiful in its simplicity, and there were some interesting implications of it. Then came Dawkins and the realization that evolution is just a natural continuation of the laws governing non-organic matter. Evolution was fun, and it was also true. I became a Guardian Of The Evolution, and I fought superstitions. It was a point of no return: it was impossible to ever defend telepathy again (why aren’t there any telepathic wolves?).
There was a moment of marvel when I realized that there wasn’t any reason to expect intellectual feats from a naked ape living in a town; our brain wasn’t adapted to the current environment, but it still works, and it works much better than you could reasonably expect. Intelligence is fragile, and humanity is the underdog I should root for. At that time I already knew about cognitive biases, but my feelings towards the topic changed after this insight.
I don’t remember when I started reading LW. I might have learned about utility functions here, but I’m not sure. LW changed me gradually. Over the course of two or three years I noticed some small changes: I started admiring the scientific method, I understood the power of intelligence, sometimes I withdrew from an argument because there wasn’t any disagreement about anticipated experience in it, et cetera.
I don’t know where to draw the line between my “non-rational age” and my “rational age”. But I sure as hell am with you guys now.