I read the website before the book existed. Actually, I argued that it should be turned into a book, because books in general have higher status than websites. Then I read the book, and translated it into Slovak.
My opinion on reading the comments is… they are interesting, but the added value per minute spent is significantly lower than for reading the book. (Some of the comments are awesome, but most are not, and there are a lot of them.) Thus, if you have anything useful to do, reading the comments after you have read the book is probably a waste of time. (Perhaps, if you have specific questions or objections to specific chapters, you should only read the comments in those chapters.)
Your time would probably be better spent reading high-karma articles which are not part of the book (is there a way to see the highest-karma articles? if not, look here), and… you know, going outside and actually doing things.
> there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like “you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random”.
This is true, but it is relevant only for situations where all possible data are equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)
When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn’t seem relevant to me. Saying that an AI is not “truly intelligent” unless it can handle the impossible task of skillfully navigating completely random universes… that’s trying to win a debate by using silly criteria.
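To make this concrete, here is a small illustrative Python sketch (all names and numbers are mine, not from the book): a trivial hill-climber beats random search on an objective that has structure, while on objectives drawn uniformly at random (the kind of functions that dominate the No Free Lunch average), the two perform about equally well.

```python
import random

def hill_climb(f, n_bits=30, steps=300):
    """Greedy bit-flip hill climbing: keep a candidate, accept non-worsening flips."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best = f(x)
    for _ in range(steps):
        y = x[:]
        y[random.randrange(n_bits)] ^= 1   # flip one random bit
        value = f(y)
        if value >= best:
            x, best = y, value
    return best

def random_search(f, n_bits=30, steps=300):
    """Evaluate random bit strings, keep the best value seen."""
    return max(f([random.randint(0, 1) for _ in range(n_bits)]) for _ in range(steps))

# A "structured" objective: the number of ones (smooth, like most real-world problems).
def count_ones(x):
    return sum(x)

# An "unstructured" objective: a fresh random lookup table, i.e. the kind of
# function that dominates the average taken in the No Free Lunch Theorem.
def make_random_table():
    table = {}
    def f(x):
        return table.setdefault(tuple(x), random.random())
    return f

trials = 100
avg = lambda vals: sum(vals) / len(vals)
print("structured objective: hill climbing",
      avg([hill_climb(count_ones) for _ in range(trials)]),
      "vs random search", avg([random_search(count_ones) for _ in range(trials)]))
print("random objectives:    hill climbing",
      avg([hill_climb(make_random_table()) for _ in range(trials)]),
      "vs random search", avg([random_search(make_random_table()) for _ in range(trials)]))
```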
The effectiveness of truth and lying depends on the environment. For example, imagine a culture where political debates on TV are immediately followed by impartial fact checking. Or a culture where politicians have to make predictions about future events (“I don’t know” also counts as a valid prediction), and these are later publicly reviewed and evaluated. And, importantly, where the citizens actually care about the results. I suppose such an environment would bring more truth into politics.
But this is a chicken-and-egg problem, because changing the environment is kinda what politics is about. Also, there are many obvious counter-strategies, such as having loyal people do the “fact checking” in your tribe’s favor. (For example, when a politician says something that is approximately correct, like saying that some number is 100 when in reality it is 96, it would be evaluated as “a correct approximation → TRUE” when your side does it, or as “FALSE” when your opponent does it. You could evaluate your opponents’ metaphorical statements literally, but do it the other way round for your allies; etc.)
Could someone be completely honest and still be effective?
That mostly depends on other people, such as voters (whether they bother to check facts) and the media (whether they report on the fact that your statements are more likely to be true). If instead the media decide to publish a completely made-up story about you, and most readers accept the story uncritically, you are screwed.
(There are also ways to hurt 100% honest people without lying about them, such as making them publicly answer a question where the majority of the population believes a wrong answer and gets offended by hearing the correct one. “Is God real?“)
I agree with your disclaimers that not all people go crazy when they start talking politics, and that the predicted bad things do not always happen. The problem is, I already see how most people would react to a text saying that sometimes, some people go crazy when talking politics: “Meh, ‘some people’, that definitely doesn’t apply to me. Now let me start screaming about why unconditionally supporting my faction is the most important thing ever, and why everyone who doesn’t join us is inherently evil and deserves to die painfully.” Or they just keep inserting their political beliefs into every other discussion endlessly, because “hey, my political beliefs are rational (unlike the political beliefs of those idiots who disagree with me), and this is a website about rationality, therefore it is important for people here to discuss and accept my political beliefs. If they disagree with me, they fail at rationality forever.”
We tried to debate politics here; it usually failed. Apparently, believing in one’s own rationality is not enough.
(There is also another way political topics can destroy rational debate: they attract people who don’t really care about the main topic of this website, but only came here to fight for a specific political belief.)
From my perspective, the main problem of “rationality vs politics” is that in a political fight, being transparent about your beliefs is usually not a winning strategy. (Saying “I am 80% sure I am right” is not going to bring masses to your side. Neither is replying to slogans and tweets with peer-reviewed articles full of numbers.) If you had a completely honest debate about politics, it would have to be done in private, because the participants would have to write things that could ruin their political careers if quoted publicly. (Imagine things like: “Yeah, I know that this specific important person in our party is a criminal, or that this specific popular argument is actually a lie, but I still support them because the future where they prevail seems like a lesser evil compared to the alternatives, for the following reasons: …“) So you get a multiplayer Prisoner’s Dilemma with high motivation to defect, because breaking the rules of the game in favor of doing the right thing (which is how acting on a strong political belief feels from inside) seems like the right thing to do.
LW karma obviously has its flaws, per Goodhart’s law. It is used anyway, because the alternative is having other problems, and for the moment this seems like a reasonable trade-off.
The punishment for “heresies” is actually very mild. As long as one posts respected content in general, posting a “heretical” comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose only purpose here is to post “heresies”. Also, LW karma does not prevent anyone from posting “heresies” on a different website. Thus, people can keep positive LW karma even if their main topic is talking about how LW is fundamentally wrong, as long as they can avoid being annoying (for example, by posting a hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).
Blackmail typically attacks you in real life, i.e. you can’t limit the scope of impact. If losing an account on website X were the worst possible outcome of one’s behavior on website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the difference between norms in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite unlike LW karma.
> If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term.
To me this feels like Zvi is talking about some impersonal universal law of economics (whether such a law really exists is debatable), and you are making it about people (“the bad guys”, “gangsters”) and their intentions, as if we could get a better outcome simply by replacing the government or something.
I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation… but in a prisoner’s dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps… in the hypothetical state of perfect information… the only stable equilibrium is when you spend so much on defense that there is almost nothing left to steal from you.
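As a toy back-of-the-envelope sketch of the “sooner or later someone defects” part (the probabilities here are invented, and the players are assumed independent):

```python
# A toy calculation (numbers made up, just to illustrate the point): even a
# small per-player, per-round chance of defection makes "someone eventually
# defects" close to certain once there are many players and many rounds.

p_defect = 0.01   # assumed probability that a given player defects in a given round

for n_players in (2, 10, 100):
    for n_rounds in (1, 10, 100):
        p_someone = 1 - (1 - p_defect) ** (n_players * n_rounds)
        print(f"{n_players:3d} players, {n_rounds:3d} rounds: "
              f"P(at least one defection) = {p_someone:.3f}")
```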
And there is nothing special about the “bad guys” other than the fact that, statistically, they exist. Actually, if the hypothesis is correct, then… in the hypothetical state of perfect information… the bad guys would themselves end up in the very same situation, having to spend almost all successfully stolen resources to defend themselves against theft by other bad guys.
To defend yourself from ordinary thieves, you need the police. The police need some money to be able to do their job. But what prevents them from abusing their power to take more from you? So you have the government to protect you from the police, but the government also needs money to do its job, and it is also tempted to take more. In a democratic government, politicians compete against each other… and the good guy who doesn’t want to take more of your money than he actually needs to do his job may be outcompeted by a bad guy who takes more of your resources and uses the surplus to defeat the good guy. Also, different countries expend resources on defending against each other. And you have corruption inside all organizations, including the government, the police, and the army. The corruption costs resources, and so does fighting against it. It is a fractal of burning resources.
So… perhaps there is an economic law saying that this process continues until the available resources are exhausted (because otherwise, someone would be tempted to take some of the remaining resources, and then more resources would have to be spent to stop them). Unless there is some kind of “friction”, such as people not knowing exactly how much money you have, or how exactly you would react if pushed further (where exactly your “now I have nothing to lose anymore” point is, when instead of providing the requested resources you start doing something undesired, even if doing so is likely to hurt you more); or when it becomes too difficult for the government to coordinate to take each available penny (because their oversight and money extraction also have a cost). And making the situation more transparent reduces this “friction”.
In this model, the difference between the “good guy” and the “bad guy” becomes smaller than you might expect, simply because the good guy still needs (your) resources to fight against the bad guy, so he can’t leave you alone either.
The idea of “purposefully telling people incorrect information to make them learn even faster than by giving them correct information” feels like rationalization. I strongly doubt that people who claim to use this method actually bother to measure its effectiveness. It is probably more like: “I gave them wrong information, some students came to the right conclusion anyway, which proves that I am a fantastic teacher, and other students came to a wrong conclusion, which proves that those students were stupid and unworthy of my time.” Congratulations, now the teacher can do nothing wrong!
The goal of abstruse writing (if done intentionally, as opposed to merely lacking the skill to write clearly) is to avoid falsification. If my belief is never stated explicitly, and I only give you vague hints, you can never prove me wrong. Even if you guess correctly that I believe X, and then you write an argument about why X is false, I still have an option to deny believing X, and can in turn accuse you of strawmanning me (and being too stupid to understand the true depths of my thinking). If my writing becomes popular, I can let other people steelman my ideas, and then wisely smile and say “yes, that was a part of the deep wisdom I wanted to convey, but it goes even deeper than that”, taking credit for their work and making them happy by doing so.
Even people who don’t believe in the singularity?
More articles, fewer comments per article—perhaps these two are connected. ;)
In general, I agree that I would also prefer deeper debates below the articles, and more smart people participating in them. However, I am afraid that the number of smart people on the internet is quite limited (perhaps more limited than even the most pessimistic of us would imagine), and they usually have other things to do with higher priority than commenting on LW.
Also, LW is no longer new and exciting—the people who wanted to say something have often already said it; the people who would be attracted to LW have probably already found it; the people able and willing to write high-quality content typically already have their personal blogs. Of course this does not stop the discussion here completely; it just slows it down.
Learning = changing in a way that allows you to solve (a certain class of) problems more efficiently (on average).
Not learning = either not changing, or changing in a way that does not make you more efficient at solving problems.
(Note: I am saying “on average”, because… suppose your original algorithm for solving math problems is simply yelling “five!” regardless of the problem. Now you learn math, and it makes you better at solving math problems in general… but it makes you slower at solving those problems where “five” actually happens to be the correct answer.)
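A minimal toy sketch of that caveat (the “problems” and numbers are invented for illustration): the old strategy stays perfect on the one problem where “five” is the answer, but on average the learned strategy does much better.

```python
# The old strategy of always yelling "five!" is unbeatable on the one problem
# where five happens to be the answer; learning wins on average.

problems = list(range(1, 101))            # problem x: "what is the square root of x*x?"
correct = {x: x for x in problems}

def before_learning(x):
    return 5                              # always yell "five!", instantly

def after_learning(x):
    return round((x * x) ** 0.5)          # actually compute the answer (slower, but general)

acc_before = sum(before_learning(x) == correct[x] for x in problems) / len(problems)
acc_after = sum(after_learning(x) == correct[x] for x in problems) / len(problems)
print(f"accuracy before learning: {acc_before:.2f}")   # 0.01 (only the problem x = 5)
print(f"accuracy after learning:  {acc_after:.2f}")    # 1.00
```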
I feel it’s like “A → likely B” being evidence for “B → likely A”; generally true, but it could be either very strong or very weak evidence depending on the base rates of A and B.
Not having knowledgeable criticism against the position “2 + 2 = 4” is strong evidence that it is correct, because many people are familiar with the statement and many use it in their lives or work, so if it were wrong, it is likely that someone would already have offered some solid criticism.
But for statements that are less known or less cared about, it becomes more likely that there are good arguments against them, but no one has noticed them yet, or no one has bothered to write a solid paper about them.
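A toy Bayes calculation (all probabilities invented) of how much “nobody has offered solid criticism” should move you, depending on how likely such criticism would be if the claim were actually wrong:

```python
def p_wrong_given_no_criticism(p_wrong_prior, p_crit_if_wrong, p_crit_if_right=0.01):
    """P(claim is wrong | no solid criticism exists), via Bayes' theorem."""
    numerator = (1 - p_crit_if_wrong) * p_wrong_prior
    denominator = numerator + (1 - p_crit_if_right) * (1 - p_wrong_prior)
    return numerator / denominator

# A famous, heavily used claim ("2 + 2 = 4"): if it were wrong, solid criticism
# would almost certainly exist, so its absence is strong evidence of correctness.
print(p_wrong_given_no_criticism(p_wrong_prior=0.10, p_crit_if_wrong=0.999))  # ~0.0001

# An obscure claim nobody cares about: the absence of criticism barely moves the needle.
print(p_wrong_given_no_criticism(p_wrong_prior=0.10, p_crit_if_wrong=0.05))   # ~0.096
```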
An important aspect is that people disagree about which (if any) X-risks are real.
That makes it quite different from the usual scenario, where people agree that the situation sucks but each of them has individual incentives to contribute to making it worse. Such a situation allows solutions like collectively agreeing to impose a penalty on people who make things worse (thus changing their individual incentive gradient). But if people disagree, imposing the penalty is politically not possible.
Another frequent feature of a mind hack is that suddenly there is an important authority that wasn’t important before (probably because you were not even aware of its existence).
In the case of manipulation, the new authority would be your new guru, etc.
But in the case of healthy growth, for example if you start studying mathematics or something, it would be the experts in the given area.
It doesn’t always have to be like this, but it seems to me that the process of conversion often includes installing some kind of threat. “If you stop following the rules, all these wonderful and friendly people will suddenly leave you alone, and also you will suffer horrible pain in hell.” So the mind of a converted person automatically adds a feeling of danger to sinful thoughts.
The process of deconversion would then mean removing those threats. For example, by being gradually exposed to sinful thoughts and seeing that there are no horrible consequences. Realizing that you have close friends outside the religious community who won’t leave you if you stop going to church, etc.
More generally: less freedom vs more freedom. (An atheist is free to pray or visit a church, they just see it as a waste of time. A religious person can skip praying or church, but it comes with a feeling of fear or guilt.)
Seems to me that an important aspect of culture is how it organizes “zero-sum” games between its members. I am using scare quotes because a game which is zero-sum (or negative-sum) for its two active players can still generate positive or negative externalities for the rest of the tribe. And because some resources are scarce and there will be competition for them, it is nice when the energy of the competition can be channeled into some benefit for the rest of the tribe.
For example, in the hacker culture, one gains status by contributing quality code, so while individuals compete for imaginary “most awesome coder” points, billions of people get free software. Or there are cultures where the traditional way to signal wealth is to donate stuff to other members. An opposite would be a culture where people signal wealth e.g. by wearing expensive watches. (Although it could be argued that this creates some positive externalities too, e.g. job opportunities for watchmakers.)
I am not sure about this, but I have a feeling that if you want to design a culture that is a nice place to live in, you should encourage pro-social activities as the recommended way to do costly signaling.
Sports are probably also an example of this, where people translate their desire to win (as individuals or teams) into entertainment for others. As opposed to e.g. street fighting, which would put the lives and property of others at risk. (But people are already aware that sports are “violence made harmless”; my suggestion is to focus on competition in the abstract, not only on physical violence as its one specific form.)
Now be careful, and don’t get killed by stupid people.
I notice some similarities between what you wrote and what other people wrote about similar experiences. You focus on technical details that don’t fit. It makes sense, of course, if the discussed text is supposed to be flawless. But it means that you are still at the beginning of the long way out of religion. You don’t believe it, but you still kinda respect it. I mean, you consider those technical details worthy of your time and attention.
Imagine that we were discussing some other religion, e.g. Hinduism, and I said that the 1234th word in the Whatever Veda could not be original, because it contains a consonant that didn’t exist thousands of years ago. You would probably feel like “yeah, whatever, who cares about a consonant; the whole story about blue people with four arms leading armies of 10^10 monkeys from other planets is completely ridiculous!” At the end of the road, you may feel the same about the religion you grew up with. The technical details that now seem important to you will feel unimportant compared with the utter falseness of the whole thing.
I think that the important thing for seeing the big picture is reductionism. Like, let’s not talk about the holy texts and evidence; instead tell me what your God is composed of. Is it built of atoms? Of something else, e.g. some mysterious “spiritual atoms”? When it becomes angry or happy, does it literally have such hormones in its bloodstream? When it thinks or remembers, are its “spiritual neurons” exchanging the “spiritual atoms”? Hey, I am not denying your God, I am actually eager to listen to your story about it… as long as you can focus on the technical details and keep making sense. I want to have a sufficiently good model of your God so that I could build one in my laboratory (given enough resources, hypothetically).
And here is where people jump to some bullshit. The Christian version is like “He is not made of ordinary matter; He is outside of the universe”, and I am like: okay, let’s talk about the non-ordinary matter that His non-ordinary neurons and non-ordinary brain are built of, in His reality-outside-the-universe. But, you know, to be able to think or feel, there needs to be some kind of metabolism—even if it’s a 13-dimensional metabolism built from dark matter—right? Then the more sophisticated crap is like “but actually God is the most simple possible thing” or something like that, and I am like: dude, just read something about Kolmogorov complexity, and come back when you realize how ridiculous you sound.
Of course, such complicated dialogs only happen in my imagination :D because… in real life, when you start asking questions, the typical answer is just “this is all very mysterious stuff that humans like us can’t even begin to understand”, and it doesn’t go far beyond that. Also “read these thousand books, they contain answers to all your questions” (spoiler: they don’t; this is just an attempt to make you tired and give up).
For most people, however, religion is not about making sense. It is about belonging to a community. If they start doubting it, they will feel alone. Humans have a desire to associate with those who “believe” the same things. It is unfortunate that sometimes the fairy tales they associate around compel them to do horrible things...
I have read the trilogy, I enjoyed it a lot, and I have only two objections: the happy ending, and the lack of serious effort to kill Luo Ji. The latter is especially weird coming from aliens who would have no problem with exterminating half of the human population.
My impression is that the Three Body trilogy is essentially a universe-sized meditation on Moloch.
I am, however, completely surprised at your indignation at how the book depicts humans, because I find it quite plausible, at least the parts about how “no good deed goes unpunished”. Do we live in such different bubbles?
I see politicians gaining votes for populism, and losing votes for solving difficult problems. I see clickbait making tons of money, and scientists desperately fighting for funding. There was a guy who landed a probe on a comet, or something like that, and then a mob of internet assholes brought him to tears because he wore a tacky shirt. There are scientists who write books explaining psychometric research, and end up physically attacked and called Nazis. With humans like this, what is so implausible about a person who literally saved humanity from annihilation being sentenced to death? Just imagine that it brings ad clicks or votes from idiots or whatever is the mob currency of the future, and that’s all the incentive you need for this to happen.
As the beginning of the trilogy shows, we do not need to imagine a fictionally evil or fictionally stupid humanity to accomplish this. We just need to imagine exactly the same humanity that brought us the wonders of Nazism and Communism. The bell curve where the people on one end wear Che shirts and cry “but socialism has never been tried”, and on the other end we have Noam “Pol Pot did nothing wrong” Chomsky in academia. Do you feel safe living on the same planet as these people? Do you trust them to handle the future x-threats in a sane way? I definitely don’t.
The unrealistic part perhaps is that these future (realistically stupid and evil) people are too consistent, and have things too much under control. I would expect more randomness, e.g. one person who saves the world would be executed, but another would be celebrated, for some completely random reason unrelated to saving the world. Also, I would expect that despite making the suicide pact the official policy of humankind, some sufficiently powerful people would prepare an exit for themselves anyway. (But maybe the future has better surveillance which makes going against the official policy impossible.)
After a brief reading, it seems like the conclusion is: “At market prices, most people would not use the anti-malaria nets; this is empirically verified. Therefore, we provide the nets for free, and we give the nets instead of cash to buy the nets.”
The obvious question is: why are people unwilling to buy the nets?
Is there a rational reason, such as “the money is needed to prevent more immediate dangers, such as starvation”? Or is it an irrational one, such as underestimating the danger of malaria, not understanding how malaria spreads, or fatalism about diseases?
I am skeptical about armchair Econ-101 reasoning unless it is also supported by empirical data. Many things can go wrong. (Also, it has a flavor of “map over territory”.) For example:
The models are based on some assumptions, which is necessary to create models, but in real life the assumptions may be so wrong that they change the outcome. The players are supposed to be 100% rational and all-knowing; the transactions are supposed to be completely frictionless; it is assumed that the market is the only game in town. So when this perfect market notices that e.g. there is an opportunity to sell more food -- POOF! -- a new farm with food to sell instantly appears. In reality, people may be slow to notice, risk-averse in the face of uncertainty, there may be tons of bribes or paperwork necessary to start a new farm, growing the food may require a lot of time, and if too many food producers happen to belong to a minority ethnicity it might result in their genocide. When Econ 101 says “there shall be a balance”, it usually does not specify how long we have to wait for it: days, weeks, years, or centuries? (“The market can stay irrational longer than you can stay solvent.” In the case of Africa, longer than you and your clan can survive.)
It is easy to notice some relevant forces, and miss others. (The archetypal example.)
Seems to me that some armchair conclusions can be weakened or even reversed simply by reasoning “one level higher”. Is increasing human capital a good thing? Sounds uncontroversial, ceteris paribus, but suppose I wave a magic wand and every African magically acquires a PhD in anti-malaria net making. I would still suppose they would have a problem feeding their families. And I wouldn’t be too surprised to learn afterwards that there are still not enough anti-malaria nets produced.
Sorry for providing a fully general counter-argument. But this is exactly my point: with enough sophistication, you can make Econ-101 arguments either way. I have already seen a clever Econ-101 argument against the anti-malaria nets. What I need is a reality check.