I would like to see a page like TalkOrigins, but about IQ. So that any time someone confused but generally trying to argue in good faith posts something like “but wasn’t the idea of intelligence disproved scientifically?” or “intelligence is a real thing, but IQ is not” or “IQ is just an ability to solve IQ tests” or “but Taleb’s article/tweet has completely demolished the IQ pseudoscience” or one of the many other versions… I could just post this link. Because I am tired of trying to explain, and the memes are going to stay here for the foreseeable future.
I’d like a page like this just so I can learn about IQ without having to dig through lots of research myself.
Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.
But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?
How much of your life is determined by your actions, and how much by forces beyond your control, is an empirical question. You seem to believe it’s mostly your actions. I am not trying to disagree here (I honestly don’t know), just saying that people may legitimately have either model, or a mix of the two.
If your model is “your life is mostly determined by your actions”, then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.
If your model is “your life is mostly determined by forces beyond your control”, then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn’t have any actual impact on winning.
The mixed model “your life is partially determined by your actions, and partially by forces beyond your control” is more tricky. On one hand, it makes sense to focus on the part that you can change, because that’s where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you, have achieved it by superior strategy or superior luck.
Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (Richer than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn’t win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the “winners at life”, but on the other hand, don’t blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.
But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.
So the steelman of the objection in the mixed model would be something like: “Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck.”
A toy model to illustrate my intuition about the relationship between strategy and luck:
Imagine that there are four switches called A, B, C, D, and you can put each of them into position “on” or “off”. After you are done, switches A, B, C, D in the “on” position give you +1 point with probability 20%, 40%, 60%, 80% respectively, and −1 point with probability 80%, 60%, 40%, 20% respectively. A switch in the “off” position always gives you 0 points. (The points are proportional to utility.)
Also, let’s assume that most people in this universe are risk-averse, and only set D to “on” and the remaining three switches to “off”.
What happens in this universe?
The entire genre of “let’s find the most successful people and analyze their strategy” will insist that the right strategy is to turn all four switches to “on”. Indeed, there is no other way to score +4 points.
The self-help genre is right about turning on the switch C. But also wrong about the switches A and B. Neither the conservative people nor the contrarians get the answer right.
The optimal strategy—setting A and B to “off”, C and D to “on”—provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to “off”, and the people you know from self-help literature set both of them to “on” and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.
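The expected values in the toy model above can be checked with a short script (a minimal sketch; the strategy labels and the function name are my own):

```python
# Toy model from above: a switch set to "on" yields +1 point with
# probability p, otherwise -1; a switch set to "off" always yields 0.
p = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}

def expected_points(on_switches):
    # E[score of one "on" switch] = p*(+1) + (1-p)*(-1) = 2p - 1
    return sum(2 * p[s] - 1 for s in on_switches)

for name, switches in [
    ("conservative (D only)", ["D"]),
    ("optimal (C and D)", ["C", "D"]),
    ("self-help (all four)", ["A", "B", "C", "D"]),
]:
    print(f"{name}: expected {expected_points(switches):+.1f}, "
          f"best possible {len(switches):+d}")
```

This reproduces the numbers in the text: +0.6 for the conservative strategy, +0.8 for the optimal one, and a zero expectation for turning everything on, even though only the all-on strategy can ever score +4.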
(Of course, whatever your beliefs are, it is possible to build a model where acting on your beliefs is optimal, so this doesn’t prove much. It just illustrates why I believe that it is possible to achieve outcomes better than usual, and also that it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality, the impact of your actions is much greater than in this toy model, but the same caveats still apply.)
In reality it has to be a mixture right? So many parts of my day are absolutely in my control, at least small things for sure. Then there are obviously a ton of things that are 100% out of my control. I guess the goal is to figure out how to navigate the two and find some sort of serenity. After all isn’t that the old saying about serenity? I often think about what you have said as an addict. I personally don’t believe addiction to be a disease, my DOC is alcohol, and I don’t buy into the disease model of addiction. I think it is a choice and maybe a disorder of the brain and semantics on the word “disease”. But I can’t imagine walking into a cancer ward full of children and saying me too! People don’t just get to quit cancer cold turkey. I also understand like you’ve pointed out, and I reaffirmed that it is both. I have a predisposition to alcoholism because of genetics and it’s also something I am aware of and a choice. I thought I’d respond to your post since you were so kind as to reply to my stuff. I find this forum very interesting and I am not nearly as intelligent as most here but man it’s fun to bounce ideas!
In reality it has to be a mixture right?
Yeah, this is usually the right answer. Which of course invites additional questions, like which part is which...
With addiction, I also think it is a mixture of things. For example, trivially, no one would abuse X if X were literally impossible to buy, duh. But even before “impossible”, there is a question of “how convenient”. If they sell alcohol in the same shop you visit every day to buy fresh bread, it is more tempting than if you had to visit a different shop, simply because you get reminded regularly about the possibility.
For me, it is sweet things. I eat tons of sugar, despite knowing it’s not good for my health. But fuck, I walk around that stuff every time I go shopping, and even if I previously didn’t think about it, now I do. And then… well, I am often pretty low on willpower. I wish I had some kind of augmented reality glasses which would simply censor the things in the shop I decide I want to live without. Like I would see the bread, butter, white yoghurt, and some shapeless black blobs between them. It would be so much easier. (Kind of like an ad-blocker for the offline world. This may become popular in the future.)
Another thing that contributes to addiction is frustration and boredom. If I am busy doing something interesting, I forget the rest of the world, including my bad habits. But if the day sucks, the need to get “at least something pleasant, now” becomes much stronger.
Then it is about how my home is arranged and what habits I create. Things that are “under my control in the long term”: you don’t build a good habit overnight, but you can start building it today. For example, with a former girlfriend I had a deal that there is one cabinet that I will never open, and she needs to keep all her sweets there; never leave them exposed on the table, so that I would not be tempted.
America is now what anthropologists call a Kardashian Type Three civilisation: more than fifty percent of GDP is in the attention economy.
Stories by Greg Egan are generally great, but this one is… well, see for yourselves: In the Ruins
I was thinking about which possible parts of the economy are effectively destroyed in our society by having an income tax (as an analogy to Paul Graham’s article saying that a wealth tax would effectively destroy startups; previous shortform). And I think I have an answer; but I would like an economist to verify it.
Where I live, the marginal income tax is about 50%. Well, only a part of it is literally called “tax”, the other parts are called health insurance and social insurance… which in my opinion is misleading, because it’s not like the extra coin of income increases your health or unemployment risk proportionally; it should be called health tax and social tax instead… anyway, 50% is the “fraction of your extra coin the state will automatically take away from you”, which is what matters for your economic decisions about making that extra coin.
In theory, by the law of comparative advantage, whenever you are better at something than your neighbor, you should be able to arrange a trade profitable for both sides. (Ignoring the transaction costs.) But if your marginal income is taxed at 50%, such trade would be profitable only if you are more than 2× better than your neighbor. And that still ignores the fixed costs (you need to study the law, do some things to comply with it, study the tax consequences, fill the tax report or pay someone to do it for you, etc.), which are significant if you trade in small amounts, so in practice you sometimes need to be even 3× or 4× better than your neighbor to make a profit.
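The break-even arithmetic above can be sketched as follows (a simplified model, not a claim about any real tax code; `min_ratio` and the fixed-cost term are illustrative assumptions):

```python
def min_ratio(marginal_tax, fixed_cost_share=0.0):
    """Minimum 'how many times better than your neighbor' you must be
    for the trade to remain profitable, in this simplified model:
    you keep (1 - tax) of the gain, and fixed costs (paperwork,
    studying the law, ...) eat a further share of what remains."""
    return 1 / ((1 - marginal_tax) * (1 - fixed_cost_share))

print(min_ratio(0.5))       # 50% marginal tax: you must be 2x better
print(min_ratio(0.5, 0.5))  # plus overhead eating half the rest: 4x better
```

With no tax and no overhead the threshold is 1, i.e. any advantage at all suffices; the tax and fixed costs multiply the bar.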
This means that the missing part of the economy is all those people who are better at something than their neighbors, but not 2×, 3×, or 4× better; at least not reliably. In an alternative tax system without income tax, they could engage in profitable trade with their neighbors; in our system, they don’t. And “being slightly better, but not an order of magnitude better at something” probably describes a majority of the population, which suggests there is a huge amount of possible value that is not being created, because of the income tax.
Even worse, this “either you are an order of magnitude better, or go away” system creates barriers to entry in many places in the society. Unqualified vs qualified workers. Employees vs entrepreneurs. Whenever there is a jump required (large upfront investment for uncertain gain), fewer people cross the line than if they could walk across it incrementally: learn a bit, gain an extra coin, learn another bit, gain two extra coins… gradually approaching the limit of your abilities, and getting an extra income along the way to cover the costs of learning. The current system is demotivating for people who are not confident they could make the jump successfully. And it contributes to social unfairness, because some people can easily afford to risk a large upfront investment for uncertain gain, some would be ruined by a possible failure, and some don’t even have the resources necessary to try.
To reverse this picture, I imagine that in a society without income tax, many people would have multiple sources of income: they could have a job (full-time or part-time) and make some extra money helping their neighbors. The transition from an employee to an entrepreneur would be gradual, many would try it even if they don’t feel confident about going the entire way, because going halfway would already be worth it. And because more people would try, more would succeed; also, some of them would not have the skills to go the entire way at the beginning, but would slowly develop them along the way. Being an entrepreneur would not be stressful the same way it is now, and this society would have a lot of small entrepreneurs.
...and this kind of “bottom-up” economy feels healthier to me than the “top-down” economy, where your best shot at success is creating a startup for the purpose of selling it to a bigger fish. I suppose the big fish, such as Paul Graham, would disagree, but that’s the entire point: in a world without barriers to entry, you wouldn’t need to write motivational speeches for people to try their luck, they could advance naturally, following their incentives.
I think this is insightful, but my guess is that a society without income tax would not in fact be nearly as much better at providing opportunities for people who are kinda-OK-ish at things as you conjecture, and I further guess that more people than you think are at least 2x better at something than someone they can trade with, and furthermore (though it doesn’t make much difference to the argument here) I think something’s fundamentally iffy about this whole model of when people are able to find work.
Second point first. For there to be opportunities for you to make money by working, in a world with 50% marginal income tax, what you need is to be able to find someone you’re 2x better than at something, and then offer to do that thing for them.
… Actually, wait, isn’t the actual situation nicer than that? Roll back the income tax for a moment. You can trade profitably with someone else provided your abilities are not exactly proportional to one another, and that’s the whole point of “comparative advantage”. If you’re 2x worse at doing X than I am and 3x worse at doing Y, then there are profitable trades where you do some X for me and I do some Y for you. (Say it takes me one day to make either a widget or a wadget, and it takes you two days to make a widget and three days to make a wadget, and both of us need both widgets and wadgets. If we each do our own thing, then maybe I alternate between making widgets and wadgets, and get one of each every 2 days, and you do likewise and get one of each every 5 days. Now suppose that you only make widgets, making one every 2 days, and you give 3⁄5 of them to me so that on average you get one of your own widgets every 5 days, same as before. I am now getting 0.6 widgets from you every 2 days without having to do any work for them. Now every 2 days I spend 0.4 days making widgets, so I now have a total of one widget per 2 days, same as before. I spend another 1 day making one wadget for myself, so I now have a total of one wadget per 2 days, same as before; and another 0.4 days making wadgets for you, so you have one wadget per 5 days, same as before. At this point we are exactly where we were before, except that I have 10% of my time free, which I can use to make some widgets and/or wadgets for us both, leaving us both better off.)
I haven’t thought it through but I guess the actual condition under which you can work profitably if there’s 50% income tax might be “there’s someone else, and two things you can both do, such that [(your skill at A) / (your skill at B)] / [(their skill at A) / (their skill at B)] is at least 2”, whereas without the tax the only requirement is that the ratio be bigger than 1.
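The conjectured condition can be sketched in code (a toy check, not a worked-out economic model; `skill_ratio` is a hypothetical helper, applied to the widget/wadget numbers above):

```python
def skill_ratio(my_rates, your_rates):
    """Comparative-advantage ratio between two producers.

    my_rates / your_rates: (units of good A per day, units of good B per day).
    A ratio of exactly 1 means abilities are proportional, so no
    mutually profitable trade exists.
    """
    my_a, my_b = my_rates
    your_a, your_b = your_rates
    r = (my_a / my_b) / (your_a / your_b)
    return max(r, 1 / r)  # orient so the ratio is always >= 1

# Widget/wadget numbers from the example above: I make one of either
# per day; you need 2 days per widget and 3 days per wadget.
r = skill_ratio((1, 1), (1 / 2, 1 / 3))
print(f"skill ratio: {r:.2f}")
```

The ratio comes out around 1.5: big enough for profitable trade untaxed (>1), but below the conjectured >=2 threshold under a 50% tax.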
Anyway, that’s a digression and I don’t think it matters that much for present purposes. (If what you want is not merely to “earn a nonzero amount” but to “earn enough to be useful”, then probably you do need something more like absolute advantage rather than merely comparative advantage.) The point is that what you need is a certain kind of skill disparity between you and someone else, and the income tax means that the disparity needs to be bigger for there to be an employment opportunity.
But if you’re any good at anything, and if not everyone else is really good at everything—or, considering comparative advantage again, if you’re any good at anything relative to your other abilities, and not everyone else is too, then there’s an opportunity. And it seems to me that if you have learned any skill at all, and I haven’t specifically learned that same skill, then almost certainly you’ve got at least a 2x comparative advantage there. (If you haven’t learned any skills at all and are equally terrible at everything, and I have learned some skills, then you have a comparative advantage doing something I haven’t learned. But, again, that’s probably not going to be enough to earn you enough to be any use.)
OK, so that was my second point: surely 2x advantages are commonplace even for not-very-skilled workers. Only a literally unskilled worker is likely to be unable to find anything they can do 2x better than someone.
Moving on to my (related) first point, let’s suppose that there are some people who have only tiny advantages over anyone else. In principle, they’re screwed in a world with income tax, and doing fine in a world without, because in the latter they can find someone they’re a bit better than at something, and work for them. But in practice I’m pretty sure that almost everyone who is doing work that isn’t literally unskilled is (perhaps only by virtue of on-the-job training) doing it well more than 2x better than someone completely untrained, and I suspect that actually finding and exploiting “1.5x” opportunities would be pretty difficult. If someone’s barely better than completely unskilled, it’s probably hard to tell that they’re not completely unskilled, so how do they ever get the job, even in a world without income tax?
Finally, the third point. A few times above I’ve referred to “literally unskilled” workers. In point of fact, I think there are literally unskilled workers. That ought to be impossible even in a world without income tax. What’s going on? Answer: work isn’t only about comparative or absolute advantage in skills. Suppose I am rich and I need two things done; one is fun and one is boring. I happen to be very good at both tasks. But I don’t wanna do the boring one. So instead I pay you (alas, you are poor) to do the boring task. Not because of any relevant difference in skill, but just because we value money differently because I’m rich and you’re poor, and you’re willing to do the boring job for a modest amount of money and I’m not. Everybody wins. Or suppose there’s no difference in wealth or skill between us, and we both need to do two things 100x each. Either of us will do better if we pick one thing and stick with it so we don’t incur switching costs and get maximal gains from practice. So you do Thing One for me and I do Thing Two for you. I think income taxes still produce the same sort of friction, and require the advantages (how much more willing you are to do boring work than me on account of being poor, how much we gain from getting more practice and avoiding switching costs) to be larger roughly in inverse proportion to how much of your income isn’t taxed, so this point is merely a quibble that doesn’t make much difference to your argument.
Thinking about relation between enlightenment and (cessation of) signaling.
I know that enlightenment is supposed to be about cessation of all kinds of cravings and attachments, but if we assume that signaling is a huge force in human thinking, then cessation of signaling is a huge part of enlightenment.
Some random thoughts in that direction:
The paradoxical role of motivation in enlightenment—enlightenment is awesome, but a desire to be awesome is the opposite of enlightenment.
Abusiveness of the Zen masters towards their students: typically, the master tries to explain the nature of enlightenment using an unhelpful metaphor (I suppose, because most masters suck at explaining). Immediately, a student does something obviously meant to impress the master. The master goes berserk. Sometimes, as a consequence, the student achieves enlightenment. — My interpretation is that realizing (System 1) that the master is an abusive asshole who actually sucks at teaching removes the desire to impress him; and because in this social setting the master was perceived as the only person worth impressing, this removes (at least temporarily) the desire to impress people in general.
A few koans are of the form: “a person A does X, a person B does X, the master says: A did the right thing, but B did the wrong thing”—the surface reading is that the first person reacted spontaneously, and the second person just (correctly) realized that X will probably be rewarded and tried to copy the motions. A more Straussian reading is that this story is supposed to confirm to the savvy reader that masters really don’t have any coherent criteria and their approval is pointless.
(There are more Straussian koans I can’t find right now, where a master says “to achieve enlightenment, you must know at least one thousand koans” and someone says “but Bodhidharma himself barely knew three hundred” and the master says “honestly I don’t give a fuck”… well, using more polite words, but the impression is that the certification of enlightenment is completely arbitrary and maybe you just shouldn’t care about being certified.)
Quite straightforward in Nansen’s cat—the students try to signal their caring and also their cleverness, and thus (quite predictably) fail to actually save the cat. (Joshu’s reaction to hearing this is probably an equivalent of facepalm.)
Stopping the internal speech in meditation—internal speech is practicing talking to others, which is mostly done to signal something. The first step towards cessation of signaling is to try spending 20 minutes without (practicing) signaling, which is already a difficult task for most people.
Meditation skills reducing suffering from pain—this gives me the scary idea that maybe we unconsciously increase our perception of pain, in order to better signal our pain. From a crude behaviorist perspective, if people keep rewarding your expression of pain (by their compassion and support), they condition you to express more pain; and because people are good at detecting fake emotions, the most reliable way to express more pain is to actually feel more pain. The scary conclusion is that a compassionate environment can actually make your life more painful… and the good news is that if you learn to give up signaling, this effect can be reversed.
Out of curiosity (about constructivism) I started reading Jean Piaget’s Language and Thought of the Child. I am still at the beginning, so this comment is mostly meta:
It is interesting (kinda obvious in hindsight), how different a person sounds when you read a book written by them, compared to reading a book about them. This distortion by textbooks seems to happen in a predictable direction:
People sound more dogmatic than they really were, because in their books there is enough space for disclaimers, expressing uncertainty, suggesting alternative explanations, providing examples of a different kind, etc.; but a textbook will summarize this all as “X said that Y is Z”.
People sound less empirical and more like armchair theorists, because in their books there is enough space to describe various experience and experiments that led them to their conclusions, but the textbook will often just list the conclusions.
People sound more abstract and boring, because the interesting parts get left out in the textbooks, replaced by short abstract definitions.
(I guess the lesson is that if you learn about someone from a textbook and conclude “this guy is just another boring dogmatic armchair theorist”, you should consider the possibility that this is simply what textbooks do to people they describe, and try reading their most famous book to give them a chance.)
So my plan was to find out what exactly Piaget meant by his abstract conclusion that kids “construct” models of reality in their heads… and instead here is this experiment where two researchers observed two 6-year-old boys at elementary school for one month and wrote down every single thing they said (plus the context), and then compiled statistics on how often, when one kid says something to another, there is no response, and it is okay because no response was really expected, because small kids are mostly talking to themselves even when they address other people… and I am laughing because I just returned from the playground with my kids, and this is so true for the 3-year-olds. — More disturbingly, then I start thinking about whether blogging, or even me writing this specific comment now, is really fundamentally different. Piaget classifies speech acts primarily by whether you expect or don’t expect a response; but with blogging, you always may get a response, or you may get silence, and you will only find out much later.
a large number of people, whether from the working classes or the more absent-minded of the intelligentsia, are in the habit of talking to themselves, of keeping up an audible soliloquy. This phenomenon points perhaps to a preparation for social language. The solitary talker invokes imaginary listeners, just as the child invokes imaginary playfellows.
I started reading as research; now I read because it is fun.
To understand qualia better, I think it would help to get a new sensory input. Get some device, for example a compass or an infrared camera, and connect it to your brain. After some time, the brain should adapt and you should be able to “feel” the inputs from the device.
Congratulations! Now you have some new qualia that you didn’t have before. What does it feel like? Does this experience feel like a sufficient explanation to say that the other qualia you have are just like this, only acquired when you were a baby?
After reading the Progress & Poverty review at ACX, it seems to me that land is the original Bitcoin. Find a city that has a future, buy some land, and HODL.
If you can rent the land (the land itself, not the structures that stand on it), you even have a passive income that automatically increases over time… forever. This makes it even better than Bitcoin.
So, the obvious question is why so many people are angry about Bitcoin, but so few (only the Georgists, it seems) are angry about land.
EDIT: A possible explanation is that land is ancient and associated with high status, Bitcoin is new and low-status. Therefore problems associated with Bitcoin can be criticized openly, while problems associated with land are treated as inevitable.
While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:
Land also has use-value, Bitcoin doesn’t
Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don’t value Bitcoin’s advantages...
Full-Georgist ‘land’ taxes disincentivise searching for superior uses (IMO still better than most current taxes, worse than Pigou-style taxes on negative externalities)
Oh, that’s an interesting point: in a Georgist system, if you invent a better use of your land, the rational thing to do is shut up, because making it known would increase your tax!
I wonder what would happen in an imperfectly Georgist system, with a 50% or 90% land value tax. Someone smarter than me probably already thought about it.
Also, people can brainstorm about the better use of their neighbor’s land. Probably no one would spend money to find out whether there is oil under your house. But cheap ideas like “your house seems like a perfect location to build a restaurant” would happen.
Maybe in Georgist societies people would build huge fences around their land, to discourage neighbors from even thinking about it.
When you tell people which food contains a given vitamin, also tell them how much of the food they would need to eat in order to get their recommended daily intake of that vitamin from that source.
As an example, instead of “vitamin D can be found in cod liver oil, or eggs”, tell people “to get your recommended intake of vitamin D, you should eat 1 teaspoon of cod liver oil, or 10 eggs, every day”.
The reason is that without quantitative information, people may think “well, vitamin X is found in Y, and I eat Y regularly, so I have this covered”, while in fact they may be eating only 1⁄10 or 1⁄100 of the recommended daily intake. When you mention quantities, it is easier for them to realize that they don’t eat e.g. half a kilogram of spinach each day on average (therefore, even eating spinach quite regularly doesn’t mean you have your iron intake covered).
The quantitative information is typically provided in micrograms or international units, which of course is something that System 1 doesn’t understand. To get an actionable answer, you need to make a calculation like “an average egg has 60 grams of yolk… a gram of cooked egg yolk contains 0.7 IU of vitamin D… the recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country… that means 9-14 eggs a day, assuming I only get the vitamin D from eggs”. I can’t make the calculation in my head, because there is no way I would remember all these numbers, plus the numbers for other vitamins and minerals. But with some luck, I could remember “1 teaspoon of cod liver oil, or 10 eggs, for vitamin D”.
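The back-of-the-envelope calculation above can be sketched in a few lines (the IU figures are the rough numbers quoted here, not authoritative dietary data):

```python
def servings_needed(target_iu, iu_per_serving):
    # How many servings per day to hit the target, assuming this food
    # is your ONLY source of the vitamin.
    return target_iu / iu_per_serving

iu_per_egg = 60 * 0.7  # ~60 g of yolk x ~0.7 IU per gram, as quoted above
for target in (400, 600):  # RDA differs by country
    print(f"{target} IU/day = {servings_needed(target, iu_per_egg):.1f} eggs/day")
```

This lands in the 9-14 eggs/day range mentioned above, which is exactly the kind of number that is memorable where “0.7 IU per gram of yolk” is not.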
Obvious problem: the recommended daily intake differs by country, eggs come in different sizes, and probably contain different amounts of vitamin D per gram. Which is why giving the answer in eggs will feel irresponsible, and low status (you are exposing yourself to all kinds of nitpicking). Yes; true. But ultimately, the eggs (or whatever is the vegan equivalent of food) are what people actually eat.
recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country
This assumes that the RDAs those organizations publish are trustworthy. There are other organizations, like the Endocrine Society, that recommend an order of magnitude more vitamin D.
If the RDA of 400 or 600 IU were sensible, you could also meet it by spending a lot of time in the sun once every two weeks.
Have you tried using Cronometer or a similar nutrition-tracking service to quickly find these relationships? I’ve found Cronometer in particular to be useful because it displays each nutrient in terms of a percent of the recommended daily value for one’s body weight. For example, I can see that a piece of salmon equals over 100% of the recommended amount of omega-3 fatty acids for the day, while a handful of sunflower seeds only equals 20% of one’s daily value of vitamin E. Therefore, I know that a single piece of fish is probably enough, but that I should probably eat a larger portion of sunflower seeds than I would otherwise.
I suppose a percentage system like this one is just the reciprocal of saying something like “10 eggs contain the recommended daily amount of vitamin D.”
Thank you for the link! Glad to see someone uses the intuitive method. My complaint was about why this isn’t the standard approach. Like, recently I was reading a textbook on nutrition (the actual school textbook for cooks; I was curious what they learn), where the information was provided in the form of “X is found in A, B, C, D, also in E”, without any indication of how often you are supposed to eat any of these.
(If I said this outside of Less Wrong, I would expect the response to be: “more is better, of course, unless it is too much, of course; everything in moderation”, which sounds like an answer, but is not much.)
And with corona and the articles on vitamin D, I opened Wikipedia, saw “cod liver” as the top result, and thought: no problem, they sell it in the shop, it’s not expensive, and it tastes okay, I just need to know how much. Then I ran the numbers… and realized “shit, 99% of people will not do this, even if they get curious and read the Wikipedia page”. :(
I noticed recently that I almost miss the Culture War debates (on the internet in general, nothing specific about Less Wrong). I remember that in the past they seemed to be everywhere. But in recent months, somehow...
I don’t use Twitter. I don’t really understand the user interface, and I have no intention to learn it, because it is like the most toxic website ever.
Therefore most Culture War content in English came to me in the past via Reddit. But they keep making the user interface worse and worse, so a site that was almost addictive in the past is now so unpleasant to use that it actually conditions me to avoid it.
Slate Star Codex has no new content. Yeah, there are “slatestarcodex” and “motte” debates on Reddit, but… I already mentioned Reddit.
Almost all newspaper articles in my native language are paywalled these days. No, I am not going to pay for your clickbait.
So… I am vaguely aware that Trump was an American president and now it is Biden (or is it still Trump, and Biden will be later? dunno), and there were (still are?) BLM protests in the USA. And in my country, the largest political party recently split in two, and I don’t even know the name of the new one, and I don’t even care because what’s the point, the next election is in 3 years. Other than this… blissful ignorance.
And I am not asking you to fix my ignorance—neither do I try to protect it; I just don’t want to invite political content to LW—just commenting on how weird this feels. And I didn’t even notice how this happened; only recently my wife asked me “so what is the latest political controversy you read about online”, and it was a shock to realize that I actually have no idea.
OK, here is the question: is this just about my bubble, or is it a global consequence of COVID-19 taking away attention from corona-unrelated topics?
This is your bubble, because in the relevant spaces they have largely incorporated COVID into the standard fighting and everything, not turned down the fighting at all. I think your bubble sounds great in lots of ways, and am glad to hear you have space from it all.
I guess in my ontology these new debates simply do not register as proper Culture Wars.
I mean, the archetypal Culture War is a conflict of values (“we should do X”, “no, we should do Y”) where I typically care to some degree about both, so it is a question of trade-offs; combined with different models of the world (“if we do A, B will happen”, “no, C will happen”); about topics that have already been discussed in some form for a few decades or centuries, and that concern many people. Or something like that; not sure I can pinpoint it. It’s like, it must feel like a grand philosophical topic, not just some technical question.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War). To some degree it is interesting to steelman it: for example, when people die having ten serious health problems at the same time, how do we choose the official cause of death? Or if we just look at total deaths, how do we distinguish the second-order effects, such as more depressed people committing suicides, but also fewer traffic deaths? But at the end of the day, you either assume a worldwide conspiracy of doctors who keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu. (Or you could believe that the ventilators are just a hoax promoted by the government.) At the moment when even Putin’s regime officially admitted it is not just a flu, I no longer see any reason to pay attention to this opinion.
Then we have this “lockdown” vs whatever is the current euphemism for just letting people die, which at least is a proper value conflict. And maybe this is about my privilege… that when people have to decide whether they’d rather lose their jobs or lose their parents, I am not that emotionally involved, because I think there is a high chance I can keep both regardless of what the nation decides to do collectively: I can work remotely, and my family voluntarily socially isolates… I am such a lucky selfish bastard, and apparently, so is my entire bubble. I mean, if you ask me, I am on the side of not letting people die, even if it means lower profits for one year. But then I hear those people complaining about how inconvenient it is to wear face masks, and how they just need to organize huge weddings, go to restaurants and cinemas and football matches… and then I realize that no one cares about my opinion on how to survive best, because apparently no one cares about surviving itself.
What else? There was this debate about whether Sweden is this magical country that doesn’t do anything about COVID-19 and yet COVID-19 avoids it completely, but recently I don’t even hear about them anymore. Maybe they all died, who knows.
Lucky bubble. Or maybe Facebook finally fixing their algorithm so that it only shows me what I want to see.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War).
My sense is “it’s just a flu” is a conflict of values; there are people for whom regular influenza is cause for alarm and perhaps changing policies (about a year ago, I had proposed to friends the thought experiment of an annual quarantine week, wondering whether it would actually reduce the steady-state level of disease or if I was confused about how that dynamical system worked), and there are people who think that cowardice is unbecoming and illness is an unavoidable part of life. That is, some think the returns to additional worry and effort are positive; others think they are negative.
you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu.
Often people describe medications as “safer than aspirin”, but this is sort of silly because aspirin is one of the more dangerous medications people commonly take, grandfathered in by being discovered early. In a normal year, influenza is responsible for over half of deaths due to infectious disease in the US; the introduction of a second flu would still be a public health tragedy, from my perspective.
(Most people, I think, are operating off the case fatality rate instead of the mortality per 100k; in 2018, influenza killed about 2.5X as many people as AIDS in the US, but people are much more worried about AIDS than the flu, and for good reason.)
But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
If—if there were a way to use the old Reddit UI, would you want to know about it?
Gur byq.erqqvg.pbz fhoqbznva yrgf lbh hfr gur byq vagresnpr.
Thank you; yes, I already know about it. But the fact that I have to remember, and keep switching when I click on a link found somewhere, is annoying enough already. (It would be less annoying with a browser plugin that does it automatically for me, and I am aware such plugins exist, but I try to keep my browser plugins to a minimum.) So, at the end of the day, I am aware that a solution exists, and I am still annoyed that I would need to take action to achieve something that used to be the default option. Also, this alternative will probably be removed at some point in the future, so I would just be delaying the inevitable.
remember, and keep switching when I click on a link found somewhere
(Only if you’re not logged in: there’s a user-preferences setting to use the old UI.)
Elsevier found a new method to extract money! If you send an article to their journal from a non-English-speaking country, it will be rejected because of your supposed mistakes in English language. To overcome this obstacle, you can use Elsevier’s “Language Editing services” starting from $95. Only afterwards will the article be sent to the reviewers (and possibly rejected).
This happens even if you have already had your article checked by a native English speaker who found no errors. On the other hand, if you let a co-author who lives in an English-speaking country submit the article, the grammar is always okay.
Based on anecdotal evidence from a few scientists I know. Though some of them have had similar experiences with other journals that do not offer their own language services, so maybe this is not about money but about being primed to check for “bad English” from authors in non-English-speaking countries.
1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.
(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation, were less successful at actually resisting the temptation compared to the kids who distracted themselves in order to forget about the marshmallows—the one that was there and the hypothetical two in the future—completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)
After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future… how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get… a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. -- And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.
Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.
2) Readers of Less Wrong often report having problems with procrastination. Also, many provide an example when they realized at young age, on a deep level, that adults are unreliable and institutions are incompetent.
I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into “we are doomed, eat all your marshmallows now” mode.
This seems likely to me, although I’m not sure “superstimulus” is the right word for this observation.
It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.
Anthropic Chesterton fence:
You know why the fence was built. The original reason no longer applies, or maybe it was a completely stupid reason. Yes, you should tear down the stupid fence.
And yet, there is a worry… might the fact that you see this stupid fence be anthropic evidence that in the Everett branches without this stupid fence you are already dead?
As with many anthropic considerations, there is a serious problem determining the reference class here. Generally an appropriate reference class is “somebody sufficiently like you”, and then you compute weightings for some parameter that varies between universes and affects the number and/or probability of observers.
The trouble is that “sufficiently like you” is a uselessly vague specification. The most salient reference class seems to be “people considering removing a fence very much like this one”. But that’s no help at all! People in other universes who already removed their universe’s fence are excluded regardless of whether they lived or died.
Okay, what about “people who have sufficiently close similarity to my physical and mental make-up at (time now)”? That’s not much help either: almost all of them probably have nothing to do with the fence. Whether or not the fence is deadly will have negligible effect on the counts.
Maybe consider “people with my physical and mental make-up who considered removing this fence between (now minus one day) and (now), and are still alive”. At this point I consider that I am probably stretching a question to get a result I want. What’s more, it still doesn’t help much. Even comparing universes with p=0 of death to p=1, there’s at most a factor of 2 difference in counts for the median observer. Given such a loaded question, that’s a pretty weak update from an incredibly tiny prior.
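The “at most a factor of 2” claim can be illustrated with a toy count (the fraction of people who actually removed the fence is a made-up assumption for the sketch):

```python
# Toy model: observers who considered removing the fence in the last day.
# Assume (arbitrarily, for illustration) that half of them went ahead and
# removed it; the rest only considered it and are alive either way.
n_considerers = 1000
frac_removed = 0.5

def alive(p_death):
    """Count of still-alive observers, given P(death | removed the fence)."""
    removed = n_considerers * frac_removed
    kept = n_considerers - removed
    return kept + removed * (1 - p_death)

ratio = alive(0.0) / alive(1.0)   # harmless fence vs. certainly-deadly fence
print(ratio)                      # 2.0: the largest possible anthropic update
```

Even in the extreme comparison, observer counts differ by at most this factor of 2, so the update is weak.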
I noticed that some people use “skeptical” to mean “my armchair reasoning is better than all expert knowledge and research, especially if I am completely unfamiliar with it”.
Example (not a real one): “I am skeptical about the idea that objects would actually change their length when their speed approaches the speed of light.”
The advantage of this usage is that it allows you to dismiss all expertise you don’t agree with, while making you sound a bit like an expert.
I suspect you’re reacting to the actual beliefs (disbelief in your example), rather than the word usage. In common parlance, “skeptical” means “assign low probability”, and that usage is completely normal and understandable.
The ability to dismiss expertise you don’t like is built into humans, not a feature of the word “skeptical”. You could easily replace “I am skeptical” with “I don’t believe” or “I don’t think it’s likely” or just “it’s not really true”.
I think that “skeptical” works better as a status move. If I say I don’t believe you, that makes us two equals who disagree. If I say I am skeptical… I kinda imply that you are not. Similarly, a third party now has the options to either join the skeptical or the non-skeptical side of the debate.
(Or maybe I’m just overthinking things, of course.)
Today I learned that our friends at RationalWiki dislike effective altruism, to put it mildly. As David Gerard himself says, “it is neither altruistic, nor effective”.
In the section “Where ‘Effective Altruists’ actually send their money”, the main complaint seems to be that among (I assume) respectable causes such as fighting diseases and giving money to poor people, effective altruists also support x-risk organisations, veganism, and meta organisations… or, using the language of RationalWiki, “sending money to Eliezer Yudkowsky”, “feeling bad when people eat hamburgers”, and “complaining when people try to solve local problems”.
Briefly looking at numbers of donors in the surveys and trying to group the charities into categories (chances are I misclassified something), it seems like disease charities got 211+114+43+16=384, poverty charities 101, Yudkowsky charities 77+45=122, meta charities 46+21+14+10+10=101, animal charities 27+22=49, and Leverage 7 donors. So even if you think that only disease charities and poverty charities are truly altruistic, it would still be 63% of donors giving money to truly altruistic charities. Uhm, could be worse, I guess.
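The grouping and the 63% figure can be double-checked with a short script (the category assignments are this comment’s rough guesses, as noted, not official labels):

```python
# Donor counts from the survey, grouped into the rough categories above.
donors = {
    "disease":   [211, 114, 43, 16],
    "poverty":   [101],
    "Yudkowsky": [77, 45],
    "meta":      [46, 21, 14, 10, 10],
    "animal":    [27, 22],
    "Leverage":  [7],
}
totals = {name: sum(counts) for name, counts in donors.items()}
grand_total = sum(totals.values())
# The hypothetical strict view: only disease and poverty count as altruistic.
strictly_altruistic = totals["disease"] + totals["poverty"]
print(totals)
print(f"{strictly_altruistic}/{grand_total} = {strictly_altruistic / grand_total:.0%}")
```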
Also, this is a weird complaint:
GiveWell has also recommended that people spam the Against Malaria Foundation (AMF) with all (except if they are billionaires, obviously) the money they have set aside to donate, on the grounds that they think it’s the best charity, even at the risk of exhausting the AMF’s room for more funding, amongst other dubious decisions.
Like, without any evidence that AMF’s room for funding was actually exhausted, this all reduces to: “we hate EAs because they do not send money to best charities, and also because they send them more money than they can handle”. But sneering was never supposed to be consistent, I guess.
One would also think that the ‘risk’ of ‘exhausting the AMF’s room for more funding’ would be something to celebrate.
Is RationalWiki still mostly “David Gerard’s Thoughts and Notes”? This kind of writeup shouldn’t come as a surprise.
There are over 100 edits in this article. Many, especially the large ones, were made by David Gerard, but there are also Greenrd and others.
It would be nice to have better tools for exploring wiki history, for example, if I could select a sentence or two, and get a history of this specific sentence, like only the edits that modified it, and preferably get all the historical versions of that sentence on a single page along with the user names and links to edits, so that I do not need to click on each edit separately and look for the sentence.
It is also interesting to compare Wikipedia and RationalWiki articles on the same topic.
The Wikipedia narrative is that EA is a high-status “philosophical and social movement” responsible for over $400,000,000 in donations in 2019, based on principles of “impartiality, cause neutrality, cost-effectiveness, and counterfactual reasoning”, and its prominent causes are “global poverty, animal welfare, and risks to the survival of humanity over the long-term future”.
Rationalist community is mentioned briefly:
A related group that attracts some effective altruists is the rationalist community.
In addition, the Machine Intelligence Research Institute is focused on the more narrow mission of managing advanced artificial intelligence.
Other contributions were [...] the creation of internet forums such as LessWrong.
Furthermore, Machine Intelligence Research Institute is included in the “Effective Altruism” infobox at the bottom of the page. Mention of Eliezer Yudkowsky was removed as not properly sourced (fair point, I guess). The Wikiquote page on EA quotes Scott Alexander and Eliezer Yudkowsky.
RationalWiki narrative is that “The philosophical underpinnings mostly come from philosopher Peter Singer [but] This did not start the effective altruism subculture”. “The effective altruism subculture — as opposed to the concept of altruism that is effective — originated around LessWrong” “The ideas have been around a while, but the current subculture that calls itself Effective Altruism got a big push from MIRI and its friends in the LessWrong community”, but the problem is that rationalists believed that MIRI is an effective charity, which is a form of Pascal’s Mugging.
“effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth. In practice, this amounts to complaining when people try to solve local problems, feeling bad when people eat hamburgers, and sending money to Eliezer Yudkowsky, respectively.”
...so, my impression is that according to Wikipedia, EA is high-status and mostly unrelated to the rationalist community; and according to RationalWiki, EA was effectively started by rationalist community and is low-status.
Paul Graham’s article Modeling a Wealth Tax says:
The reason wealth taxes have such dramatic effects is that they’re applied over and over to the same money. Income tax happens every year, but only to that year’s income. Whereas if you live for 60 years after acquiring some asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
But wait, isn’t income tax also applied over and over to the same money? I mean, it’s not if I keep the money for years, sure. But if I use it to buy something from another person, then it becomes the other person’s income, gets taxed again; then the other person uses the remainder to buy something from yet another person, where the money gets taxed again; etc.
Now of course there are many differences. The wealth tax is applied at constant speed—the income tax depends on how fast the money circulates. The wealth tax is paid by the same person over and over again—the income tax is distributed along the flow of the money.
Not sure what exactly is my thesis here. I just got a feeling that the income tax could actually have similar effect, except distributed throughout the society, which makes it more difficult to notice and describe.
Also, affecting different types of people: wealth tax hits hardest the people who accumulate large wealth in short time and then keep it for long time; income tax hits hardest the people who circulate the money fastest. Or maybe the greatest victims of income tax are invisible—some hypothetical people who would circulate money extremely fast in an alternate reality where even 1% income tax is frowned upon, but who don’t exist in our reality because the two-digit income tax would make this behavior clearly unprofitable.
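The two kinds of compounding can be contrasted in a toy calculation (the rates and counts are arbitrary illustrative numbers, not claims about any real tax system):

```python
# Toy contrast: a wealth tax compounds over *years*,
# an income tax compounds over *transactions*.

def after_wealth_tax(amount, rate, years):
    # The same asset is taxed once per year for `years` years.
    return amount * (1 - rate) ** years

def after_income_tax(amount, rate, transactions):
    # The same money is taxed each time it becomes someone's income.
    return amount * (1 - rate) ** transactions

# 2% wealth tax held for 60 years vs. 30% income tax over 10 transactions:
print(after_wealth_tax(100, 0.02, 60))   # ~29.8 remains
print(after_income_tax(100, 0.30, 10))   # ~2.8 remains
```

So how much the income tax “compounds” depends entirely on how fast the money circulates, which is exactly the difference noted above.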
Am I just imagining things here, or does this correspond to something economists already have a name for? I vaguely remember something about tax, inflation, and multipliers. But who are those fast-circulators our tax system hits hardest? Graham’s article isn’t merely about how money affects money, but how it affects motivation and human activity (wealth tax → startups less profitable → fewer startups). What motivation and human activity is similarly affected by the recursive applications of the income tax?
To avoid misunderstanding, I am not asking the usual question: how many kids we could feed by taxing the startups more. I am asking, what kind of possible economical activity is suppressed by having a tax system that is income-based rather than wealth-based? In the trade-off, where one option would destroy the startups, what exactly is being destroyed by having the opposite option?
I would very much like to see a society where money circulates very quickly. I expect people will have many reasons to be happier and suffer less than they do now.
As you observe, income taxes encourage slowing down circulation of money, while wealth taxes speed up circulation of money (and creation of value), but I think there are better ways of assessing tax than those two. I suspect heavily taxing luxury goods which serve no functional purpose, other than to signal wealth, is a good direction to shift taxes towards, although there may be better ways I haven’t thought of yet.
Not answering your question, just some thoughts based on your post.
In the meantime I remembered reading long ago about some alternative currencies. (Paper money; this was long before crypto.) If I remember correctly, the money lost value over time, but you paid no income tax on it. (It was explained that precisely because the money lost value, it was not considered real money, so receiving it wasn’t considered real income, therefore no tax. This sounds suspicious to me, because governments enjoy taxing everything, but perhaps no one important noticed.)
As a result, people tried to get rid of this money as soon as possible, so it circulated really quickly. It was in a region with very high unemployment, so in absence of better opportunities people also accepted payment in this currency, but then quickly spent it. And, according to the story, it significantly improved the quality of life in the region—people who otherwise couldn’t get a regular job, kept working for each other like crazy, creating a lot of value.
But this was long ago, and I don’t remember any more details. I wonder what happened later. (My pessimistic guess is that the government finally noticed, and prosecuted everyone involved for tax evasion.)
Ah, good ol’ Freigeld
David Gerard (the admin of RationalWiki) doxed Scott Alexander on Twitter, in response to Arthur Chu’s call “if all the hundreds of people who know his real last name just started saying it we could put an end to this ridiculous farce”.
Dude, we already knew you were uncool, but this is a new low.
There is no movement, said the bearded sage.
The other remained silent, and began to walk before him.
He could not have argued more strongly;
Everyone praised the clever answer.
But, gentlemen, this funny case
Brings another example to my mind:
After all, every day the Sun walks before us,
Yet the stubborn Galileo is right.
-- A. S. Pushkin (source)
Project idea: ELI5pedia. Like Wikipedia, but optimized for being accessible for lay audience. If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).
Of course it would be even better if Wikipedia itself was written like this, but… well, for whatever reason, it is not.
That is “(Simple English) Wikipedia”, not “Simple (English Wikipedia)”.
I will check it later. The articles that prompted me to write this don’t exist in the simple-English version, so I can’t quickly compare how much the reduction of vocabulary actually translates into a simple exposition of ideas.
I think that simple might actually be transitive in this case.
If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).
Wasn’t Arbital pretty much supposed to be this?
Yes. Not sure if its vision was to ultimately cover everything (like Wikipedia) or only MIRI-related topics. But yes, that is the spirit.
EDIT: After reading the entire postmortem… oh, this made me really sad! It seems like a great idea that I didn’t understand/appreciate at the moment.
One Thousand and One Nights is actually a metaphor for web browsing.
You start with a firm decision that it will be only one story and then it is over. But there is always an enticing hyperlink at the end of each story which makes you click, sometimes a hyperlink in the middle of a story that you open in a new tab… and when you finally stop reading, you realize that three years have passed and you have three new subscriptions.
Technically, Chesterton fence means that if something exists for no good reason, you are never allowed to remove it.
Because, before you even propose the removal, you must demonstrate your understanding of a good reason why the thing exists. And if there is none...
More precisely, it seems to me there is a motte and bailey version of Chesterton fence: the motte is that everything exists for a reason; the bailey is that everything exists for a good reason. The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
On one hand, such explanations feel cheap. A conspiracy theorist could explain literally everything by “because evil outgroup did it to hurt people, duh”. On the other hand, yes, sometimes things happen because people are stupid or selfish; what exactly am I supposed to do if someone calls a Chesterton fence on that?
The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
If a fence is built because of regulatory capture, it’s usually the case that the lobbyists who argued for the regulation made a case for the law that isn’t just about their own self-interest.
It takes effort to track down the arguments that were made for the regulation, beyond whatever reasons you come up with when thinking about the issue yourself.
“Someone made a mistake” or “because a bad person did it to harm someone” are only valid answers if a single person could put up the fence without cooperation from other people. That’s not the case for any larger fence.
When laws and regulations get passed, there’s usually a lot of thought that goes into them being the way they are that isn’t understood by everybody who criticizes them. It might be the case that everybody who was involved in the creation is now dead and they left no documentation of their reasons, but plenty of times it’s just a lack of research effort that results in not having a better explanation than “because of regulatory capture”.
Since when does it say you have to demonstrate your understanding of a good reason? The way I use and understand it, you just have to demonstrate your understanding of the reason it exists, whether it’s good or bad.
But I do think that people tend to miss subtleties with Chesterton’s fence. For example, recently someone told me Chesterton’s fence requires justifications for why to remove something, not for why it exists—which is close, but not it. It talks about understanding, not about justification.
At its core, it’s a principle against arguing from ignorance—arguments of the form “X should be removed because I don’t know why it’s there”.
I think people confuse it to be about justification because usually if something exists there’s a justification (or else someone would probably have already removed it), and because a justification is a clearer signal of actual understanding (as opposed to plain antagonism) than a historical explanation is.
My case was somewhat like this:
“X is wrong.”
“Use Chesterton fence. Why does X exist?”
“X exists because of incentives of the people who established it. They are rewarded for X, and punished for non-X, therefore...”
“That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.”
And, of course, maybe I am uncharitable and motivated. Happens to people all the time, why should I expect myself to be immune?
But at the same time I noticed how the seemingly neutral Chesterton fence can become a stronger rhetorical weapon if you are allowed to specify further criteria the proper answers must pass.
Right. I don’t think “That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.” is a valid response when talking about Chesterton’s fence. You only have to show that your understanding of why something exists is complete enough. That’s easier to signal with good reasons for why it exists, but if there aren’t any, then historical explanations are sufficient.
Chesterton’s fence might need a few clear Schelling fences so people don’t move the goalposts without understanding why they’re there ;)
Could you recommend me a good book on first-order logic?
My goal is to understand the difference between first-order and second-order logic, preferably deeply enough to develop an intuition for what can be done and what can’t be done using first-order logic, and why exactly it is so.
I am confused about metaantifragility.
It seems like there are a few predictions that the famous antifragility literature got wrong (and if you point it out on Twitter, you get blocked by Taleb).
But the funny part starts when you consider the consequences of such failed predictions on the theory of antifragility itself.
One possible interpretation is that, ironically, antifragility itself is an example of a Big Intellectual Idea that tries to explain everything, and then fails horribly when you start relying on it. From this perspective, Taleb lost the game he tried to play.
Another possible interpretation is that the theory of antifragility itself is a great example of antifragility. It does not matter how many wrong predictions it makes, as long as it makes one famous correct prediction that people will remember while ignoring the wrong ones. From this perspective, Taleb wins.
Going further meta, the first perspective seems like something an intellectual would prefer, as it considers the correctness or incorrectness of a theory; while the second perspective seems like something a practical person would prefer, as it considers whether writing about theory of antifragility brings fame and profit. Therefore, Taleb wins… by being wrong… about being right when others are wrong.
I imagine a truly marvelous “galaxy brain” meme of this, which this margin is too narrow to contain.
So I was watching random YouTube videos, and suddenly YouTube is like: “hey, we need to verify you are at least 18 years old!”
“Okay,” I think, “they are probably going to ask me about the day of my birth, and then use some advanced math to determine my age...”
...but instead, YouTube is like: “Give me your credit card data, I swear I am totally not going to use it for any evil purpose ever, it’s just my favorite way of checking people’s age.”
Thanks, but I will pass. I believe that giving my credit card data to strangers I don’t want to buy anything from is a really bad policy. The fact that all changes to YouTube seem to be transparently driven by a desire to increase revenue does not increase my trust. I am not sure what exactly could happen, but… I would rather wait a few months, and then read a story about how it happened to someone else.
And that’s why I don’t know how Tangled should have ended.
(What, you thought I was trying to watch some porn? No thanks, that would probably require me to give the credit card number, social security number, scans of passport and driving license, and detailed data about my mortgage.)
YouTube lets me watch the video (even while logged out). Is it a region thing?? (I’m in California, USA). Anyway, the video depicts
dirt, branches, animals, &c. getting in Rapunzel’s hair as it drags along the ground in the scene when she’s frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, “Okay, this is getting weird; I’m just gonna go.”
If you want to know how it really ends, check out the sequel series!
What is the easiest and least frustrating way to explain the difference between the following two statements?
X is good.
X is bad, but your proposed solution Y only makes things worse.
Does the fallacy of failing to distinguish between these two have a standard name? I mean, when someone criticizes Y, and the response is to accuse them of supporting X.
Technically, if Y is proposed as a cure for X, then opposing Y is evidence for supporting X. Like, yeah, a person who supports X (and believes that Y reduces X) would probably oppose Y, sure.
It becomes a problem when this is the only piece of evidence that is taken into account, and any explanations of either bad side effects of Y, or that Y in fact does not reduce X at all, are ignored, because “you simply like X” becomes the preferred explanation.
A discussion of actual consequences of Y then becomes impossible, among the people who oppose X, because asking this question already becomes a proof of supporting X.
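The “technically evidence, but weak evidence” point can be made concrete with a toy Bayes calculation. All numbers here are hypothetical, chosen only to illustrate the shape of the update:

```python
# Toy Bayesian update: does opposing cure Y prove someone supports X?
# All probabilities below are made-up illustration values.
p_supports_x = 0.1             # prior: fraction of people who actually support X
p_oppose_given_supports = 0.9  # X-supporters usually oppose the proposed cure Y
p_oppose_given_not = 0.3       # others may oppose Y too (side effects, doubts about efficacy)

# Total probability of observing "opposes Y"
p_oppose = (p_oppose_given_supports * p_supports_x
            + p_oppose_given_not * (1 - p_supports_x))

# Posterior probability of "supports X" given "opposes Y"
posterior = p_oppose_given_supports * p_supports_x / p_oppose

print(round(posterior, 2))  # 0.25
```

With these numbers, opposition to Y does shift the probability of “supports X” upward (from 0.1 to 0.25), but it remains far more likely that the critic has a different model of Y’s consequences, not different values.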
More generally, a difference between models of the world is explained as a difference in values. The person making the fallacy not only believes that their model is the right one (which is a natural thing to believe), but finds it unlikely that their opponent could have a different model. Or perhaps they have a very strong prior that differences in values are much more likely than differences in models.
From inside, this probably feels like: “Things are obvious. But bad actors fake ignorance / confusion, so that they can keep plausible deniability while opposing proposed changes towards good. They can’t fool me though.”
Which… is not completely unfounded, because yes, there are bad actors in the world. So the error is in assuming that it is impossible for a good actor to have a different model. (Or maybe assuming too high base rate of bad actors.)
Sounds like a complex equivalence that simultaneously crosses the is-ought gap.
When the internet becomes fast enough and data storage cheap enough that it is possible to inconspicuously capture videos of everyone’s computer/smartphone screens all the time and upload them to the gigantic servers of Google/Microsoft/Apple, I expect that exactly this will happen.
I wouldn’t be too surprised to learn that it already happens with keystrokes.
If smart people are more likely to notice ways to save their lives that cost some money, in statistics this may appear as a negative correlation between smartness and wealth. That’s because dead people are typically not included in the data.
As a toy model to illustrate what I mean, imagine a hypothetical population consisting of 100 people; 50 rational and 50 irrational; each starting with $100,000 of personal wealth. Let’s suppose that exactly half of each group gets seriously sick. A sick irrational person spends $X on homeopathy and dies. A sick rational person spends $40,000 on surgery and survives. At the end, we have 25 living irrational people, owning $100,000 each, and 50 living rational people, owning $80,000 on average (half of them $100,000, the other half $60,000).
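The numbers in the toy model can be checked with a few lines; this is a minimal sketch following the figures in the paragraph, where the dead simply drop out of the dataset:

```python
# Survivorship-bias toy model: rational spending on survival looks like
# lower average wealth when only the living appear in the statistics.
START_WEALTH = 100_000
SURGERY_COST = 40_000

# 50 rational and 50 irrational people; half of each group falls sick.
# Sick irrational people die (their wealth leaves the dataset);
# sick rational people pay for surgery and survive.
living_rational = [START_WEALTH] * 25 + [START_WEALTH - SURGERY_COST] * 25
living_irrational = [START_WEALTH] * 25  # the other 25 died

avg_rational = sum(living_rational) / len(living_rational)
avg_irrational = sum(living_irrational) / len(living_irrational)

print(avg_rational)    # 80000.0
print(avg_irrational)  # 100000.0
```

Among the living, the irrational group looks $20,000 richer on average, even though being rational strictly dominated: it never cost more than the alternative plus it saved your life.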
What is the actual relation between heterodoxy and crackpots?
A plausible-sounding explanation is that “disagreeing with the mainstream” can easily become a general pattern. You notice that the mainstream is wrong about X, and then you go like “and therefore the mainstream is probably also wrong about Y, Z, and UFOs, and dinosaurs.” Also there are the social incentives; once you become famous for disagreeing with the mainstream, you can only keep your fame by disagreeing more and more, because your new audience is definitely not impressed by “sheeple”.
On the other hand, there is a notable tendency of actual mainstream experts to start talking nonsense confidently about things that are outside their area of expertise. Which suggests an alternative model: perhaps it is natural for all smart people (including the ones who succeeded in becoming mainstream experts at some point in their lives) to become crackpots… it’s just that some of them stumble upon an important heterodox truth on the way.
So is it more like “heterodoxy leads to crackpottery”, or more like “heterodoxy sometimes happens as a side effect along the universal road to crackpottery”?
Apparently, crackpots are overconfident about their ability to find truth. Heterodox fame can easily contribute to such overconfidence, but is its effect actually significantly different from mainstream fame?
On the other hand, there is a notable tendency of actual mainstream experts to start talking nonsense confidently about things that are outside their area of expertise.
Any particular examples, or statistics that might shed some light on how common it is?
If it’s just that some people can think of a few really famous examples, that seems to point more in the direction of “extreme fame has side effects” (or the opposite: the benefits of confidence). But there are a lot of experts, so if the phenomenon were common...
Sadly, I have no statistics, just a few anecdotes—which is unhelpful to answer the question.
After more thinking, maybe this is a question of having a platform. Like, maybe there are many experts who have crazy opinions outside their area of expertise, but we will never know, because they have proper channels for their expertise (publish in journals, teach at universities), but they don’t have equivalent channels for their crazy opinions. Their environment filters their opinions: the new discoveries they made will be described in newspapers and encyclopedias, but only their friends on Facebook will hear their opinions on anything else.
Heterodox people need to find or create their own alternative platforms. But those platforms have weaker filters, or no filters at all. Therefore their crazy opinions will be visible alongside their smart opinions.
So if you are a mainstream scientist, the existing system will publish your expert opinions, and hide everything else. If you are not mainstream, you either remain invisible, or if you find a way to be visible, you will be fully visible… including those of your opinions that are stupid.
But as you say, fame will have the side effect that now people pay attention to whatever you want to say (as opposed to what the system allows to pass through), and some of that is bullshit. For a heterodox expert, the choice is either fame or invisibility.
There is this meme about Buddhism being based on experience, where you can verify everything firsthand, etc. I challenge the fans of Buddhism to show me how they can walk through walls, walk on water, fly, remember their past lives, teleport across a river, or cause an earthquake.
He wields manifold supranormal powers. Having been one he becomes many; having been many he becomes one. He appears. He vanishes. He goes unimpeded through walls, ramparts, & mountains as if through space. He dives in & out of the earth as if it were water. He walks on water without sinking as if it were dry land. Sitting cross-legged he flies through the air like a winged bird. With his hand he touches & strokes even the sun & moon, so mighty & powerful.
He recollects his manifold past lives, i.e., one birth, two births, three births, four, five, ten, twenty, thirty, forty, fifty, one hundred, one thousand, one hundred thousand, many aeons of cosmic contraction, many aeons of cosmic expansion, many aeons of cosmic contraction & expansion, [recollecting], ‘There I had such a name, belonged to such a clan, had such an appearance. Such was my food, such my experience of pleasure & pain, such the end of my life. Passing away from that state, I re-arose there. There too I had such a name, belonged to such a clan, had such an appearance. Such was my food, such my experience of pleasure & pain, such the end of my life. Passing away from that state, I re-arose here.’ Thus he recollects his manifold past lives in their modes & details.
(Digha Nikaya 12)
But when the Blessed One came to the river Ganges, it was full to the brim, so that crows could drink from it. And some people went in search of a boat or float, while others tied up a raft, because they desired to get across. But the Blessed One, as quickly as a strong man might stretch out his bent arm or draw in his outstretched arm, vanished from this side of the river Ganges, and came to stand on the yonder side.
This great earth, Ananda, is established upon liquid, the liquid upon the atmosphere, and the atmosphere upon space. And when, Ananda, mighty atmospheric disturbances take place, the liquid is agitated. And with the agitation of the liquid, tremors of the earth arise. [...] when an ascetic or holy man of great power, one who has gained mastery of his mind [...] develops intense concentration on the delimited aspect of the earth element, and to a boundless degree on the liquid element, he, too, causes the earth to tremble, quiver, and shake.
(Digha Nikaya 16)
IANAB, but the first half almost sounds like a metaphor for something like “all enlightened beings have basically the same desires/goals/personality, so they’re basically the same person and time/space differences of their various physical bodies aren’t important.” Not sure about the second half though.
I started a new blog on Substack. The first article is not related to rationality, just some ordinary Java programming: Using Images in Java.
Outside view suggests that I start many projects, but complete few. If this blog turns out to be an exception, the expected content of the blog is mostly programming and math, but potentially anything I find interesting.
The math stuff will probably be crossposted to LW, the programming stuff probably not—the reason is that math is more general and I am kinda good at it, while the programming articles will be narrowly specialized (like this one) and I am kinda average at coding. The decision will be made per article anyway.
When I started learning programming as a kid, my dream was to make computer games. Other than a few very simple ones I made during high school, I didn’t seriously pursue this direction. Maybe it’s time to restart the childhood dream. Game programming is different from the back-end development I usually do, so I will have to learn a few things. But maybe I can write about them while I learn. Then the worst case is that I will never make the games I imagine, but someone else with a similar dream may find my articles useful.
The math part will probably be about random topics that provoked my curiosity at the moment, with no overarching theme. At this moment, I have a half-written introduction to nonstandard natural numbers, but don’t hold your breath, because I am really slow at writing articles.
Prediction markets could create inadvertent assassination markets. No ill intention is needed.
Suppose we have fully functional prediction markets working for years or decades. The obvious idiots already lost most of their money (or learned to avoid prediction markets), most bets are made by smart players. Many of those smart players are probably not individuals, but something like hedge funds—people making bets with insane amounts of money, backed by large corporations, probably having hundreds of experts at their disposal.
Now imagine that something like COVID-19 happened, and people made bets on when it will end. The market aggregated all knowledge currently available to the humankind, and specified the date almost exactly, most of the bets are only a week or two away from each other.
Then someone unexpectedly finds a miracle cure.
Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident happens to the lucky researcher.
The stock market is already a prediction market, and there’s potentially profit to be made by assassinating a CEO of a company. We don’t see that happening much.
Then someone unexpectedly finds a miracle cure. Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident would happen to the lucky researcher.
Taffix might very well be a miracle treatment that prevents people from getting infected by COVID19 if used properly.
We already live in an environment where nobody listens to people providing supplements like that, and people like Winfried Stoecker get persecuted instead of getting support to bring their treatment to people.
Given that it takes 8–9 figures to provide the evidence for any miracle cure to be taken seriously, it’s not something that someone can just unexpectedly find in a way that moves existing markets in the short term.
There is an article from 2010 arguing that people may emotionally object to cryonics because cold is metaphorically associated with bad things.
Did the popularity of the Frozen movie change anything about this?
Well, there is the Facebook group “Cryonics Memes for Frozen Teens”...