Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.
But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?
How much your life is determined by your actions, and how much by forces beyond your control, that is an empirical question. You seem to believe it’s mostly your actions. I am not trying to disagree here (I honestly don’t know), just saying that people may legitimately have either model, or a mix thereof.
If your model is “your life is mostly determined by your actions”, then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.
If your model is “your life is mostly determined by forces beyond your control”, then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn’t have any actual impact on winning.
The mixed model “your life is partially determined by your actions, and partially by forces beyond your control” is more tricky. On one hand, it makes sense to focus on the part that you can change, because that’s where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you, have achieved it by superior strategy or superior luck.
Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (Richer than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn’t win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the “winners at life”, but on the other hand, don’t blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.
But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.
So the steelman of the objection in the mixed model would be something like: “Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck.”
A toy model to illustrate my intuition about the relationship between strategy and luck:
Imagine that there are four switches called A, B, C, D, and you can put each of them into position “on” or “off”. After you are done, switches A, B, C, D in the “on” position give you +1 point with probability 20%, 40%, 60%, 80% respectively, and −1 point with probability 80%, 60%, 40%, 20% respectively. A switch in the “off” position always gives you 0 points. (The points are proportional to utility.)
Also, let’s assume that most people in this universe are risk-averse, and only set D to “on” and the remaining three switches to “off”.
What happens in this universe?
The entire genre of “let’s find the most successful people and analyze their strategy” will insist that the right strategy is to turn all four switches to “on”. Indeed, there is no other way to score +4 points.
The self-help genre is right about turning on switch C, but wrong about switches A and B. Neither the conservative people nor the contrarians get the answer right.
The optimal strategy (setting A and B to “off”, and C and D to “on”) provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to “off”, and the people you know from self-help literature set both of them to “on” and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.
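My intuition here can be checked with a short brute-force calculation over all 16 possible strategies (a toy sketch; the probabilities are the ones from the model above):

```python
from itertools import product

# Probability that an "on" switch gives +1 (otherwise it gives -1).
p_plus = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}

def expected_points(on_switches):
    # An "on" switch contributes p*(+1) + (1-p)*(-1) = 2p - 1; "off" contributes 0.
    return sum(2 * p_plus[s] - 1 for s in on_switches)

# Evaluate every possible strategy (each subset of switches turned on).
strategies = {}
for bits in product([0, 1], repeat=4):
    on = [s for s, b in zip("ABCD", bits) if b]
    strategies["".join(on) or "none"] = expected_points(on)

best = max(strategies, key=strategies.get)
print(best, round(strategies[best], 1))   # CD 0.8
print(round(strategies["D"], 1))          # 0.6  (the conservative strategy)
print(round(strategies["ABCD"], 1))       # 0.0  (the self-help strategy)
```

Note that the “turn everything on” strategy breaks even in expectation, which is why it can look respectable if you only ever hear about its luckiest practitioners.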
(Of course, whatever are your beliefs, it is possible to build a model where acting on your beliefs is optimal, so this doesn’t prove much. It just illustrates why I believe that it is possible to achieve outcomes better than usual, and also that it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality, the impact of your actions is much greater than in this toy model, but the same caveats still apply.)
In reality it has to be a mixture, right? So many parts of my day are absolutely in my control, at least small things for sure. Then there are obviously a ton of things that are 100% out of my control. I guess the goal is to figure out how to navigate the two and find some sort of serenity. After all, isn’t that the old saying about serenity?

I often think about what you have said, as an addict. I personally don’t believe addiction to be a disease; my DOC is alcohol, and I don’t buy into the disease model of addiction. I think it is a choice, maybe a disorder of the brain, and partly semantics about the word “disease”. But I can’t imagine walking into a cancer ward full of children and saying “me too!” People don’t just get to quit cancer cold turkey. I also understand, like you’ve pointed out, that it is both: I have a predisposition to alcoholism because of genetics, and it’s also something I am aware of and a choice.

I thought I’d respond to your post since you were so kind as to reply to my stuff. I find this forum very interesting, and I am not nearly as intelligent as most here, but man, it’s fun to bounce ideas!
In reality it has to be a mixture, right?
Yeah, this is usually the right answer. Which of course invites additional questions, like which part is which...
With addiction, I also think it is a mixture of things. For example, trivially, no one would abuse X if X were literally impossible to buy, duh. But even before “impossible”, there is a question of “how convenient”. If they sell alcohol in the same shop you visit every day to buy fresh bread, it is more tempting than if you had to visit a different shop, simply because you get reminded regularly about the possibility.
For me, it is sweet things. I eat tons of sugar, despite knowing it’s not good for my health. But fuck, I walk past that stuff every time I go shopping, and even if I previously didn’t think about it, now I do. And then… well, I am often pretty low on willpower. I wish I had some kind of augmented reality glasses which would simply censor the things in the shop I have decided I want to live without. Like, I would see the bread, butter, white yoghurt, and some shapeless black blobs between them. It would be so much easier. (Kind of like an ad-blocker for the offline world. This may become popular in the future.)
Another thing that contributes to addiction is frustration and boredom. If I am busy doing something interesting, I forget the rest of the world, including my bad habits. But if the day sucks, the need to get “at least something pleasant, now” becomes much stronger.
Then it is about how my home is arranged and what habits I create. Things that are “under my control in the long term”: you don’t build a good habit overnight, but you can start building it today. For example, with a former girlfriend I had a deal that there is one cabinet that I will never open, and she needs to keep all her sweets there, never leaving them exposed on the table, so that I would not be tempted.
Thinking about the relation between enlightenment and (cessation of) signaling.
I know that enlightenment is supposed to be about cessation of all kinds of cravings and attachments, but if we assume that signaling is a huge force in human thinking, then cessation of signaling is a huge part of enlightenment.
Some random thoughts in that direction:
The paradoxical role of motivation in enlightenment—enlightenment is awesome, but a desire to be awesome is the opposite of enlightenment.
Abusiveness of the Zen masters towards their students: typically, the master tries to explain the nature of enlightenment using an unhelpful metaphor (I suppose, because most masters suck at explaining). Immediately, a student does something obviously meant to impress the master. The master goes berserk. Sometimes, as a consequence, the student achieves enlightenment. My interpretation is that realizing (on the System 1 level) that the master is an abusive asshole who actually sucks at teaching removes the desire to impress him; and because in this social setting the master was perceived as the only person worth impressing, this removes (at least temporarily) the desire to impress people in general.
A few koans are of the form: “a person A does X, a person B does X, the master says: A did the right thing, but B did the wrong thing”—the surface reading is that the first person reacted spontaneously, and the second person just (correctly) realized that X will probably be rewarded and tried to copy the motions. A more Straussian reading is that this story is supposed to confirm to the savvy reader that masters really don’t have any coherent criteria and their approval is pointless.
(There are more Straussian koans I can’t find right now, where a master says “to achieve enlightenment, you must know at least one thousand koans” and someone says “but Bodhidharma himself barely knew three hundred” and the master says “honestly I don’t give a fuck”… well, using more polite words, but the impression is that the certification of enlightenment is completely arbitrary and maybe you just shouldn’t care about being certified.)
Quite straightforward in Nansen’s cat—the students try to signal their caring and also their cleverness, and thus (quite predictably) fail to actually save the cat. (Joshu’s reaction to hearing this is probably an equivalent of facepalm.)
Stopping the internal speech in meditation—internal speech is practicing talking to others, which is mostly done to signal something. The first step towards cessation of signaling is to try spending 20 minutes without (practicing) signaling, which is already a difficult task for most people.
Meditation skills reducing suffering from pain—this gives me the scary idea that maybe we unconsciously increase our perception of pain, in order to better signal our pain. From a crude behaviorist perspective, if people keep rewarding your expression of pain (by their compassion and support), they condition you to express more pain; and because people are good at detecting fake emotions, the most reliable way to express more pain is to actually feel more pain. The scary conclusion is that a compassionate environment can actually make your life more painful… and the good news is that if you learn to give up signaling, this effect can be reversed.
America is now what anthropologists call a Kardashian Type Three civilisation: more than fifty percent of GDP is in the attention economy.
Stories by Greg Egan are generally great, but this one is… well, see for yourselves: In the Ruins
I was thinking about which possible parts of economy are effectively destroyed in our society by having an income tax (as an analogy to Paul Graham’s article saying that wealth tax would effectively destroy startups; previous shortform). And I think I have an answer; but I would like an economist to verify it.
Where I live, the marginal income tax is about 50%. Well, only a part of it is literally called “tax”; the other parts are called health insurance and social insurance… which in my opinion is misleading, because it’s not like the extra coin of income increases your health or unemployment risk proportionally; it should be called health tax and social tax instead… anyway, 50% is the “fraction of your extra coin the state will automatically take away from you”, which is what matters for your economic decisions about making that extra coin.
In theory, by the law of comparative advantage, whenever you are better at something than your neighbor, you should be able to arrange a trade profitable for both sides. (Ignoring the transaction costs.) But if your marginal income is taxed at 50%, such a trade would be profitable only if you are more than 2× better than your neighbor. And that still ignores the fixed costs (you need to study the law, do some things to comply with it, study the tax consequences, fill in the tax report or pay someone to do it for you, etc.), which are significant if you trade in small amounts, so in practice you sometimes need to be even 3× or 4× better than your neighbor to make a profit.
This means that the missing part of the economy is all those people who are better at something than their neighbors, but not 2×, 3×, or 4× better; at least not reliably. In an alternative tax system without income tax, they could engage in profitable trade with their neighbors; in our system, they don’t. And “being slightly better, but not an order of magnitude better at something” probably describes a majority of the population, which suggests there is a huge amount of possible value that is not being created because of the income tax.
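A back-of-the-envelope sketch of that threshold (the `trade_profitable` function and its parameters are my own illustration, not a real economic model):

```python
def trade_profitable(advantage, tax_rate, fixed_cost_ratio=0.0):
    """Rough sketch: you keep (1 - tax_rate) of what the neighbor pays you.

    The trade is worth it only if your after-tax earnings still beat what
    it would effectively cost the neighbor to do the work themselves.
    `advantage` is how many times better you are (e.g. 2.0 = twice as good);
    `fixed_cost_ratio` models compliance overhead as a fraction of the gross.
    """
    kept = advantage * (1 - tax_rate) * (1 - fixed_cost_ratio)
    return kept > 1.0

print(trade_profitable(1.5, tax_rate=0.0))  # True: any advantage works untaxed
print(trade_profitable(1.5, tax_rate=0.5))  # False: 1.5x is not enough at 50% tax
print(trade_profitable(2.5, tax_rate=0.5))  # True: more than 2x clears the bar
print(trade_profitable(2.5, 0.5, 0.3))      # False: fixed costs raise the bar further
```

The last case is the “3× or 4× better” situation: once compliance overhead eats a noticeable fraction of the gross, even a comfortable 2.5× advantage stops being profitable.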
Even worse, this “either you are an order of magnitude better, or go away” system creates barriers to entry in many places in the society. Unqualified vs qualified workers. Employees vs entrepreneurs. Whenever there is a jump required (large upfront investment for uncertain gain), fewer people cross the line than if they could walk across it incrementally: learn a bit, gain an extra coin, learn another bit, gain two extra coins… gradually approaching the limit of your abilities, and getting an extra income along the way to cover the costs of learning. The current system is demotivating for people who are not confident they could make the jump successfully. And it contributes to social unfairness, because some people can easily afford to risk a large upfront investment for uncertain gain, some would be ruined by a possible failure, and some don’t even have the resources necessary to try.
To reverse this picture, I imagine that in a society without income tax, many people would have multiple sources of income: they could have a job (full-time or part-time) and make some extra money helping their neighbors. The transition from an employee to an entrepreneur would be gradual; many would try it even if they don’t feel confident about going the entire way, because going halfway would already be worth it. And because more people would try, more would succeed; also, some of them would not have the skills to go the entire way at the beginning, but would slowly develop them along the way. Being an entrepreneur would not be stressful the same way it is now, and this society would have a lot of small entrepreneurs.
...and this kind of “bottom-up” economy feels healthier to me than the “top-down” economy, where your best shot at success is creating a startup for the purpose of selling it to a bigger fish. I suppose the big fish, such as Paul Graham, would disagree, but that’s the entire point: in a world without barriers to entry, you wouldn’t need to write motivational speeches for people to try their luck, they could advance naturally, following their incentives.
I think this is insightful, but my guess is that a society without income tax would not in fact be nearly as much better at providing opportunities for people who are kinda-OK-ish at things as you conjecture, and I further guess that more people than you think are at least 2x better at something than someone they can trade with, and furthermore (though it doesn’t make much difference to the argument here) I think something’s fundamentally iffy about this whole model of when people are able to find work.
Second point first. For there to be opportunities for you to make money by working, in a world with 50% marginal income tax, what you need is to be able to find someone you’re 2x better than at something, and then offer to do that thing for them.
… Actually, wait, isn’t the actual situation nicer than that? Roll back the income tax for a moment. You can trade profitably with someone else provided your abilities are not exactly proportional to one another, and that’s the whole point of “comparative advantage”. If you’re 2x worse at doing X than I am and 3x worse at doing Y, then there are profitable trades where you do some X for me and I do some Y for you. (Say it takes me one day to make either a widget or a wadget, and it takes you two days to make a widget and three days to make a wadget, and both of us need both widgets and wadgets. If we each do our own thing, then maybe I alternate between making widgets and wadgets, and get one of each every 2 days, and you do likewise and get one of each every 5 days. Now suppose that you only make widgets, making one every 2 days, and you give 3⁄5 of them to me, so that on average you keep one widget every 5 days, same as before. I am now getting 0.6 widgets from you every 2 days without having to do any work for them. Now every 2 days I spend 0.4 days making widgets, so I still have a total of one widget per 2 days, same as before. I spend another 1 day making one wadget for myself, so I still have a total of one wadget per 2 days, same as before; and another 0.4 days making wadgets for you, so you have one wadget per 5 days, same as before. At this point we are exactly where we were before, except that I have 10% of my time free, which I can use to make some widgets and/or wadgets for us both, leaving us both better off.)
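The time accounting in the widget/wadget example can be checked numerically (a sketch using the production rates assumed above; supplying you one wadget per 5 days costs me 0.4 days per 2-day cycle at my rate of one wadget per day):

```python
# Production times in days per item.
my_widget_days, my_wadget_days = 1.0, 1.0
your_widget_days, your_wadget_days = 2.0, 3.0

# Alone: I get one widget + one wadget every 2 days; you get one of each
# every your_widget_days + your_wadget_days = 5 days.
your_cycle = your_widget_days + your_wadget_days  # 5 days

# With trade, per 2-day cycle of mine: you make one widget (2 days) and
# give me 3/5 of it, keeping one widget per 5 days for yourself.
widgets_from_you = 3 / 5
my_widget_time = (1 - widgets_from_you) * my_widget_days  # 0.4 days topping up
my_own_wadget_time = 1 * my_wadget_days                   # 1.0 day
# You need one wadget per 5 days, i.e. 2/5 of a wadget per 2 days, costing me:
your_wadget_time = (2 / your_cycle) * my_wadget_days      # 0.4 days

used = my_widget_time + my_own_wadget_time + your_wadget_time
print(round(used, 2), round(2 - used, 2))  # 1.8 days used, 0.2 days (10%) free
```

Both of us end up with exactly the same goods as before, and the 0.2 spare days per cycle are the surplus the trade creates.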
I haven’t thought it through but I guess the actual condition under which you can work profitably if there’s 50% income tax might be “there’s someone else, and two things you can both do, such that [(your skill at A) / (your skill at B)] / [(their skill at A) / (their skill at B)] is at least 2”, whereas without the tax the only requirement is that the ratio be bigger than 1.
Anyway, that’s a digression and I don’t think it matters that much for present purposes. (If what you want is not merely to “earn a nonzero amount” but to “earn enough to be useful”, then probably you do need something more like absolute advantage rather than merely comparative advantage.) The point is that what you need is a certain kind of skill disparity between you and someone else, and the income tax means that the disparity needs to be bigger for there to be an employment opportunity.
But if you’re any good at anything, and if not everyone else is really good at everything—or, considering comparative advantage again, if you’re any good at anything relative to your other abilities, and not everyone else is too, then there’s an opportunity. And it seems to me that if you have learned any skill at all, and I haven’t specifically learned that same skill, then almost certainly you’ve got at least a 2x comparative advantage there. (If you haven’t learned any skills at all and are equally terrible at everything, and I have learned some skills, then you have a comparative advantage doing something I haven’t learned. But, again, that’s probably not going to be enough to earn you enough to be any use.)
OK, so that was my second point: surely 2x advantages are commonplace even for not-very-skilled workers. Only a literally unskilled worker is likely to be unable to find anything they can do 2x better than someone.
Moving on to my (related) first point, let’s suppose that there are some people who have only tiny advantages over anyone else. In principle, they’re screwed in a world with income tax, and doing fine in a world without, because in the latter they can find someone they’re a bit better than at something, and work for them. But in practice I’m pretty sure that almost everyone who is doing work that isn’t literally unskilled is (perhaps only by virtue of on-the-job training) doing it well over 2x better than someone completely untrained, and I suspect that actually finding and exploiting “1.5x” opportunities would be pretty difficult. If someone’s barely better than completely-unskilled, it’s probably hard to tell that they’re not completely unskilled, so how do they ever get the job, even in a world without income tax?
Finally, the third point. A few times above I’ve referred to “literally unskilled” workers. In point of fact, I think there are literally unskilled workers. That ought to be impossible even in a world without income tax. What’s going on? Answer: work isn’t only about comparative or absolute advantage in skills.

Suppose I am rich and I need two things done; one is fun and one is boring. I happen to be very good at both tasks. But I don’t wanna do the boring one. So instead I pay you (alas, you are poor) to do the boring task. Not because of any relevant difference in skill, but just because we value money differently because I’m rich and you’re poor, and you’re willing to do the boring job for a modest amount of money and I’m not. Everybody wins. Or suppose there’s no difference in wealth or skill between us, and we both need to do two things 100x each. Either of us will do better if we pick one thing and stick with it, so we don’t incur switching costs and get maximal gains from practice. So you do Thing One for me and I do Thing Two for you.

I think income taxes still produce the same sort of friction, and require the advantages (how much more willing you are to do boring work than me on account of being poor, how much we gain from getting more practice and avoiding switching costs) to be larger roughly in inverse proportion to how much of your income isn’t taxed, so this point is merely a quibble that doesn’t make much difference to your argument.
When you tell people which food contains given vitamins, also tell them how much of the food would they need to eat in order to get their recommended daily intake of given vitamin from that source.
As an example, instead of “vitamin D can be found in cod liver oil, or eggs”, tell people “to get your recommended intake of vitamin D, you should eat 1 teaspoon of cod liver oil, or 10 eggs, every day”.
The reason is that without providing quantitative information, people may think “well, vitamin X is found in Y, and I eat Y regularly, so I got this covered”, while in fact they may be eating only 1⁄10 or 1⁄100 of the recommended daily intake. When you mention quantities, it is easier for them to realize that they don’t eat e.g. half a kilogram of spinach each day on average (therefore, even eating spinach quite regularly doesn’t mean you got your iron intake covered).
The quantitative information is typically provided in micrograms or international units, which of course is something that System 1 doesn’t understand. To get an actionable answer, you need to make a calculation like “an average egg has 60 grams of yolk… a gram of cooked egg yolk contains 0.7 IU of vitamin D… the recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country… that means, 9-14 eggs a day, assuming I only get the vitamin D from eggs”. I can’t make the calculation in my head, because there is no way I would remember all these numbers, plus the numbers for other vitamins and minerals. But with some luck, I could remember “1 teaspoon of cod liver oil, or 10 eggs, for vitamin D”.
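To illustrate, the whole conversion fits in a few lines (the `portions_needed` helper is my own hypothetical; the numbers are the ones assumed above, which vary by egg size and country):

```python
def portions_needed(rda_iu, iu_per_gram, grams_per_portion):
    """How many portions of a food cover the recommended daily intake."""
    return rda_iu / (iu_per_gram * grams_per_portion)

# Numbers assumed in the text: ~60 g of yolk per egg, ~0.7 IU of vitamin D
# per gram of cooked yolk, RDA of 400 or 600 IU depending on the country.
for rda in (400, 600):
    print(rda, "IU:", round(portions_needed(rda, 0.7, 60), 1), "eggs")
```

This reproduces the 9–14 eggs figure, which is exactly the kind of number worth stating up front instead of the raw IU values.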
Obvious problem: the recommended daily intake differs by country, eggs come in different sizes, and probably contain different amounts of vitamin D per gram. Which is why giving the answer in eggs will feel irresponsible, and low status (you are exposing yourself to all kinds of nitpicking). Yes; true. But ultimately, the eggs (or whatever is the vegan equivalent of food) are what people actually eat.
recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country
This assumes that the RDAs those organizations publish are trustworthy. There are other organizations, like the Endocrine Society, that recommend an order of magnitude more vitamin D.
If the RDA of 400 or 600 IU were sensible, you could also cover it by spending a lot of time in the sun once every two weeks.
Have you tried using Cronometer or a similar nutrition-tracking service to quickly find these relationships? I’ve found Cronometer in particular to be useful because it displays each nutrient in terms of a percent of the recommended daily value for one’s body weight. For example, I can see that a piece of salmon equals over 100% of the recommended amount of omega-3 fatty acids for the day, while a handful of sunflower seeds only equals 20% of one’s daily value of vitamin E. Therefore, I know that a single piece of fish is probably enough, but that I should probably eat a larger portion of sunflower seeds than I would otherwise.
I suppose a percentage system like this one is just the reciprocal of saying something like “10 eggs contain the recommended daily amount of vitamin D.”
Thank you for the link! Glad to see someone uses the intuitive method. My complaint was about why this isn’t the standard approach. Like, recently I was reading a textbook on nutrition (the actual school textbook for cooks; I was curious what they learn), where the information was provided in the form of “X is found in A, B, C, D, also in E”, without any indication of how often you are supposed to eat any of these.
(If I said this outside of Less Wrong, I would expect the response to be: “more is better, of course, unless it is too much, of course; everything in moderation”, which sounds like an answer, but is not much.)
And with corona and the articles on vitamin D, I opened Wikipedia, saw “cod liver” as the top result, and thought: no problem, they sell it in the shop, it’s not expensive, and it tastes okay, I just need to know how much. Then I ran the numbers… and then I realized “shit, 99% of people will not do this, even if they get curious and read the Wikipedia page”. :(
I noticed recently that I almost miss the Culture War debates (on internet in general, nothing specific about Less Wrong). I remember that in the past they seemed to be everywhere. But in recent months, somehow...
I don’t use Twitter. I don’t really understand the user interface, and I have no intention to learn it, because it is like the most toxic website ever.
Therefore most Culture War content in English came to me in the past via Reddit. But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
Slate Star Codex has no new content. Yeah, there are “slatestarcodex” and “motte” debates on Reddit, but… I already mentioned Reddit.
Almost all newspaper articles in my native language are paywalled these days. No, I am not going to pay for your clickbait.
So… I am vaguely aware that Trump was an American president and now it is Biden (or is it still Trump, and Biden will be later? dunno), and there were (still are?) BLM protests in the USA. And in my country, the largest political party recently split in two, and I don’t even know the name of the new one, and I don’t even care, because what’s the point, the next election is in 3 years. Other than this… blissful ignorance.
And I am not asking you to fix my ignorance—neither am I trying to protect it; I just don’t want to invite political content to LW—just commenting on how weird this feels. And I didn’t even notice how this happened; only recently my wife asked me “so what is the latest political controversy you read about online”, and it was a shock to realize that I actually have no idea.
OK, here is the question: is this just about my bubble, or is it a global consequence of COVID-19 taking away attention from corona-unrelated topics?
This is your bubble, because in the relevant spaces they have largely incorporated COVID into the standard fighting and everything, not turned down the fighting at all. I think your bubble sounds great in lots of ways, and am glad to hear you have space from it all.
I guess in my ontology these new debates simply do not register as proper Culture Wars.
I mean, the archetypal Culture War is a conflict of values (“we should do X”, “no, we should do Y”) where I typically care to some degree about both, so it is a question of trade-offs; combined with different models of the world (“if we do A, B will happen”, “no, C will happen”); about topics that are already discussed in some form for a few decades or centuries, and that concern many people. Or something like that; not sure I can pinpoint it. It’s like, it must feel like a grand philosophical topic, not just some technical question.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War). To some degree it is interesting to steelman it, like to question, when people die having ten serious health problems at the same time, how do we choose the official cause of death; or if we just look at total deaths, how to distinguish the second-order effects, such as more depressed people committing suicides, but also fewer traffic deaths… but at the end of the day, you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu. (Or you could believe that the ventilators are just a hoax promoted by the government.) At the moment when even Putin’s regime officially admitted it is not a flu, I no longer see any reason to pay attention to this opinion.
Then we have this “lockdown” vs whatever is the current euphemism for just letting people die, which at least is the proper value conflict. And maybe this is about my privilege… that when people have to decide whether they’d rather lose their jobs or lose their parents, I am not that emotionally involved, because I think there is a high chance I can keep both regardless of what the nation decides to do collectively: I can work remotely, and my family voluntarily socially isolates… I am such a lucky selfish bastard, and apparently, so is my entire bubble. I mean, if you ask me, I am on the side of not letting people die, even if it means lower profits for one year. But then I hear those people complaining about how inconvenient it is to wear face masks, and how they just need to organize huge weddings, go to restaurants and cinemas and football matches… and then I realize that no one cares about my opinion on how to survive best, because apparently no one cares about surviving itself.
What else? There was this debate about whether Sweden is this magical country that doesn’t do anything about COVID-19 and yet COVID-19 avoids it completely, but recently I don’t even hear about them anymore. Maybe they all died, who knows.
Lucky bubble. Or maybe Facebook finally fixing their algorithm so that it only shows me what I want to see.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War).
My sense is “it’s just a flu” is a conflict of values; there are people for whom regular influenza is cause for alarm and perhaps changing policies (about a year ago, I had proposed to friends the thought experiment of an annual quarantine week, wondering whether it would actually reduce the steady-state level of disease or if I was confused about how that dynamical system worked), and there are people who think that cowardice is unbecoming and illness is an unavoidable part of life. That is, some think the returns to additional worry and effort are positive; others think they are negative.
you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu.
Often people describe medications as “safer than aspirin”, but this is sort of silly because aspirin is one of the more dangerous medications people commonly take, grandfathered in by being discovered early. In a normal year, influenza is responsible for over half of deaths due to infectious disease in the US; the introduction of a second flu would still be a public health tragedy, from my perspective.
(Most people, I think, are operating off the case fatality rate instead of the mortality per 100k; in 2018, influenza killed about 2.5X as many people as AIDS in the US, but people are much more worried about AIDS than the flu, and for good reason.)
But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
If—if there were a way to use the old Reddit UI, would you want to know about it?
Gur byq.erqqvg.pbz fhoqbznva yrgf lbh hfr gur byq vagresnpr.
Thank you; yes, I already know about it. But the fact that I have to remember, and keep switching when I click on a link found somewhere, is annoying enough already. (It would be less annoying with a browser plugin that does it automatically for me, and I am aware such plugins exist, but I try to keep my browser plugins at a minimum.) So, at the end of the day, I am aware that a solution exists, and I am still annoyed that I would need to take action to achieve something that used to be the default option. Also, this alternative will probably be removed at some point in the future, so I would just be delaying the inevitable.
remember, and keep switching when I click on a link found somewhere
(Only if you’re not logged in: there’s a user-preferences setting to use the old UI.)
1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.
(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation, were less successful at actually resisting the temptation compared to the kids who distracted themselves in order to forget about the marshmallows—the one that was there and the hypothetical two in the future—completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)
After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future… how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get… a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.
Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.
2) Readers of Less Wrong often report having problems with procrastination. Also, many provide an example when they realized at young age, on a deep level, that adults are unreliable and institutions are incompetent.
I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into “we are doomed, eat all your marshmallows now” mode.
This seems likely to me, although I’m not sure “superstimulus” is the right word for this observation.
It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.
Paul Graham’s article Modeling a Wealth Tax says:
The reason wealth taxes have such dramatic effects is that they’re applied over and over to the same money. Income tax happens every year, but only to that year’s income. Whereas if you live for 60 years after acquiring some asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
But wait, isn’t income tax also applied over and over to the same money? I mean, it’s not if I keep the money for years, sure. But if I use it to buy something from another person, then it becomes the other person’s income, gets taxed again; then the other person uses the remainder to buy something from yet another person, where the money gets taxed again; etc.
Now of course there are many differences. The wealth tax is applied at constant speed—the income tax depends on how fast the money circulates. The wealth tax is paid by the same person over and over again—the income tax is distributed along the flow of the money.
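As a toy illustration of this difference (the rates, the 60-year horizon, and the number of transactions below are numbers I made up, not anything from Graham’s article): a wealth tax compounds on the same held asset over time, while an income tax compounds along the chain of transactions as the money changes hands.

```python
# Toy comparison of the two compounding effects. All rates are illustrative.

def wealth_tax_remaining(asset, rate, years):
    """Value left of an asset held for `years`, taxed at `rate` each year."""
    return asset * (1 - rate) ** years

def income_tax_remaining(money, rate, hops):
    """Value left of money that passes through `hops` taxed transactions."""
    return money * (1 - rate) ** hops

# A 1% wealth tax over 60 years eats almost half the asset...
print(wealth_tax_remaining(100.0, 0.01, 60))  # ≈ 54.7

# ...while a 20% income tax eats a similar share after only three hops.
print(income_tax_remaining(100.0, 0.20, 3))   # ≈ 51.2
```

Same exponential decay in both cases; the difference is whether the exponent counts years of holding or rounds of circulation, and whether the loss lands on one person or is spread along the chain.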
Not sure what exactly my thesis is here. I just got a feeling that the income tax could actually have a similar effect, except distributed throughout the society, which makes it more difficult to notice and describe.
Also, affecting different types of people: wealth tax hits hardest the people who accumulate large wealth in short time and then keep it for long time; income tax hits hardest the people who circulate the money fastest. Or maybe the greatest victims of income tax are invisible—some hypothetical people who would circulate money extremely fast in an alternate reality where even 1% income tax is frowned upon, but who don’t exist in our reality because the two-digit income tax would make this behavior clearly unprofitable.
Am I just imagining things here, or does this correspond to something economists already have a name for? I vaguely remember something about tax, inflation, and multipliers. But who are those fast-circulators our tax system hits hardest? Graham’s article isn’t merely about how money affects money, but how it affects motivation and human activity (wealth tax → startups less profitable → fewer startups). What motivation and human activity is similarly affected by the recursive applications of the income tax?
To avoid misunderstanding, I am not asking the usual question: how many kids could we feed by taxing the startups more. I am asking, what kind of possible economic activity is suppressed by having a tax system that is income-based rather than wealth-based? In the trade-off, where one option would destroy the startups, what exactly is being destroyed by having the opposite option?
I would very much like to see a society where money circulates very quickly. I expect people will have many reasons to be happier and suffer less than they do now.
As you observe, income taxes encourage slowing down circulation of money, while wealth taxes speed up circulation of money (and creation of value), but I think there are better ways of assessing tax than those two. I suspect heavily taxing luxury goods which serve no functional purpose, other than to signal wealth, is a good direction to shift taxes towards, although there may be better ways I haven’t thought of yet.
Not answering your question, just some thoughts based on your post
In the meanwhile I remembered reading long ago about some alternative currencies. (Paper money; this was long before crypto.) If I remember it correctly, the money was losing value over time, but you paid no income tax on it. (It was explained that exactly because the money lost value, it was not considered real money, so getting it wasn’t considered a real income, therefore no tax. This sounds suspicious to me, because governments enjoy taxing everything, but perhaps just no one important noticed.)
As a result, people tried to get rid of this money as soon as possible, so it circulated really quickly. It was in a region with very high unemployment, so in absence of better opportunities people also accepted payment in this currency, but then quickly spent it. And, according to the story, it significantly improved the quality of life in the region—people who otherwise couldn’t get a regular job, kept working for each other like crazy, creating a lot of value.
But this was long ago, and I don’t remember any more details. I wonder what happened later. (My pessimistic guess is that the government finally noticed, and prosecuted everyone involved for tax evasion.)
Ah, good ol’ Freigeld
David Gerard (the admin of RationalWiki) doxed Scott Alexander on Twitter, in response to Arthur Chu’s call “if all the hundreds of people who know his real last name just started saying it we could put an end to this ridiculous farce”.
Dude, we already knew you were uncool, but this is a new low.
Technically, Chesterton fence means that if something exists for no good reason, you are never allowed to remove it.
Because, before you even propose the removal, you must demonstrate your understanding of a good reason why the thing exists. And if there is none...
More precisely, it seems to me there is a motte and bailey version of Chesterton fence: the motte is that everything exists for a reason; the bailey is that everything exists for a good reason. The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
On one hand, such explanations feel cheap. A conspiracy theorist could explain literally everything by “because evil outgroup did it to hurt people, duh”. On the other hand, yes, sometimes things happen because people are stupid or selfish; what exactly am I supposed to do if someone calls a Chesterton fence on that?
The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
If a fence is built because of regulatory capture, it’s usually the case that the lobbyists who argued for the regulation made a case for the law that isn’t just about their own self-interest.
It takes effort to track down the arguments that were made for the regulation, which go beyond the reasons you come up with by thinking about the issue yourself.
“Someone made a mistake” or “because a bad person did it to harm someone” are only valid answers if a single person could put up the fence without cooperation from other people. That’s not the case for any larger fence.
When laws and regulations get passed there’s usually a lot of thought going into them being the way they are that isn’t understood by everybody who criticizes them. It might be the case that everybody who was involved in the creation is now dead and they left no documentation for their reasons, but plenty of times it’s just a lack of research effort that results in not having a better explanation than “because of regulatory capture”.
Since when does it say you have to demonstrate your understanding of a good reason? The way I use and understand it, you just have to demonstrate your understanding of the reason it exists, whether it’s good or bad.
But I do think that people tend to miss subtleties with Chesterton’s fence. For example, recently someone told me Chesterton’s fence requires justifications for why to remove something, not for why it exists—which is close, but not it. It talks about understanding, not about justification.
At its core, it’s a principle against arguing from ignorance—arguments of the form “x should be removed because i don’t know why it’s there”.
I think people confuse it to be about justification because usually if something exists there’s a justification (else usually someone would have already removed it), and because a justification is a clearer signal of actual understanding, instead of plain antagonism, than a historic explanation.
My case was somewhat like this:
“X is wrong.”
“Use Chesterton fence. Why does X exist?”
“X exists because of incentives of the people who established it. They are rewarded for X, and punished for non-X, therefore...”
“That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.”
And, of course, maybe I am uncharitable and motivated. Happens to people all the time, why should I expect myself to be immune?
But at the same time I noticed how the seemingly neutral Chesterton fence can become a stronger rhetorical weapon if you are allowed to specify further criteria the proper answers must pass.
Right. I don’t think “That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.” is a valid response when talking about Chesterton’s fence. You only have to show that your understanding of why something exists is complete enough—that’s easier to signal with good reasons for why it exists, but if there aren’t any, then historic explanations are sufficient.
Chesterton’s fence might need a few clear Schelling fences so people don’t move the goalposts without understanding why they’re there ;)
Could you recommend me a good book on first-order logic?
My goal is to understand the difference between first-order and second-order logic, preferably deeply enough to develop an intuition for what can be done and what can’t be done using first-order logic, and why exactly it is so.
I am confused about metaantifragility.
It seems like there are a few predictions that the famous antifragility literature got wrong (and if you point it out on Twitter, you get blocked by Taleb).
But the funny part starts when you consider the consequences of such failed predictions on the theory of antifragility itself.
One possible interpretation is that, ironically, antifragility itself is an example of a Big Intellectual Idea that tries to explain everything, and then fails horribly when you start relying on it. From this perspective, Taleb lost the game he tried to play.
Another possible interpretation is that the theory of antifragility itself is a great example of antifragility. It does not matter how many wrong predictions it makes, as long as it makes one famous correct prediction that people will remember while ignoring the wrong ones. From this perspective, Taleb wins.
Going further meta, the first perspective seems like something an intellectual would prefer, as it considers the correctness or incorrectness of a theory; while the second perspective seems like something a practical person would prefer, as it considers whether writing about theory of antifragility brings fame and profit. Therefore, Taleb wins… by being wrong… about being right when others are wrong.
I imagine a truly marvelous “galaxy brain” meme of this, which this margin is too narrow to contain.
So I was watching random YouTube videos, and suddenly YouTube is like: “hey, we need to verify you are at least 18 years old!”
“Okay,” I think, “they are probably going to ask me about the day of my birth, and then use some advanced math to determine my age...”
...but instead, YouTube is like: “Give me your credit card data, I swear I am totally not going to use it for any evil purpose ever, it’s just my favorite way of checking people’s age.”
Thanks, but I will pass. I believe that giving my credit card data to strangers I don’t want to buy anything from is a really bad policy. The fact that all changes in YouTube seem to be transparently driven by a desire to increase revenue does not increase my trust. I am not sure what exactly could happen, but… I would rather wait a few months, and then read a story about how it happened to someone else.
And that’s why I don’t know how Tangled should have ended.
(What, you thought I was trying to watch some porn? No thanks, that would probably require me to give the credit card number, social security number, scans of passport and driving license, and detailed data about my mortgage.)
YouTube lets me watch the video (even while logged out). Is it a region thing?? (I’m in California, USA). Anyway, the video depicts
dirt, branches, animals, &c. getting in Rapunzel’s hair as it drags along the ground in the scene when she’s frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, “Okay, this is getting weird; I’m just gonna go.”
If you want to know how it really ends, check out the sequel series!
What is the easiest and least frustrating way to explain the difference between the following two statements?
X is good.
X is bad, but your proposed solution Y only makes things worse.
Does the fallacy of failing to distinguish between these two have a standard name? I mean, when someone criticizes Y, and the response is to accuse them of supporting X.
Technically, if Y is proposed as a cure for X, then opposing Y is evidence for supporting X. Like, yeah, a person who supports X (and believes that Y reduces X) would probably oppose Y, sure.
It becomes a problem when this is the only piece of evidence that is taken into account, and any explanations of either bad side effects of Y, or that Y in fact does not reduce X at all, are ignored, because “you simply like X” becomes the preferred explanation.
A discussion of actual consequences of Y then becomes impossible, among the people who oppose X, because asking this question already becomes a proof of supporting X.
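As a toy Bayes calculation of the “technically, it is evidence” point above (all the probabilities here are invented for illustration):

```python
# How much does "opposes Y" update us toward "supports X"?
# All numbers are made up to illustrate the shape of the update.
p_supports_x = 0.2                # prior: fraction of people who like X
p_oppose_given_supports = 0.9     # X-fans usually oppose the cure Y
p_oppose_given_not = 0.3          # but some X-opponents also oppose Y
                                  #   (side effects, or doubts that Y works)

# Total probability of observing "opposes Y"
p_oppose = (p_oppose_given_supports * p_supports_x
            + p_oppose_given_not * (1 - p_supports_x))

# Bayes: P(supports X | opposes Y)
posterior = p_oppose_given_supports * p_supports_x / p_oppose
print(round(posterior, 2))  # 0.43
```

With these numbers, opposing Y roughly doubles the odds of “supports X” (from 0.2 to about 0.43), but it is nowhere near proof; the fallacy is treating this one update as if it settled the question and ignoring all further evidence.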
More generally, a difference between models of the world is explained as a difference in values. The person making the fallacy not only believes that their model is the right one (which is a natural thing to believe), but finds it unlikely that their opponent could have a different model. Or perhaps they have a very strong prior that differences in values are much more likely than differences in models.
From inside, this probably feels like: “Things are obvious. But bad actors fake ignorance / confusion, so that they can keep plausible deniability while opposing proposed changes towards good. They can’t fool me though.”
Which… is not completely unfounded, because yes, there are bad actors in the world. So the error is in assuming that it is impossible for a good actor to have a different model. (Or maybe assuming too high base rate of bad actors.)
Sounds like a complex equivalence that simultaneously crosses the is-ought gap.
Prediction markets could create inadvertent assassination markets. No ill intention is needed.
Suppose we have fully functional prediction markets working for years or decades. The obvious idiots already lost most of their money (or learned to avoid prediction markets), most bets are made by smart players. Many of those smart players are probably not individuals, but something like hedge funds—people making bets with insane amounts of money, backed by large corporations, probably having hundreds of experts at their disposal.
Now imagine that something like COVID-19 happened, and people made bets on when it will end. The market aggregated all knowledge currently available to the humankind, and specified the date almost exactly, most of the bets are only a week or two away from each other.
Then someone unexpectedly finds a miracle cure.
Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident would happen to the lucky researcher.
The stock market is already a prediction market and there’s potentially profit to be made by assassinating a CEO of a company. We don’t see that happening much.
Then someone unexpectedly finds a miracle cure.

Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident would happen to the lucky researcher.
Taffix might very well be a miracle treatment that prevents people from getting infected by COVID19 if used properly.
We live in an environment where already nobody listens to people providing supplements like that, and people like Winfried Stoecker get persecuted instead of getting support to get their treatment to people.
Given that it takes 8-9 figures to provide the evidence for any miracle cure to be taken seriously, it’s not something that someone can just unexpectedly find in a way that moves existing markets in the short term.
There is an article from 2010 arguing that people may emotionally object to cryonics because cold is metaphorically associated with bad things.
Did the popularity of the Frozen movie change anything about this?
Well, there is the Facebook group “Cryonics Memes for Frozen Teens”...