In a perfect situation, it would be possible to achieve meaningful experiences without pain, but usually it is not. A person who optimizes for short-term pain avoidance will not reach the meaningful experience. Because optimizing for short-term pain avoidance is natural, we have to remind ourselves to overcome this instinct.
So, what now?
There are many free textbooks online (1, 2, 3...), so maybe choose math, download something that starts simple, and read it; if it is too complicated, put it away and download something else. Download the textbooks to a book reader, so you can read them while you travel, etc. If you get stuck somewhere, search for the answer online; if that fails, ask in the Less Wrong shortform.
You lost some time, but it is not too late to learn. If you start now, after ten years you will be happy that you did.
I used to be good at math in high school, but then I chose computer science at university, and ended up making stupid websites for 20 years. A few months ago, I decided to give it another try, and downloaded a few books. (I think I still have solid high-school knowledge, so I decided to go ahead and chose set theory.) The first time I read a book, I didn’t understand most of it. Then I read it again and did some of the exercises, and suddenly it made much more sense. Now I understand even some Wikipedia articles which are definitely not written for beginners. (The intersection between “understands an esoteric topic”, “can explain it clearly”, and “willing to edit Wikipedia articles” is small, sometimes nonexistent.) I don’t have much free time with a job and kids, but I try to regularly find an hour or two. But I am also picky; if a book doesn’t work for me, I throw it away and take another. That’s the advantage of free downloading. (Ahem.)
People usually substantially increase the amount of work they do, generally report higher levels of engagement, and very rarely just give up.
In the short term, sure. In the long term? I look around me and see people so tired of taking precautions against COVID-19 that they would rather die than spend another day wearing a face mask.
In the book, the time intervals were much longer, given the distances in the universe and the speed of light. People were capable of dramatic decisions when the threat was detected. A few years later, with the threat still on the way, they were already burned out. Sounds realistic to me.
And the “cosmic sociology” is Meditations on Moloch turned up to eleven.
I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.
Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D
Now I’ve started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. [...] you have to accept feeling stupid all the time. But I still look down that old road and I’m glad I’m not walking down it anymore.
Yeah, same here.
Building reputation by repeated interaction.
But it needs to be the type of interaction where you notice and remember the author. For example, if you go to LessWrong, you are more likely to associate “I read this on LessWrong” with the information, than if you just visited LessWrong articles from links shared on social networks. (And it is probably easier to remember Zvi than an average author at LessWrong, because Zvi recently posted a sequence of articles, which is easier to remember than an equal number of articles on unrelated topics.) You need to notice “articles by Zvi” as a separate category first, and only then your brain can decide to associate trust with this category.
(Slate Star Codex takes this a bit further, because for my brain it is easier to remember “I read this on SSC” than to remember the set of articles written by Scott on LessWrong. This is branding. If your quality is consistently high, making the fact “this was written by me” more noticeable increases your reputation.)
The flip side of the coin is that the culture of sharing hyperlinks on social networks destroys trust. If you read a hundred articles from a hundred different sources every day, your brain has a hard time keeping tabs. Before the internet, when you regularly read maybe 10 different journals, you gradually noticed that X is reliable and Y is unreliable, because sometimes you read ten reliable stories on one day and ten unreliable stories on a different day, and it felt different. But on the internet, there are a hundred websites, and you switch between them, so even if a few of them are notoriously bad, it is hard to notice. Even harder, because the same website can have multiple authors with wildly different quality. A scientist and a crackpot can have a blog on the same domain. With paper sources, the authors within one source were more balanced. (LessWrong is also kinda balanced, especially if you only consider the upvoted articles.)
If adblockers become too popular, websites will update to circumvent them. It will be a lot of work at the beginning, but it is probably possible.
The straightforward solution would be to move ad injection to the server side. The PHP (or whatever language) code generating the page would contact the ad server, download the ad, and inject it into the generated HTML file. From the client perspective, it is now all coming from the same domain; it is even part of the same page. The client cannot see the interaction between server and third party.
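A minimal sketch of that server-side flow, assuming Python on the server (all names here are illustrative, and `fetch_ad` stands in for a real server-to-server request to a hypothetical ad network):

```python
# Sketch of server-side ad injection: the server fetches the ad itself
# and splices it into the generated HTML, so the client only ever sees
# first-party content coming from one domain.

def fetch_ad(ad_server_url):
    """Placeholder for a server-to-server request to the ad network.
    A real implementation would use e.g. urllib.request."""
    return '<div class="ad">Buy bananas!</div>'

def render_page(template, ad_server_url):
    """Replace the ad placeholder with markup fetched on the server side."""
    ad_html = fetch_ad(ad_server_url)
    return template.replace("<!--AD_SLOT-->", ad_html)

page = render_page("<html><body><p>Article</p><!--AD_SLOT--></body></html>",
                   "https://ads.example.com/serve")
```

From the browser's point of view, `page` is a single document with no third-party request to block.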
The problem with this solution is that it is too easy for the server to cheat; to download a thousand extra ads without displaying them to anyone. The advertising companies must find a way to protect themselves from fraud.
But if smart people start thinking about it, they will probably find a solution. The solution doesn’t have to work perfectly, only statistically. For example, the server displaying the ad could also take the client’s fingerprint and send it to the advertising company. This fingerprint can of course be either real, or fictional if the server is cheating. But the advertising company could cross-compare fingerprints coming from a thousand servers. If many different servers report having noticed the same identity, the identity is probably real. If a server reports too many identities that no one else has ever seen, the identities are probably made up. The advertising company would suspect fraud if the fraction of unique identities reported by one server exceeded, say, 20%. Something like that.
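A toy version of this heuristic (the function name, the data, and the 20% threshold are all illustrative assumptions, not a real fraud-detection API):

```python
# Toy cross-comparison heuristic: flag servers whose reported visitor
# fingerprints are mostly never seen by any other server.
from collections import Counter

def suspicious_servers(reports, threshold=0.2):
    # `reports` maps each server to the set of fingerprints it claims to
    # have seen. First count on how many servers each fingerprint appears.
    seen_on = Counter(fp for fps in reports.values() for fp in fps)
    flagged = []
    for server, fps in reports.items():
        if not fps:
            continue
        # Fingerprints that no other server has ever reported.
        unique = sum(1 for fp in fps if seen_on[fp] == 1)
        if unique / len(fps) > threshold:
            flagged.append(server)
    return flagged

reports = {
    "honest1": {"a", "b", "c", "d", "e"},
    "honest2": {"a", "b", "c", "d", "f"},
    "cheater": {"x1", "x2", "x3", "x4", "a"},  # mostly invented identities
}
```

Here only `"cheater"` gets flagged: four of its five identities were never seen anywhere else, while the honest servers mostly overlap with each other.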
My opinion on marriage is conservative—people should get married when they want to have kids. They don’t make sacrifices for each other; together they pay the costs of creating a good environment for their kids to grow up in.
If you don’t want to have kids, you can have sex or live together without marriage too, and divorce has made marriage kinda useless as a signal of commitment. (Okay, there are other reasons, too, such as tax benefits.)
From this perspective, I am quite surprised that you see marriage as the opposite of a growth mindset. Making a commitment to radically change your everyday life for the next 20 years, and taking responsibility for challenges you have never experienced before, knowing that there is no way to stop this train without someone getting hurt...
Similarly, strategically making a sacrifice counts as “growth” in my books. (Jordan Peterson agrees.)
Of course, not knowing your friend’s buddy makes it impossible for me to guess whether his decision was a result of maturity or… something completely different.
They don’t want to take the risky leap in becoming fishermen. As long as they keep receiving enough fish, they’ll tolerate the misery.
Who knows what would happen if the risk became smaller, e.g. thanks to UBI. You seem to assume that people who don’t accept risk now are simply the type who would never take a risk. But maybe many people consider some smaller levels of risk acceptable (e.g. “there is a chance I will spend three years working on something that ultimately fails, and if I switch to a regular career later, I will be three years behind my peers”), and some higher levels of risk unacceptable (e.g. “there is a chance I will lose my lifelong savings and live in poverty, or get sick without having good healthcare”). And maybe too many people live in a situation where trying something revolutionary would require the unacceptable level of risk.
By the way, some people work in corporations because they need to accumulate the capital necessary for starting their own company. And some people work in corporations because their company failed and now they have to pay their debts. Both of these can take many years.
Yes, this is a motte of “emergence”.
The problematic part is when you turn the concept of “despite understanding the rules of all little pieces, it is still difficult for a human to predict some patterns of their interaction” into a noun, and then kinda suggest that it refers to a mysterious thing that many difficult-to-predict patterns have in common, and that there is a way to study this mysterious thing itself, and by doing so gain insight (going beyond “yep, complex things with many parts are often difficult to predict”) into all these difficult-to-predict patterns.
In other words, if you make it seem as if understanding of e.g. gliders and biological evolution (two examples of “emergence”) allows you to better predict stock markets (another example of “emergence”… therefore, they all should have something in common, and you can study that).
Quoting Eliezer: (source)
Taken literally, that description fits every phenomenon in our universe above the level of individual quarks [...] There’s nothing wrong with saying “X emerges from Y,” where Y is some specific, detailed model with internal moving parts. [...] Gravity arises from the curvature of spacetime, according to the specific mathematical model of General Relativity. Chemistry arises from interactions between atoms, according to the specific model of quantum electrodynamics.
The phrase “emerges from” is acceptable, just like “arises from” or “is caused by” are acceptable, if the phrase precedes some specific model to be judged on its own merits. However, this is not the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right.
Similar here. Reading the title, thinking “explaining how exponential complexity is worse than linear will be a piece of cake”. Reading the text, thinking “okay, how is this different from cybernetics?”
Even Wikipedia just says “study of complexity and complex systems”, and then points towards computational complexity and systems theory. Wikipedia has its flaws, but...
Even among the resources linked as “some courses/primers/introductions”, half of them do not contain the words “complexity theory” or “complexity science”. Which makes me doubt:
It is at least not 100% crackpottery, since some of the books are published by Princeton University Press and Oxford University Press.
Just because those books contain the word “complex” or “complexity”, doesn’t mean they support the idea of “complexity science”.
either A) most LW’ers aren’t investing in stocks
Does LW 2.0 still have the functionality to make polls in comments? (I don’t remember seeing any recently.) This seems like the question that could be easily answered by a poll.
Seems that we mostly agree here; the major disagreement is about terminology.
I disagree with the overly broad use of “shit-testing” to include… maybe not testing in general, but still more than the narrow meaning in the PUA literature… which is approximately “purposefully annoying your partner, in order to find out whether the partner is good at keeping their boundaries”.
I agree that if there are incompatibilities between people, it’s better to find them sooner rather than later. And that sometimes you need to search for the possible incompatibilities actively.
Ironically, Drawing on the Right Side of the Brain recommends as an exercise to draw the picture upside-down, so that the “forest” does not distract you from getting the details right.
(But it is not assumed that the resulting picture will be beautiful, and there is also no grid that would introduce artificial line bends.)
Perhaps a metaphor could be made for that, too, that sometimes focusing on the big picture prevents you from noticing that you got the details wrong, which can also ruin the outcome.
The phrase “If you can’t handle me at my worst, you don’t deserve me at my best” is sometimes the idea.
I would assume that your current “worst” is the best predictor of your future behavior. And frankly, shouldn’t I? I think it is a consensus among the people who use the word that shit-tests never end.
I am ambivalent about the whole idea of shit-testing. On one hand, it makes sense to test your partner’s reaction to your bad behavior. Because, if you stay together for a long time, sooner or later some bad behavior will happen; life will throw a lot of stress on you, and you will snap. You need the kind of partner who can survive it gracefully. If it is someone who would collapse, or go nuclear, that is a time-bomb; better avoid that.
On the other hand, if someone occasionally behaves badly even when everything goes fine, it doesn’t exactly give me confidence that the person will try their best when things get hard. When a life-or-death situation happens (and by the same logic, sooner or later it will), would you want your partner to choose exactly that moment for their next shit-test? And what makes you so sure they wouldn’t, if they already do it habitually?
So… shit-testing allows you to select a better partner… but at the same time, “being the kind of person who shit-tests their partner” makes you a worse partner. (Which is kinda your partner’s problem, not yours, but still...)
It’s like those “if you really love me, you will do X for me” situations, when someone demands an arbitrary sacrifice X as a proof of love. If you are too focused on signaling your love, you may miss the larger picture, which is that a person who loves you would not ask you to make arbitrary sacrifices. So you are setting yourself up for a one-sided relationship; and the right answer would be to walk away, and find someone else who is willing to reciprocate your love. (Even if you believe that sufficiently strong one-sided love may eventually elicit the same feelings in the other party, it still makes more sense to choose someone who will not abuse you before that happens, assuming it happens at all.)
How to get out of this dilemma? Arbitrarily testing your partner is bad, leaving them untested is dangerous...
Perhaps, if you could observe your partner in the tests that life throws at them naturally. That would require spending a lot of time together. If you want to speed it up, you could choose a situation that increases stress levels naturally, for some good reason. For example, spend a vacation in the mountains together. Or something else that gets you tired and uncomfortable, but for reasons better than one person choosing to annoy the other.
(I wonder if shit-testing was as frequent in the past, or whether it is an adaptation to the modern dating market where you have to test your partners quickly.)
Rationalists may be less likely than average to want kids, but that doesn’t mean none of us are having them.
Many people don’t want to have kids in their 20s, and change their minds later. Ten years from now, I could imagine many rationalists feeling ambivalent, and then something could start a chain reaction of having kids.
Actually, I think it would be super cool to have a generation of kids of approximately the same age, whose parents are rationalists living next to each other and can coordinate on school choice / homeschooling / providing extra lessons in free time.
Firms impose higher effort demands on workers; workers have to complete more tasks (for a higher wage) or be fired.
This sounds correct, but I thought it was specific to IT. I mean the popular trends of being a “full-stack developer” and “dev-ops”, which in my opinion both mean: -- Why should I hire two or three specialists, when one person could do everything alone? And if the project size requires hiring two or three people anyway, at least this will make them more replaceable, and I can immediately move one of them to another project when the worst crisis is over. And if being unable to maintain top-level expertise in too many things at the same time makes them feel like impostors, at least it will keep them humble.
Do you suggest it also happens in other industries? Your article has “technology” in its title, but seems to talk about the economy in general. Could you perhaps provide more specific information about other industries? Unfortunately, I am not qualified to comment on the second part of your article.
I will admit that the claim American workers have become Stakhanovites is a bold one. It’s the sort of claim that immediately raises all sorts of objections and questions, like: how is that even possible in a capitalist economy, and why hasn’t it also happened outside of the U.S.?
I don’t have enough data, but is it possible that this is more about IT (and also in other countries) than about Americans? Because the hypothesis “nerds suck at negotiation, and are easily brainwashed” would explain a few things we see. I mean, even comrade Stakhanov didn’t spend his free time improving his Github portfolio.
There are things like “lying for a good cause”, which is a textbook example of what will go horribly wrong because you almost certainly underestimate the second-order effects. Like the “do not wear face masks, they are useless” expert advice for COVID-19, which was a “clever” dark-arts move aimed at preventing people from buying up necessary medical supplies. A few months later, hundreds of thousands have died (also) thanks to this advice.
(It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)
Thinking about the historical record of people promoting the use of dark arts within the rationalist community, consider Intentional Insights. It turned out the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn’t they, given their expressed belief that this is the right thing to do?)
The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: “Hey, we are going to spread the lie X for instrumental reasons, don’t tell anyone!” :)
Then there are things like “using techniques-orthogonal-to-truth to promote true things”. Here I am quite guilty myself, because long ago I advocated turning the Sequences into a book, reasoning, among other things, that for many people a book is inherently higher-status than a website. Obviously, converting a website to a book doesn’t increase its truth value. This comes with smaller risks, such as getting high on your own supply (convincing ourselves that articles in the book are inherently more valuable than those that didn’t make it for whatever reason, e.g. being written after the book was published), or wasting too many resources on things that are not our goal.
But at least, in this category, one can openly and correctly describe their beliefs and goals.
Metaphorically, reason is traditionally associated with vision/light (e.g. “enlightenment”), ignorance and deception with blindness/darkness. The “dark side” also references Star Wars, which this nerdy audience is familiar with. So, if the use of the term itself is an example of dark arts (which I suppose it is), at least it is the type where I can openly explain how it works and why we do it, without ruining its effect.
But does it make us update too far against the use of deception? Uhm, I don’t know what the optimal amount of deception is. Unlike Kant, I don’t believe it’s literally zero. I also believe that people err on the side of lying more than is optimal, so a nudge in the opposite direction is on average an improvement, but I don’t have a proof of this.
In this example it is assumed that the entire island is literally owned by one person. So, if you wish, this person may be a metaphor for a strong centralized government.
Destroying your production capacity is a strategic mistake, and exposes you to blackmail in the future. A smart owner (or a smart centralized government) would not let that happen. If you want to give me free bananas, okay, I will take them; but I will still keep my banana plantation ready. That way, I get free bananas today and keep my ability to produce bananas tomorrow.
(And the other side of the same coin is that a smart owner—or centralized government—will try to expand their future production capacities. For example, if today it is more profitable for me to grow bananas than to write computer software, I might strategically decide to write software anyway, at least part-time, because two or three years later my software-writing skills are likely to increase dramatically, while my banana-growing skills would probably remain the same. So the comparative advantage of tomorrow may reward me for writing software, but in order to get there, I need to accept some disadvantage today.)
That said, another question is whether subsidies are the best way to keep your production capacity, and what amount of subsidies is optimal. (Of course, the farmers will always say “more is better”, for obvious reasons.) If we discuss real-life agriculture, I would even challenge which types of products we should subsidize: if the goal is to prevent starving, we probably do not need to protect our meat production—if other countries keep giving us cheap meat, let them; and if they suddenly stop doing that (in the unlikely case that all meat-subsidizing countries coordinate to do it in the same year), we may have a year or two of a mostly vegetarian diet, but no one is going to die.
In other words, although some protection of production capacity is strategically important, it doesn’t necessarily follow that farming subsidies, as we know them now, are anywhere near the optimal solution. (Specifically, I think that subsidies of meat production are completely unnecessary—it is unlikely that all other countries would stop subsidizing meat in the same year, and in the unlikely case that happened, we would survive anyway.)
In general, yes, but there can be other factors that reduce the possibility to interact with many possible partners.
Geographical local monopolies—there are thousands of islands in the ocean, but most of them are too far from your home. You could replace your nearest trade partner with someone farther away, at an extra cost; and if your nearest trade partner pushes you too far, you will do it. But within that interval, the negotiation is important.
Upfront transaction costs—even if the trade partners are equivalent, it is costly to start interacting with a new one (you have to do a complicated background check, you need to adapt to their specifics); this again creates an extra cost of switching, and an interval within which it is about negotiation.
Both can apply at the same time.
There is also a gray line between “cartel” and “people doing the same thing, acting selfishly, but updating on their competitors’ past actions”. To make it simple, imagine that a fair price for a ton of bananas is $100. (Fair price = what the market balance would be if anyone could trade with anyone, in a world with zero transaction costs.) But there is an $8 cost for trading with someone who is not your geographically nearest trade partner. In this situation, the banana buyers can individually precommit to buy at e.g. $95, because they know that you will prefer to sell to them for $95 rather than sell to someone else for $100, pay $8 for transit, and only keep $92.
Now imagine the banana buyers have a website, where they publicly share their experience. (This is perfectly legal, right?) And there is this highly upvoted article called: “Don’t buy bananas for $100, you can get them for $95 using game theory”. It becomes common knowledge that the banana sellers suck at negotiation (they don’t have an analogous website), and that most banana buyers only pay $95. -- Armed with this knowledge, you can now precommit to only pay $90 for a ton of bananas next year, because now it is known that the best price your neighbor can get from anyone else is $95.
How many iterations can happen depends on the exact shape of diminishing returns. For example, even if I was willing to pay $100 for my first ton of bananas, but using my power of precommitment I already got it from my neighbor for $85, I am probably not willing to pay $100 for the second ton. Suppose the second ton of bananas is only worth $90 to me. But to obtain it from someone who is not my neighbor, I would have to pay $85 + $8, which is more. So I will not defect against the new equilibrium. -- Here I act almost like a cartel member (my first ton of bananas is worth $100 to me, and in the end I only buy one ton, and yet I precommitted to not pay more than $85), but I am still only following my selfish incentives, and at no point am I sacrificing a potential extra profit in favor of keeping the balance.
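The arithmetic of this iterated precommitment can be sketched as follows (the $3 margin is my arbitrary assumption, chosen only to reproduce the $95 and $90 offers; the $8 transit cost is from the example):

```python
# Sketch of the iterated precommitment dynamic from the banana example.
# Assumption: the seller's outside option is the best publicly known
# price elsewhere minus the $8 transit cost, and a buyer precommits to
# an offer slightly above that outside option.

TRANSIT_COST = 8

def seller_outside_option(best_known_price_elsewhere):
    # What the seller could keep by selling to a more distant buyer.
    return best_known_price_elsewhere - TRANSIT_COST

def next_precommitment(best_known_price_elsewhere, margin=3):
    # Offer just enough above the outside option to be accepted;
    # `margin` is an arbitrary buffer matching the comment's numbers.
    return seller_outside_option(best_known_price_elsewhere) + margin

offer1 = next_precommitment(100)     # fair price $100 is common knowledge
offer2 = next_precommitment(offer1)  # now everyone knows buyers pay $95
```

Each round of publicly shared knowledge lowers the seller's best known alternative, which in turn lowers the next profitable precommitment.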
I feel like I am reinventing here the Marxist class conflict, in a more general form, with emphasis on sharing negotiation tactics. The essence is that one side shares their negotiation tricks, which work individually even if no one else is using them (this is what makes it not a cartel), but quickly become a new standard if shared; and the new standard—and the common knowledge thereof—becomes a more powerful leverage (this is what makes it cartel-like in effect) in the following iteration of negotiation. The power to say: “Yes, you noticed that I am using this dirty trick against you, but we both know that all my competitors use exactly the same trick, so you cannot punish me by switching to another. And it is perfectly legal, because we coordinated this publicly. Your side as a whole sucks at negotiation, my side successfully turned it into a global leverage, and you as an individual face an uphill battle here.”
1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.
(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation were less successful at actually resisting the temptation than the kids who distracted themselves in order to forget about the marshmallows—the one that was there and the hypothetical two in the future—completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)
After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future… how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get… a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. -- And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.
Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.
2) Readers of Less Wrong often report having problems with procrastination. Also, many provide an example of when they realized at a young age, on a deep level, that adults are unreliable and institutions are incompetent.
I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into “we are doomed, eat all your marshmallows now” mode.
A systematically oppressed group can still be wrong. Being oppressed gives you an experience other people don’t have, but doesn’t give you epistemic superpowers. You can still derive wrong conclusions, despite having access to special data.
Anecdote time: When I was a kid, I was bullied by someone who did lots of sport. As a result, I developed an unconscious aversion to sport. (Because I didn’t want to be like him, and I didn’t want to participate in things that reminded me of him.) Obviously, this only further reduced the quality of my life. Years later, I found some great friends, who also did lots of sport. Soon, the aversion disappeared. My unconscious decided it was actually okay to be like them.
Maybe I am generalizing my experience too much, but looking at some groups, it seems like they follow the same algorithm (sometimes, so far, without the happy ending). At some moment in history, your group happens to be at the bottom of the social ladder. Others—the bad guys—have the money, the education, the institutions, etc. Your group starts associating money, education, and institutions with the bad things that were done to them. The difference is that when this happens on a group level, the belief gets reinforced culturally, because your friends and family all had the same experience.
A few decades or centuries later, your group also gets access to education, money, and institutions. (And I am not necessarily talking about equal access here; just about some access, as opposed to your ancestors who had none.) But now everyone knows that these are things your people traditionally don’t have, and whoever aspires to get them is perceived as a traitor, as someone who wants to join the bad guys. You cannot discuss rationally whether getting more education, more money, and more of your people into institutions is actually a good thing for your group because it increases your individual and collective power. The group as a whole is flinching away from the painful experience in the collective memory, and the individuals who go against the grain get punished.
(An example would be black people policing each other against “acting white”, but a similar mechanism applies in situations where one group of white people was historically oppressed by another group of white people, because of different language or religion or whatever.)
But of course, there may also be legitimate reasons to distrust strategies that work for other people. For example, education means acquiring debt in return for higher expected income in the future. If you know that the “higher income” part is not going to happen, e.g. because of racism, then education is not as profitable for you as it would be for the majority.