Okay! It wasn’t intended as prescriptive but I can see it as being implicitly that.
What do you think I’m rationalizing?
That’s a pseudonym Duncan used at one point, see e.g. the first line of this comment.
That makes sense to me, though I feel unclear about whether you think this post is an example of that pattern / whether your comment has some intent aimed at me?
There’s something about this framing that feels off to me and makes me worry that it could be counterproductive. I think my main concerns are something like:
1) People often figure out what they want by pursuing things they think they want and then updating on the outcomes. So making them less certain about their wants might prevent them from pursuing the things that would give them the information for actually figuring it out.
2) I think that people’s wants are often underdetermined and they could end up wanting many different things based on their choices. E.g. most people could probably be happy in many different kinds of careers that were almost entirely unlike each other, if they just picked one that offered decent working conditions and committed to it. I think this is true for a lot of things that people might potentially want, but to me the framing of “figure out what you want” implies that people’s wants are a lot more static than this.
I think this 80K article expresses these kinds of ideas pretty well in the context of career choice:
The third problem [with the advice of “follow your passion”] is that it makes it sound like you can work out the right career for you in a flash of insight. Just think deeply about what truly most motivates you, and you’ll realise your “true calling”. However, research shows we’re bad at predicting what will make us happiest ahead of time, and where we’ll perform best. When it comes to career decisions, our gut is often unreliable. Rather than reflecting on your passions, if you want to find a great career, you need to go and try lots of things.
The fourth problem is that it can make people needlessly limit their options. If you’re interested in literature, it’s easy to think you must become a writer to have a satisfying career, and ignore other options.
But in fact, you can start a career in a new area. If your work helps others, you practice to get good at it, you work on engaging tasks, and you work with people you like, then you’ll become passionate about it. The ingredients of a dream job that we’ve found are most supported by the evidence are all about the context of the work, not the content. Ten years ago, we would have never imagined being passionate about giving career advice, but here we are, writing this article.
Many successful people are passionate, but often their passion developed alongside their success, rather than coming first. Steve Jobs started out passionate about zen buddhism. He got into technology as a way to make some quick cash. But as he became successful, his passion grew, until he became the most famous advocate of “doing what you love”.
Comment retracted because right after writing it, I realized that the “leastwrong” is a section on LW, not its own site. I thought there was a separate leastwrong.com or something. In this case, I have much less of a feeling that it makes a global claim.
Edit: An initial attempt: “The LeastWrong” feels a bit like a global claim of “these are the least wrong things on the internet”.
This is how it feels to me.
Whether you can find a logic in which that interpretation is not coherent doesn’t seem relevant to me. You can always construct a story according to which a particular association is actually wrong, but that doesn’t stop people from having that association. (And I think there are reasonable grounds for people to be suspicious about such stories, in that they enable a kind of motte-and-bailey: using a phrasing that sends the message X, while saying that of course we don’t mean to send that message and here’s an alternative interpretation that’s compatible with that phrasing. So I think that a lot of the people who’d find the title objectionable would be unpersuaded by your alternative interpretation, even assuming that they bothered to listen to it, and they would not be unreasonable to reject it.)
Software/internet gives us much better ability to find.
And yet...
The past few decades have recorded a steep decline in people’s circle of friends and a growing number of people who don’t have any friends whatsoever. The number of Americans who claim to have “no close friends at all” across all age groups now stands at around 12% as per the Survey Center on American Life.
The percentage of people who say they don’t have a single close friend has quadrupled in the past 30 years, according to the Survey Center on American Life.
Friendlessness has long been known to be more common among men, but it is nonetheless affecting everyone. The general change since 1990 is illustrated below.
Taken from “Adrift: America in 100 Charts” (2022), pg. 223. As a detail, note the drastic drop of people with 10+ friends, now a small minority.
The State of American Friendship: Change, Challenges, and Loss (2021), pg. 7
Although these studies are more general estimates of the entire population, it looks worse when we focus exclusively on generations that are more digitally native. When polling exclusively American millennials, a pre-pandemic 2019 YouGov poll found 22% have “zero friends” and 30% had “no best friends.” For those born between 1997 and 2012 (Generation Z), there has been no widespread, credible study done yet on this question — but if you’re adjacent to internet spaces, you already intuitively grasp that these same online catalysts are deepening for the next generation.
Still, the fact that individual companies, for instance, develop layers of bureaucracy is not an argument against having a large economy.
This is true in principle, but population growth has led to the creation of larger companies in practice. Here’s what ChatGPT said when I asked it what proportion of the economy is controlled by the 100 biggest companies:
For a rough estimate, consider the market capitalization of the 100 largest public companies relative to GDP. As of early 2023, the market capitalization of the S&P 100, which includes the 100 largest U.S. companies by market cap, was several trillion USD, while the U.S. GDP was about 23 trillion USD. This suggests a significant but not dominant share, with the caveat that market cap doesn’t directly translate to economic contribution.
And if the population in every country would grow, then we’d end up with larger governments even if we kept the current system and never established a world government. To avoid governments getting bigger, you’d need to actively break up countries into smaller ones as their population increased. That doesn’t seem like a thing that’s going to happen.
A possible countertrend would be something like diseconomies of scale in governance. I don’t know the right keywords to find the actual studies on this. Still, it generally seems to me like smaller nations and companies are better run than bigger ones, as the larger ones develop more middle management and organizational layers mainly incentivized to manipulate themselves rather than to do the thing they’re supposedly doing. This does not just waste the resources of the government itself, it also damages everyone else as the legislation they enact starts getting worse and worse. And the larger the system becomes, the harder any attempts to reform it become.
Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage.
I’m skeptical of this; “better matching” implies “better ability to find”. But just increasing the size of the population does not imply a better chance to find the best matches, given that it also increases the number of non-matches proportionally. And I think it’s already the case that the ability to find the people is a much bigger bottleneck than just their existence.
It’s also worth noting that as the population grows, so does the number of competitors. Maybe a 100x bigger population would have 100x the lyricists, but it may also have 100x the people wanting to hire those lyricists for themselves.
(Similar points also apply to the other “better matching” items.)
Which religion claims nothing supernatural at all happened?
Secular versions of Buddhism, versions of neo-paganism that interpret themselves to ultimately be manipulating psychological processes, religions whose conception of the divine is derived from scientific ideas, etc. More generally, many religions that define themselves primarily through practice rather than belief can be compatible with a lack of the supernatural (though of course aren’t necessarily).
Agree. The advice I’ve heard for avoiding this is, instead of saying “try X”, ask “what have you already tried” and then ideally ask some follow-up questions to further probe why exactly the things they’ve tried haven’t worked yet. You might then be able to offer advice that’s a better fit, and even if it turns out that they actually haven’t tried the thing, it’ll likely still be better received because you made an effort to actually understand their problem first. (I’ve sometimes used the heuristic, “don’t propose any solutions until you could explain to a third party why this person hasn’t been able to solve their problem yet”.)
At that moment, he was enlightened.
I somehow felt fuzzy and nice reading this; it’s so distinctly your writing style and it’s nice to have you around, being you and writing in your familiar slightly quirky style. (It also communicated the point well.)
While he doesn’t explicitly use the word “prediction” that much in the post, he does talk about “anticipated experiences”, which around here is taken to be synonymous with “predicted experiences”.
I don’t fully understand the actual math of it so I probably am not fully getting it. But if the core idea is something like “you can at every timestep take new experiences and then choose how to integrate them into a new you, with the particulars of that choice (and thus the nature of the new you) drawing on everything that you are at that timestep”, then I like it.
I might quibble a bit about the extent to which something like that is actually a conscious choice, but if the “you” in question is thought to be all of your mind (subconsciousness and all) then that fixes it. Plus making it into more of a conscious choice over time feels like a neat aspirational goal.
… now I do feel more of a desire to live some several hundred years in order to do that, actually.
I have read that book, but it’s been long enough that I don’t really remember anything about it.
Though I would guess that if you were to describe it, my reaction would be something along the lines of “if you want to have a theory of identity, sounds as valid as any other”.
I suspect that a short, private conversation with your copy would change your mind
Can you elaborate how?
E.g. suppose that it was the case that I would get copied, and then one of us would be chosen by lot to be taken in front of a firing squad while the other could continue his life freely. I expect—though of course it’s hard to fully imagine this kind of a hypothetical—that the thought of being taken in front of that firing squad and never seeing any of my loved ones again would create a rather visceral sense of terror in me. Especially if I was given a couple of days for the thought to sink in, and I wouldn’t just be in a sudden shock of “wtf is happening”.
It’s possible that the thought of an identical copy of me being out there in the world would bring some comfort to that, but mostly I don’t see how any conversation would have a chance of significantly nudging those reactions. They seem much too primal and low-level for that.
I said “in some sense”, which grants the possibility that there is also a sense in which personal identity does exist.
I think the kind of definition that you propose is valid but not emotionally compelling in the same way as my old intuitive sense of personal identity was.
It also doesn’t match some other intuitive senses of personal identity, e.g. if you managed to somehow create an identical copy of me then it implies that I should be indifferent to whether I or my copy live. But if that happened, I suspect that both of my instances would prefer to be the ones to live.
Do you mean s-risks, x-risks, age of em style future, stagnation, or mainstream dystopic futures?
“All of the above”—I don’t know exactly which outcome to expect, but most of them feel bad and there seem to be very few routes to actual good outcomes. If I had to pick one, “What failure looks like” seems intuitively most probable, as it seems to require little else than current trends continuing.
I am suspicious about claims of this sort. It sounds like a case of “x is an illusion. Therefore, the pre-formal things leading to me reifying x are fake too.”
That sounds like a reasonable thing to be suspicious about! I should possibly also have linked my take on the self as a narrative construct.
Though I don’t think that I’m saying the pre-formal things are fake. At least to my mind, that would correspond to saying something like “There’s no lasting personal identity so there’s no reason to do things that make you better off in the future”. I’m clearly doing things that will make me better off in the future. I just feel less continuity to the version of me who might be alive fifty years from now, so the thought of him dying of old age doesn’t create a similar sense of visceral fear. (Even if I would still prefer him to live hundreds of years, if that was doable in non-dystopian conditions.)
No worries! I’ll reply anyway for anyone else reading this, but it’s fine if you don’t respond further.
It sounds like we have different ideas of what it means to identify as something. For me, one of the important functions of identity is as a model of what I am, and as what distinguishes me from other people. For instance, I identify as Finnish because of reasons like having a Finnish citizenship, having lived in Finland for my whole life, Finnish being my native language etc.; these are facts about what I am, and they’re also important for predicting my future behavior.
For me, it would feel more like rationalization if I stopped contributing to something like transhumanism but nevertheless continued identifying as a transhumanist. My identity is something that should track what I am and do, and if I don’t do anything that would meaningfully set me apart from people who don’t identify as transhumanists… then that would feel like the label was incorrect and imply wrong kinds of predictions. Rather, I should just update on the evidence and drop the label.
As for transhumanism as a useful idea of what to aim for, I’m not sure of what exactly you mean by that, but I haven’t started thinking “transhumanism bad” or anything like that. I still think that a lot of the transhumanist ideals are good and worthy ones and that it’s great if people pursue them. (But there are a lot of ideals I think are good and worthy ones without identifying with them. For example, I like that museums exist and that there are people running them. But I don’t do anything about this other than occasionally visit one, so I don’t identify as a museum-ologist despite approving of them.)
Hmm, none of these. I’m not sure of what the first one means but I’d gladly have a solution that led to aligned AI, I use LLMs quite a bit, and AI clearly does seem like the most important near-future thing.
“Pinning my hopes on AI” meant something like “(subconsciously) hoping to get AI here sooner so that it would fix the things that were making me anxious”, and avoiding that just means “noticing that therapy and conventional things like that work better for fixing my anxieties than waiting for AI to come and fix them”. This too feels to me like actually updating on the evidence (noticing that there’s something better that I can do already and I don’t need to wait for AI to feel better) rather than like rationalizing something.