Hi all, I’m Hari. Funnily enough, I found LessWrong after watching a YouTube video on R***’s b*******. (I already had some grasp of the dynamics of internet virality, so, no, I did not take it as saying anything substantive about the community at large.)
My background spans many subjects, but I tend to focus on computer science, psychology, and statistics. I’m really interested in figuring out the most efficient way to do various things: the most efficient way to learn, the fastest way of arriving at a correct belief, how to communicate the most information in the fewest words, etc. So I read the Sequences, and LessWrong just felt like a natural fit. And as you can imagine, I don’t have much tolerance for broken, inefficient systems, so I quit college and avoid large parts of the internet.
LessWrong is like a breath of fresh air away from all that dysfunction, and I’m really grateful for it. (My only problem is that I can spend hours lost in comment sections and rabbit holes!) I think it’s a good time for me to start contributing some of my own thoughts. Here are a few questions/requests I have:
Firstly, I’ve been trying to refine my information diet, but it gets harder with blogs whose most valuable posts are older. For example, I see Marginal Revolution mentioned often, but it doesn’t have a “best of” post that I can start with. There’s also the dreaded linkrot.
Secondly, I’m wondering to what extent the expert blind spot has been covered on LW. It seems especially important given the varied backgrounds and the number of polymaths here.
Thirdly, I wanted to get some feedback on my thoughts on anthropics. After scanning some prior work, I found a lot of it unnecessarily long and more technical than it needs to be. But I think the topic has real practical implications that are important to think through.
If you combine anthropics, many-worlds, timeless physics, and some decision theory, there is a consistent logic here. The simplest way I can think of to explain it is to imagine a timeless dartboard that holds the distribution of everyone’s conscious experience across time. A dart thrown at random is more likely to land on the people with the most conscious experience across time. This addresses the anthropic trilemma: you still lose, because your conscious experience across time in the losing worlds vastly outweighs the trillion yous in the winning worlds, which only exist for a thin slice of time.
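To make the dartboard picture concrete, here is a toy calculation of the lottery version of the trilemma. Every number is an assumption I made up for illustration, not an estimate:

```python
# Toy model of the experience-weighted "dartboard" applied to the lottery
# version of the anthropic trilemma. Every number is a made-up assumption.

p_win = 1e-9                             # chance the lottery branch wins
copies = 1e12                            # momentary copies created on a win
copy_minutes = 1                         # how long each copy exists before merging
lifetime_minutes = 40 * 365 * 24 * 60    # ~40 remaining years of ordinary life

# Experience-measure of "being one of the trillion winners for that minute".
win_slice = p_win * copies * copy_minutes

# Experience-measure of ordinary life in the losing worlds.
lose_rest = (1 - p_win) * lifetime_minutes

print(f"winning-slice measure: {win_slice:.3g} person-minutes")
print(f"losing-world measure:  {lose_rest:.3g} person-minutes")
print(f"share of the dartboard that is 'I just won': "
      f"{win_slice / (win_slice + lose_rest):.2e}")
```

With these made-up numbers, the dart lands on ordinary losing-world experience tens of thousands of times more often than on the trillion momentary winners.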
This then implies doom soon, along the lines of the Doomsday argument that Nick Bostrom discusses. But the probability of doom it implies seems implausibly high. So perhaps humanity decides to expand current consciousnesses rather than creating new ones. There are also decision-theoretic reasons for humanity to support this: if you didn’t contribute anything to the intelligence explosion, then why should you exist?
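For reference, the bare Doomsday-style update looks something like this; the population figures are rough illustrative assumptions:

```python
# Minimal birth-rank (Doomsday) update, Self-Sampling Assumption style.
# The population figures are rough illustrative assumptions.

my_rank = 1.1e11                        # humans born before me, roughly
hypotheses = {
    "doom soon":          2e11,         # total humans ever, if we die out shortly
    "galactic expansion": 1e16,         # total humans ever, if we spread widely
}
prior = {h: 0.5 for h in hypotheses}

# If N humans ever exist, my birth rank is uniform over 1..N, so the
# likelihood of my particular rank is 1/N (and 0 if my rank exceeds N).
likelihood = {h: (1 / n if my_rank <= n else 0.0) for h, n in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
for h, p in posterior.items():
    print(f"{h:18s}: {p:.5f}")
```

Getting a posterior of roughly 0.99998 on doom from nothing but a birth rank is exactly the kind of conclusion that seems too strong, which is why I look for a third option.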
One major implication here is that you don’t need to despair, because aligned ASI is practically guaranteed in at least a few worlds. (But that doesn’t mean existential risk reduction is useless! It’s more that the work being done expands the range of worlds that make it, rather than saving only one.)
What do you think?
I think the most efficient way to absorb the existing Less Wrong wisdom is to read the articles without the comments, because the comments easily amount to 10 or more times as much text as the articles themselves. It is not a perfect solution: sometimes the best-voted comments add something substantial. But I think it is better on average.
Less Wrong Sequences without comments: https://www.readthesequences.com/
Selected best articles from Less Wrong: https://www.lesswrong.com/bestoflesswrong (but these are links to articles with comments; I am not sure if there is a better way to read them)
Anthropic reasoning is difficult. Small changes in your model of the world can cause dramatic changes in what the distribution of conscious experience looks like. (Like: Maybe we will never expand into the universe or build a Dyson sphere, will soon exhaust the fossil resources, and civilization will collapse. Then our life on 21st-century Earth is a normal experience. -- But maybe we will colonize the galaxies for billions of years. Then our life on 21st-century Earth is astronomically exceptional. -- But maybe the things that colonize the galaxies are mindless paperclip optimizers. Then our life on 21st-century Earth is normal again. -- But maybe… -- But maybe… -- Every new thing you consider can completely change the outcome.)
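Here is the same point as a toy calculation; every number is pulled out of thin air, just to show how much the answer swings:

```python
# How "typical" is a 21st-century human life? The answer swings by many
# orders of magnitude depending on which future you assume. Every number
# below is an arbitrary illustrative assumption.

current_people = 8e9

scenarios = {
    "collapse, no expansion":      2e11,   # roughly all humans who ever live
    "galactic expansion of minds": 1e20,   # astronomically many future minds
    "paperclip optimizers expand": 2e11,   # expansion happens, but it is mindless
}

for name, total_minds in scenarios.items():
    typicality = current_people / total_minds
    print(f"{name:30s} -> a 21st-century life is "
          f"{typicality:.1e} of all conscious experience")
```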
Thanks for the links! I definitely focus on the essential parts when I have limited resources, so I personally don’t need versions without comments, but I find the alternate link for the Sequences quite aesthetically appealing, which is nice.
As for the anthropic reasoning, there are definitely all kinds of scenarios that can play out, but I would argue that for anthropics they can be clumped into one of three categories. The first is doom soon, meaning that everyone dies soon (no more souls). The second is galactic expansion with huge numbers of new conscious entities (many souls). The third is galactic expansion that only extends the conscious entities that already exist (same souls). Assuming many-worlds, “no more souls” is too unlikely to happen in every world, though it will surely happen in some; the same goes for “many souls.” But given that we find ourselves living in the current time period, one can infer that most worlds are “same souls” worlds.
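Roughly, the update I have in mind, counting souls (distinct conscious beings) rather than observer-moments; the figures are made-up illustrations:

```python
# Counting souls (distinct conscious beings), not observer-moments.
# The figures are illustrative assumptions, not estimates.

souls_so_far = 1.2e11                  # conscious beings existing by the current era

hypotheses = {
    "no more souls": 1.2e11,           # doom soon: no one new is ever created
    "many souls":    1e18,             # expansion creates vast numbers of new beings
    "same souls":    1.2e11,           # expansion only extends beings who already exist
}
prior = {h: 1 / 3 for h in hypotheses}

# Chance that a randomly chosen soul, out of all souls that ever exist,
# is one of the souls already around in the current era.
likelihood = {h: min(souls_so_far / total, 1.0) for h, total in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
for h, p in posterior.items():
    print(f"{h:13s}: {p:.3g}")
```

The observation alone only pushes weight away from “many souls”; the extra step is that doom in essentially every branch seems physically too unlikely, which is what moves the remaining weight toward “same souls.”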
Hello, chaizen. I would like to add to what you wrote on the topic of timeless decision theory, etc.
I would point out that if you believe in an interpretation of physics like the “mathematical universe hypothesis”, then you need to average over instances of yourself in different ‘areas’ of mathematics or logic, as well as over different branches of a single wave function (correct me if I am misunderstanding the Many Worlds Interpretation). This might well affect the weight you assign to the many simulated copies of yourself. In particular, if you interpret yourself as a logical structure processing information, then it could be argued that, at a high enough level of abstraction, the trillion copies are (almost) identical and therefore don’t count as having a trillion times as much conscious experience as one of you; they are only distinct consciousnesses insofar as they have different experiences or thought processes.
The above would be my tentative argument for why an extremely large number of moderately happy beings would not necessarily be morally better than a moderately large number of very happy ones: in a mathematical/logical universe they probably have much higher overlap with one another.
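To make the counting rule concrete, here is a toy sketch in which duplicated experience-streams are collapsed before being weighed. The rule and the example strings are purely illustrative assumptions, not a claim about how consciousness actually works:

```python
# Toy rule: weigh experience by distinct experience-streams rather than by
# raw copy count. Copies whose histories are identical at the chosen level
# of abstraction collapse into one.

def experience_weight(copy_histories):
    """Count the distinct experience-streams among the copies."""
    return len(set(copy_histories))

# A huge number of copies that all have the same brief experience...
identical_copies = ["won the lottery, felt elated"] * 10**6   # stand-in for 10**12
# ...versus a handful of copies whose experiences genuinely diverge.
divergent_copies = [f"won the lottery, then thought about topic {i}" for i in range(5)]

print(experience_weight(identical_copies))   # 1: they collapse into a single stream
print(experience_weight(divergent_copies))   # 5: genuinely distinct streams
```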