I like the Alt-Viliam thought experiment. For myself, I have trouble projecting where I’d be other than: less money, more friends. I was very Christian and had a lot of friends through the Church community, so I likely would have done that instead of getting into prediction markets (which works out since presumably I’d be less good at prediction markets). I think your point about rationality preventing bad outcomes is a good one. There aren’t a lot of things in my life I can point to and say “I wouldn’t have this if I weren’t a rationalist”, but there are a lot of possible ways I could have gone wrong into some unhappy state—each one unlikely, but taken together maybe not.
I also like your points about the time limitations we face and the power of a community. That said, even adjusting for the amount of time we can spend, it’s not like five of us would solve quantum gravity in 10 or even 100 months. As for the community—that may be really important. It’s possible that communal effects are orders of magnitude above individual ones. But if the message was that we could only accomplish great things together, that was certainly not clear (and it also raises the question of why our community building has been less than stellar).
Based on the responses I’ve gotten here, perhaps a better question is: “why did I expect more out of rationality?”
There’s a phenomenon I’ve observed where I tend to believe things more than most people, and it’s hard to put my finger on exactly what is going on there. It’s not that I believe things to be true more often (in fact, it’s probably less often), but rather that I take things more seriously or literally—but neither of those quite fits either.
I experienced it in church. People would preach about the power of prayer and how much more it could accomplish than our efforts. I believed them and decided to go to church and pray that I’d do well instead of studying for my test. I was surprised when I didn’t, and when I talked to them, they’d say “that wasn’t meant for you—that was what God said to those people thousands of years ago—you can’t just assume it applies to you”. Ok, yeah, obvious in hindsight. But then I swear they’d go back up and preach like the Bible did apply to me. And when I tried to confirm that they didn’t mean this, they said “of course it applies to you. It’s the word of God and is timeless and applies to everyone”. Right, my mistake. I’d repeat this with various explanations of where I had failed. Sometimes I didn’t have enough faith. Sometimes I was putting words in God’s mouth. Sometimes I was ignoring the other verses on the topic. However, every time, I was doing something that everyone else tacitly understood not to do—taking the spoken words as literal truth and updating my expectations and actions based on them. It took me far longer to realize this than it should have because, perversely, when I asked them about this exact hypothesis, they wholeheartedly denied it and assured me they believed every word as literal truth.
It’s easy to write that off as a religious phenomenon, and I mostly did. But I feel like I’ve brushed up against it in secular motivational or self-help environments. I can’t recall a specific instance, but it feels like I reason: this speaker is either correct, lying, or mistaken, and other people don’t feel like it’s any of the above—or rather, they choose “correct” until I start applying it to real life, and then there’s always something wrong about how I apply it. Sometimes I get some explanation of what I’m doing wrong, but almost always there’s this confusion about why I’m doing this at all.
I don’t know if that’s what is happening here, but if so, then that is surprising to me, because I had assumed that it was my rationalism, or some other mental characteristic I’d expect to find here, that was the cause of this disconnect. I read Class Project, and while it is obviously fiction, it is such boring fiction, sandwiched between posts telling us that we should do better than science, that it seemed clear to me it was meant as an illustration of the types or magnitudes of things we could accomplish. I don’t think I’m being overly literal here—I’m specifically considering context, intent, and style. Almost the whole story is just a lecture, and nothing interesting happens—it is clearly not aimed at entertainment. It sits in the middle of a series about truth, right next to non-fiction posts echoing the same sentiment. It’s really difficult for me to believe it was intended merely as whimsy and could just as easily have been a whimsical story about a cat talking to a dandelion. Combine that with non-fiction posts telling us to shut up and do the impossible, or that we should be sitting on a giant heap of utility, and the message seems clear.
However, the responses I’ve gotten to this post feel very much like the same confusion I’ve experienced in the past. I get this “what did you expect?” vibe, and I’m sure I’m not the only one who read the referenced posts. So did others read them and think, “Yes, Eliezer says to do the impossible and specifically designates the AI-box experiment as the least impossible thing, but clearly he doesn’t mean we could do something like convince Sam Altman or Elon Musk not to doom humanity (or, in personal life, something like having a romantic relationship with no arguments and no dissatisfaction)”?
I think your feelings of disappointment are 100% valid.
It’s just that I am already over the “anger” phase, and more in the “bargaining / depression” phases.
I keep writing and deleting the following paragraphs, because it feels like there is something important in my mind that I want to say, but the words that come out keep missing the point...
First, it seems obvious to me that doing much better is possible. Not literally like in the stories, but those stories generally point in the right direction. It wouldn’t take literally five days to reinvent quantum theory. It could take maybe five years to do something amazing, even if not world-changing. Still worth it.
But sometimes it is important to get all the details right. If you build 90% of a car, you cannot take a ride.
I know I can’t do it alone. I am not even really good at thinking, unless I have someone to talk to. I find it difficult to collect the mental energy to start doing something… and even more difficult to continue doing it when I get interrupted. (And I do get interrupted every day for many hours; it’s called having a job.) The best way for me to regain the focus is to talk to someone else who cares about the same project.
And it’s difficult for me to find such people. The intersection of “interested in truth” and “interested in self-improvement” and “wants to do great things together” is almost zero. (Not sure if it’s just my bubble, but everyone interested in self-improvement is also deeply interested in pseudoscience and magic.) When I organize a rationality meetup, fewer than 10 people come. Half of those who come only want to chat.
For a while I had a group that actually was a cooperative rationalist self-improvement project, but various things happened (including covid) and most of those people moved to other countries. It is important to meet in person. Talking over the internet doesn’t have the same bandwidth, and I don’t get that visceral feeling of being a member of a tribe.
I keep wondering what happens on the other side of the planet, in the Bay Area. I don’t know the details, but I suspect that most people aren’t “winning” there either. Probably you also need to find the right subset of people, and ignore most of the drama outside.
Very well said. I also think more is possible—not nearly as much more as I originally thought, but there is always room for improvement, and I do think there’s a real possibility that community effects can be huge. I mean, individual humans are smarter than individual animals, but the real advantages have accrued through society, specialization, teamwork, passing on knowledge, and sharing technology—all communal activities.
And yeah, probably the main barrier boils down to the things you mentioned. People who are interested in self-improvement and truth are a small subset of the population[1]. Across the country/world there are lots of them, but humans have some psychological thing about meeting face to face, and the local density most places is below critical mass. And having people move to be closer together would be a big ask even if they were already great friends, which the physical distance makes difficult. As far as I can see, the possible options are:
1. Move into proximity (very costly)
2. Start a community with the very few nearby rationalists (difficult to keep any momentum)
3. Start a community with nearby non-rationalists (could be socially rewarding, but likely to dampen any rationality advantage)
4. Teach people nearby to be rational (ideal, but very difficult)
5. Build an online community (LW is doing this. Could try video meetings, but I predict it would still feel less connected than in person and make momentum difficult)
5b. Try to change your psychology so that online feels like in person. (Also, difficult)
6. Do it without a community (The default, but limited effectiveness)
So, I don’t know—maybe when AR gets really good we could all hang out in the “metaverse” and it will feel like hanging out in person. Maybe even then it won’t—maybe it’s just literally having so many other options that makes the internet feel impersonal. If so, weird idea—have LW assign splinter groups and that’s the only group you get (maybe you can move groups, but there’s some waiting period so you can’t ‘hop’). And of course, maybe there just isn’t a better option than what we’re already doing.
Personally—I’m trying to start regularly calling my 2 brothers. They don’t formally study rationality but they care about it and are pretty smart. The family connection kinda makes up for the long distance and small group size, but it’s still not easy to get it going. I’d like to try to get a close-knit group of friends where I live, though they probably won’t be rationalists. But I’ll probably need to stop doing prediction markets to have the time to invest for that.
Oh, and what you said about the 5 stages makes a lot of sense—my timing is probably just not lined up with others, and maybe in a few years someone else will ask this and I’ll feel like “well I’m not surprised by what rationalists are accomplishing—I updated my model years ago”.
I read Scott Alexander say that peddling ‘woo’ might just be the side effect of a group taking self-improvement seriously while lacking the ability to fund actual studies, and I think that hypothesis makes sense.
My hypothesis for the relation between self-improvement and woo is that people suck at holding two perspectives that seem to point in opposite directions long enough to figure out a synthesis.
Let me give you an example: historically, people dreamed about flying. There are two simple ways to respond to this desire:
- Giving up. Gravity is a law of nature. Birds have wings, humans do not. End of conversation. Everyone who cannot suppress their desire to fly is an idiot; let’s laugh at them!
- Wishful thinking. I am sure that if I pray hard enough and purge my mind of negative thoughts, I will be able to spread my arms and fly. La-la-la, I can’t hear your skepticism!
The correct solution, as we know now, is to accept gravity as a fact, and then explore the other laws of nature until we find a force that can overcome gravity. There are even multiple solutions—balloons, gliders, jet engines—but all of them require doing something complicated.
The difficulty is not that gravity is fundamentally incompatible with flying, but that the two evoke contradictory emotions. You can feel the inescapable pressure of the universal law of gravity… or you can feel lightness and imagine flying… but it is difficult to feel both at the same time. Human thinking is just a thin layer on top of a fundamentally emotional machine; people usually get addicted to one emotion or the other, and then they become unable to consider the other part of the picture.
Similar pattern: effective altruism. People feel sad about bad things happening in the world and our inability to address them efficiently. The simple solutions:
- Grow up and accept the wisdom that the world cannot be changed. This can be simple fatalism, or a clever economic theory about how feeding the Africans only makes them reproduce more.
- Pray harder, post touching pictures on social networks, meditate and send positive energy.
A correct solution: collect data, calculate, and promote the actions with the greatest impact.
The emotional problem: “observing reality and calculating the hard data” and “the desire to change reality” are emotionally incompatible. People choose one emotion or the other, and get stuck with it.
And self-improvement seems to follow a similar dichotomy:
- Accept that you can’t improve (skills, looks, money, relationships), become proud of this “wisdom”, laugh at people who try to achieve something and call them immature, make sure to collect data about all their failings and never mention any success.
- Read your horoscope, practice positive thinking, read alternative news sources, harmonize your chakras, read inspiring success stories, join a pyramid scheme, sell homeopathy, be open to everything (except for things that you develop an aversion to, such as computers or vaccines).
Again, competing emotions of “noticing that the world sucks” and “the feeling that more is possible”. Can you keep trying, when you know that the motivational literature is a scam, the success stories are mostly lies, many scientific findings don’t replicate, and your own results are probably just placebo?
About your list of options:
If you want to move somewhere, it would be nice (if you have enough time and money) to check the rationalist communities outside the Bay Area, because that place just seems doomed—the social pressure to take drugs and meditate will be too strong, and even if you personally resist it, the vortex will keep pulling away parts of your social circle.
Maybe Scott Alexander knows more about this: a few years ago, he traveled across the world, visiting various ACX meetups. He could at least narrow down the list of interesting places.
Recruiting new rationalists seemed to me like the best option, a few years ago. I mean, if I was impressed with the Sequences, surely there must be other people who will feel the same way, if only they get the text. Maybe I should go to a local math college and give away a few free copies of HP:MoR! These days, I don’t believe it anymore. The internet is a small place, and the rationalist community has a decent online presence. Most nerds have already heard about us. If they don’t join, it’s probably because they are not interested. There may be an exception or two, but probably not enough to start a local meetup.
If you start a community with non-rationalists, the chance of turning it into a community of rationalists seems zero to me. (One possible exception: you could start a community that teaches self-help, or something like that, and then gradually find potential rationalists among the students.)
“Raising the sanity waterline” was the old plan, and some CFAR materials are freely available online. But you probably shouldn’t do it alone, as it is a lot of work.
I think you could achieve a better “tribe” feeling online with a smaller dedicated group that meets regularly, has video calls, and keeps a private channel for asynchronous communication. (Or maybe try The Guild of the Rose? I don’t know much about them, though.)
Regularly calling two or three people still sounds preferable to doing it alone. Maybe you could try to meet more people, and hope to find someone rationality-compatible.