These are difficult questions, and unfortunately the situation for AI alignment in China might be even worse than it looks from here. Most moves that could be made carry strange, hidden downsides.
However, I do have one really solid recommendation that I think has no downside and massive upside. After thinking about it for a while, I believe Tuning your cognitive strategies might be an extremely good fit for AI safety researchers in China. Here are my reasons:
Cognitive tuning offers human intelligence amplification in the near term. A person tries it, and they get smarter within an hour or a day. Human intelligence amplification has recently been endorsed as a potentially critical element for solving AI alignment in time; regardless of whether late-stage AI will give us resources to help solve alignment, we are more likely to solve it in time if we have smarter alignment researchers. If we find ways to make humans smarter, that's good news across the board.
People in China may or may not be slightly better than Westerners at paying attention to their own thoughts (the important thing is building the habit from a young age, and some cultures will be better at this than others). But people in China will almost certainly be better at avoiding the risks than Westerners were, because the main danger of cognitive tuning is that it violates Schelling Fences, and people in China (and Asian cultures more generally) are probably much better at noticing and budgeting Schelling Fence violations, thanks to an ambient culture of conformity.
The big challenge with cognitive tuning is figuring out how it works and how to do it right. It's basically its own field of research, but one at an extremely early stage with massive untapped potential. The bottleneck is that almost nobody in AI safety knows about it, and almost nobody is trying it and sharing details about what worked and what they learned along the way. There's incredible potential for growth and incredible potential for results, no matter which country pioneers it.
Human intelligence amplification with immediate results is something that would be broadly smiled upon in China. It's basically a startup; nobody in the government would frown on citizens for trying it. In fact, China might soon become a leader in neurofeedback, and that could potentially be retooled for human intelligence amplification.
+1 for tuning your cognitive strategies. For ~1 day a month, I experience a substantial increase in my ability to have quality thoughts. I’d read BWT quite a while ago, and when I re-read it recently, I realized “Oh, that’s what’s happening each month”. Improving my “Tuning your cognitive strategies” skill is now a high priority for me.
I’m really glad to see people are taking it seriously. This is legitimately something that could save the world, no matter what your AI assumptions are, or where the innovation/research takes place.
Keep in mind that budgeting Schelling Fences is really important: the author (squirrelinhell) tried cognitive tuning alongside something like five other interventions at once (including self-inflicted sleep deprivation), on top of being a Roko's basilisk researcher, and they literally died from it. When you do even one really intense mind-improving thing, it's hard to keep track of all the assumptions you took for granted that might no longer hold, and it's even harder when you're doing lots of weird things at once.
Oh, I’m quite wary of mental modifications. I’ve both had some poor experiences myself, and listened to enough stories by people who’ve done far more substantial changes to their cognition, to know that this is dangerous territory.
Incidentally, I showed that skill from BWT to someone who claims to have done a great deal of mental surgery. They said the skill isn't a bad solution to the problem, but that the author of the page didn't know what the problem even is: namely, that people didn't spend enough time thinking alone as kids, due to repeated distractions, and this caused firmware damage. (N.B. they only read the "Tuning your cognitive strategies" page.) I think they also claimed this damage was related to school, or perhaps social media?
I'm not sure what to make of that claim, but the fact is that I have many instinctive flinches away from entering the state of mind that skill page describes. These flinches are, I think, caused by a fear of failing to solve problems or produce valuable thoughts. Which, you know, does sound like the kind of damage that school or social media could do.
That's really interesting. Do you have any resources you could recommend that are similar to, or better than, BWT? I wasn't aware that finding more was even possible.
I don't have anything better than BWT. I've just read and listened to people who claim to have done substantial mental modifications talk about their experiences. The guy I mentioned claimed that 1) this stuff is dangerous, so he won't go into details, and 2) you really have to develop your own techniques. He seems quite sharp, so I'm inclined to trust his word, but that's not much evidence for you. And I haven't done much myself, other than mess up my brain a few times and practice BWT-related focusing enough that I started getting something out of it.