It seems to me like people here have started focusing on the wrong things. Those who knew SquirrelInHell say the suicide was likely a result of already being over the edge to begin with (e.g., hardcore, obsessive Roko’s basilisk research), not of this technique.
The real question about tuning cognitive strategies is not “does this drive people crazy”, it is “does delta reinforcement actually work”, because if delta reinforcement actually works, then that is enormous. As in, comparable in value to the rest of LessWrong put together. If this works, even if only on 10-25% of people (which Raemon’s testimony indicates), then this is basically the world-saving near-term human intelligence augmentation (which Yudkowsky wants to scale).
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies. High-output passive thinking and fun downhill thinking have immense potential to set the world up so that someone, somewhere, eventually thinks of a solution to the world’s most pressing problems. This is not something to sleep on.
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies
I don’t think that’s true. I’d independently intuited my way into something like this post, and I suspect that a lot of people successfully doing high-impact cognitive work likewise stumble their way into something like this technique. Perhaps not consciously, nor at the full scale this post describes, but well enough that explicitly adopting it will only lead to marginal further improvements.
Which is the case for a lot of LW-style rationality techniques, I think. Most people who can use them, and would benefit from them, would’ve developed them on their own eventually. Consuming LW content just speeds this process up.
So this sort of thing is useful at the individual level, but in most cases, you ain’t “beating the market” with this — you just do well. And a hypothetical wide-scale adoption would lead to a modest elevation of the “sanity waterline”, but not any sort of cognitive revolution (second-order effects aside).