Okay, well, it seems like I’m a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus, Ohio, and am one of the organizers of the local LW meetup. I’ve been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I’m trying to be as fair as possible here, you may want to adjust my opinion in light of the obvious factors.
So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it “the purity argument”. Should “rationality” try to stay pure by avoiding things like listicles or the phrase “science shows”? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that’s nearest them is only marginally better than swill? (<-- That’s me trying not to be biased. I don’t like everything we’ve made, but when I’m not trying to counteract my likely biases I do think a lot of it is pretty good.)
Here’s my take on it: I don’t know. Like query, I don’t pretend to be confident one way or the other. I’m not as scared of “horrific long-term negative impact”, however. Probably the biggest reason why is that rationality is already tainted! If we back off of the sacred word, I think we can see that the act of improving-how-we-think exists in academia more broadly, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.
Now, I think that LW-style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I’d say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere in the thread, InIn doesn’t even use the “rationality” label. I’d argue that the worst thing InIn does to pollute the LW meme-pool is that there are links and references to LW (and plenty of other sources, too).
In other words, I think at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn’t actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, more sane place. Most of my work with InIn has been for personal gain; I’m not a strong believer that it will succeed. What I do think, though, is that there’s enough space in the world for the attempt, the goal of raising the sanity waterline is a good one, and rationalists should support the attempt, even if they aren’t confident in success, instead of getting swept up in the typical-mind fallacy and ingroup/outgroup and purity biases.
* - Okay, it’s not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P
(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)
I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I’d say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist.
This strikes me as a weird statement, because 7 Habits is wildly successful and seems very solid. What about it bothers you?
(My impression is that “a word to the wise is sufficient,” and so most clever people find it aggravating when someone expounds on simple principles for hundreds of pages, because of the implication that they didn’t get it the first time around. Or they assume it’s less principled than it is.)
I picked 7 Habits because it’s pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.
I picked 7 Habits because it’s pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.
I suspect the point will be clearer if stated without examples? I think you’re pointing towards something like “most self-help does not materially improve the lives of most self-help readers,” which seems fairly dubious to me. Most self-help, if measured by titles, is probably terrible simply by Sturgeon’s Law. But is most self-help terrible as measured by sales? I haven’t looked at sales figures, but I imagine it’s not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.
It also seems to me that the information content of useful self-help is about pointing to places where applying effort will improve outcomes. (Every one of the 7 Habits is effortful!) Part of scientific self-help is getting an accurate handle on how much improvement in outcomes comes from expenditure of effort for various techniques / determining narrowly specialized versions.
But if someone doesn’t actually expend the effort, the knowledge of how they could have doesn’t lead to any improvements in outcomes. Which is why the other arm of self-help is all about motivation / the emotional content.
It’s not clear to me that LW-style rationality improves on the informational or emotional content of self-help for most of the populace. (I think it’s better at the emotional content mostly for people in the LW-sphere.) Most of the content of LW-style rationality is philosophical, which is very indirectly related to self-help.
Most self-help, if measured by titles, is probably terrible simply by Sturgeon’s Law. But is most self-help terrible as measured by sales? I haven’t looked at sales figures, but I imagine it’s not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.
Another complication is that Sturgeon’s Law applies as much to the readers. The dropout rate on free MOOCs is astronomical. (Gated link, may not be accessible to all.) “When the first Mooc came out, 100,000 people signed up but ‘not even half went to the first lecture, let alone completed all the lectures.’” “Only 4-5 per cent of the people who sign up for a course at Coursera … get to the end.”
Picking up a self-help book is as easy as signing up for a MOOC. How many buyers read even the first chapter, let alone get to the end, and do all the work on the way?
But is most self-help terrible as measured by sales? I haven’t looked at sales figures, but I imagine it’s not that unlikely that half of all self-help books actually consumed are the ones that are genuinely helpful.
“Genuinely helpful” is a complicated term. A lot of books lead people to shift their attention to different priorities and get better at one thing while sacrificing other things.
New Agey literature about being in the moment has its advantages, but it can also hold people back from more long-term thinking.
the goal of raising the sanity waterline is a good one, and rationalists should support the attempt
That does not follow at all.
The road to hell is in excellent condition and has no need of maintenance. Having a good goal in no way guarantees that what you do has net benefit and should be supported.
I agree! Having good intentions does not imply the action has net benefit. I tried to communicate in my post that I see this as a situation where failure isn’t likely to cause harm. Given that it isn’t likely to hurt, and it might help, I think it makes sense to support in general.
(To be clear: Just because something is a net positive (in expectation) clearly doesn’t imply one ought to invest resources in supporting it. Marginal utility is a thing, and I personally think there are other projects which have higher total expected-utility.)
a situation where failure isn’t likely to cause harm. Given that it isn’t likely to hurt, and it might help, I think it makes sense to support in general.
A failure isn’t likely to cause major harm, but by similar reasoning success isn’t likely to lead to major benefits either. In simpler terms, InIn isn’t likely to have a large impact of any kind. Given this, I still see no reason why minor benefits are more likely than minor harm.
Picking up a self-help book is as easy as signing up for a MOOC. How many buyers read even the first chapter, let alone get to the end, and do all the work on the way?
Agreed; that’s where I was going with my paragraph 3, but I decided to emphasize it less.