I’ll talk about marketing, actually, because part of the problem is that, bluntly, most of you are kind of inept in this department. By “kind of” I mean “have no idea what you’re talking about but are smarter than marketers and it can’t be nearly that complex so you’re going to talk about it anyways”.
Clickbait has come up a few times. The problem is that that isn’t marketing, at least not in the sense that people here seem to think. If you’re all for promoting marketing, quit promoting shit marketing because your ego is entangled in complex ways with the idea and you feel you have to defend that clickbait.
GEICO has good marketing, which doesn’t sell you on their product at all. Indeed, the most prominent “marketing” element of their marketing—the “Saves you 15% or more” bit—mostly serves to distract you from the real marketing, which utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don’t get noticed as marketing, indeed don’t get noticed at all.
The issue with this entire conversation is that everybody seems to think marketing is noticed, and uses the examples they notice as examples of good marketing. Those are -terrible- examples, as demonstrated by the fact that you think of them when you think of marketing—and anybody you market to will, too. And then you justify these examples of marketing by relying on an unrealistically low opinion of average people—which many average people share.
Do you think somebody clicking on a “One Weird Trick” tries it out? No, they click on clickbait to see what it says, then move on, which is exactly its goal—be attractive enough to get someone’s attention, entertaining enough to keep them interested, and no more. Clickbait doesn’t impart anything—its goal isn’t to be remembered or to change minds or to sell anything except itself, because its goal is to serve up ads to a steady stream of readers.
And if you click on clickbait to see what stupid people are being tricked into believing—guess what, you’re the “stupid person”. You were the target audience, which is anybody they can get to click on their stuff, for any reason at all. The author of “This One Weird Trick” doesn’t want to convince you to use it, they want you to add a little bit of traffic to the site, and if they can do that by crafting an article and headline that makes intelligent people want to click to see what gullible morons will buy into, they’ll do it.
Clickbait isn’t the answer. “Rationalist’s One Weird Trick To a Happy Life” isn’t the answer—indeed, it’s the opposite of the answer, because it’s deliberately setting rationality up as a sideshow to sell tickets to so people can laugh at what gullible morons buy into.
Not sure if it makes any difference, but instead of “stupid people” I think of people reading articles about ‘life hacking’ as “people who will probably get little benefit from the advice, because they will most likely immediately read a hundred more articles and never apply the advice”; and also the format of the advice completely ignores inferential distances, so pretty much the only useful thing such an article could give you is a link to a place that provides the real value. And if you are really, really lucky, you will notice the link, follow the link, stay there, and get some of the value.
If I believed the readers were literally stupid, then of course I wouldn’t see much value in advertising LW to them. LW is not useful for stupid people, but it can be useful to people… uhm… like I used to be before I found LW.
Which means: I used to spend a lot of time browsing random internet pages; a few times I found a link to some LW article that I read and moved on, and only after some time did I realize: “Oh, I have already found a few interesting articles on the same website. Maybe instead of randomly browsing the web, reading this one website systematically could be better!” And that was my introduction to the rationalist community; these days I regularly attend LW meetups.
Could Gleb’s articles provide the same gateway for someone else (albeit only for a tiny fraction of the readership)? I don’t see a reason why not.
Yes, the clickbait site will make money. Okay. If instead someone made paper flyers for LW, the printing company would make money.
Indeed, the people who read one of our articles, for example the Lifehack article, are not inherently stupid. They have that urge for self-improvement that all of us here on Less Wrong have. They just have way less education and access to information, and of course different tastes, preferences, and skills. Moreover, the inferential gap is huge, as you correctly note.
The question is what people will do: will they actually follow the links toward deeper engagement? Let’s take the Lifehack article as an example of our broader model, which assumes that once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. After the Lifehack article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands.
Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough to not only skim the article, but also follow the links to Intentional Insights, which was listed in my bio and elsewhere. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can’t say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.
The articles are meant to provide a gateway, in other words. And there is evidence of people following the breadcrumbs. Eventually, after they receive enough education, we would introduce them to ClearerThinking, CFAR, and LW. We are careful to avoid Endless September scenarios by not explicitly promoting Less Wrong heavily. For more on our strategy, see my comment below.
They are intended to not appeal to you, and that’s the point :-) If something feels cognitively easy to you and does not make you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. Try to read them from the perspective of someone who does not know about rationality. A sample of evidence: this article was shared over 2K times by its readers, which means that tens of thousands and maybe hundreds of thousands of people read it.
It might be useful to identify what exactly trips your snake-oil sensors here. Mine were tripped when it claimed to be science based but referenced no research papers, but other than that it looked okay to me.
Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
To clarify about the science-based point, I tried to put in links to research papers, but unfortunately the editors cut most of them out. I was able to link to one peer-reviewed book, but the rest of the links had to be to other articles that contained research, such as this one from Intentional Insights itself.
Yup, very much agreed on the point of the site smelling like snake oil, and this enabling highly targeted cognitive altruism.
Yup, I hear you. I cringed at that when I was learning how to write that way, too. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy.
However, having calculated the trade-offs and done a Bayesian-style analysis combined with a multi-attribute utility analysis (MAUT), it seems that the negative feelings we at InIn get (mostly me at this point, as others are not yet writing these types of articles for fear of this kind of backlash) are worth the rewards of raising the sanity waterline of people who read those types of websites.
I cringed at that when I was learning how to write that way, too.
So, why do you think this is necessary? Do you believe that proles have an unyielding “tits or GTFO” mindset so you have to provide tits in order to be heard? That ideas won’t go down their throat unless liberally coated in slime?
It may look to you like you’re raising the waterline, but from the outside it looks like all you’re doing is contributing to the shit tsunami.
for fear of this kind of backlash
I think “revulsion” is a better word.
Wasn’t there a Russian intellectual fad, around the end of the 19th century, about “going to the people” and “becoming of the people” and “teaching the people”? I don’t think it ended well.
are worth the rewards of raising the sanity waterline
How do you know? What do you measure that tells you you are actually raising the sanity waterline?
Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that’s less shitty than what people are used to consuming, and then slowly build them up. That’s the purpose of Intentional Insights—to reach out and build people up to growing more rational over time. You don’t have to be the one doing it, of course. I’m doing it. Others are doing it. But do you think it’s better to improve the shit tsunami or put our hands in our ears and pretend it’s not there and not do anything about it? I think it’s better to improve the shit tsunami of Lifehack and other such sites.
The measures we use, the methods we decided on, and our reasoning behind them are described in my comment here.
Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that’s less shitty than what people are used to consuming, and then slowly build them up.
Well, first of all I can perfectly well stay out of the shit tsunami even without hiding in the LW corner. The world does not consist of two parts only: LW and shit.
Second, you contribute to the shit tsunami, the stuff you provide is not less shitty. It is exactly what the tsunami consists of.
That’s the purpose … it’s better to improve the shit tsunami
The problem is not with the purpose. The problem is with what you are doing. Contributing your personal shit to the tsunami does not improve it.
The measures we use
You measure, basically, impressions—clicks and eyeballs. That tells you whether the stuff you put out gets noticed. It does not tell you whether that stuff raises the sanity waterline.
the stuff you provide is not less shitty. It is exactly what the tsunami consists of
Do you truly believe the article I wrote was no less shitty than the typical Lifehack article, for example this article currently on their front page? Is this what a reasonable outside observer would say? I’m willing to take a $1000 bet that more than 5 out of 10 neutral reasonable outside observers would evaluate my article as higher quality. Are you up for that bet? If not, please withdraw your claims. Thanks!
I am not terribly interested in distinguishing the shades of brown or the nuances of aroma. To answer your question: yes, I do believe you wrote a typical Lifehack article of the typical degree of shittiness. In fact, I think you mentioned on LW your struggles in producing something sufficiently shitty for Lifehack to accept and, clearly, you have succeeded in achieving the necessary level.
As to the bet, please specify what a “neutral reasonable” observer is and how you define “quality” in this context. Also, do I take it you are offering 1:1 odds? That implies you believe the probability you will lose is just under 50%, y’know...
That implies you believe the probability you will lose is just under 50%
Only if $1000 is an insignificant fraction of Gleb’s wealth, or his utility-from-dollars function doesn’t show the sort of decreasing marginal returns most people’s do.
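gjm’s point can be made concrete with a small sketch. For a risk-neutral bettor, accepting a $1000 bet at even odds only requires believing P(win) is above 50%; for a risk-averse bettor (log utility, say), the break-even win probability is strictly higher, so offering the bet implies P(lose) somewhat below 50%. The wealth figure below is a purely illustrative assumption, not anything stated in the thread:

```python
import math

def break_even_win_probability(wealth, stake):
    """Win probability at which a 1:1 even-money bet of `stake`
    has zero expected *utility* for a log-utility bettor.

    Solves: p*log(W+s) + (1-p)*log(W-s) = log(W)
    """
    num = math.log(wealth) - math.log(wealth - stake)
    den = math.log(wealth + stake) - math.log(wealth - stake)
    return num / den

# Hypothetical wealth of $50,000 (illustration only):
p = break_even_win_probability(wealth=50_000, stake=1_000)
print(f"break-even P(win) = {p:.4f}")  # slightly above 0.5
```

With these numbers the break-even P(win) is about 0.505, so a log-utility bettor offering the bet believes P(lose) is below 49.5%, not “just under 50%”; the smaller the stake relative to wealth, the closer the break-even point gets to exactly 50%.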
$1000 is not an insignificant portion of my wealth, as gjm notes. I certainly do not want to lose it.
We can take 10 LessWrongers who are not friends with you or me, have not participated in this thread, and do not know about this debate as neutral observers. It should be relatively easy to gather them through posting in the open thread or elsewhere.
We can have gjm or another external observer recruit people just in case one of us doing it might bias the results.
Sorry, I don’t enjoy gambling. I am still curious about “quality” which you say your article has and the typical Lifehacker swill doesn’t. How do you define that “quality”?
As an example, this article, like others, links to and describes studies, gives advice informed by research, and conveys frames of thinking likely to lead to positive outcomes beyond building willpower, such as self-forgiveness, commitment, goal setting, etc.
As I said, I’m not interested in gambling. Your bet, from my point of view, is on whether a random selection of people will find one piece of shit to be slightly better or slightly worse than another piece of shit. I am not particularly interested in shades of brown, this establishes no objective facts, and will not change my position. So why bother?
Ah, alright, thanks for clarifying. So it sounds like you acknowledge that there are different shades. Now, how do you cross the inference gap from people who like the darkest shade into lighter shades? That’s the project of raising the sanity waterline.
You seem to have made two contradictory statements, or maybe we’re miscommunicating.
1) Do you believe that raising the sanity waterline of those in the murk—those who like the dark shade because of their current circumstances and knowledge, but are capable of learning and improving—is still raising the sanity waterline?
2) If you believe it is still raising the sanity waterline, how do you raise their sanity waterline if you do not produce slightly less shitty content intentionally in order to cross the inference gap?
Do you believe that raising the sanity waterline of those in the murk
I don’t think you can raise their sanity waterline by writing slightly lighter-shade articles on Lifehacker and such. I think you’re deluding yourself.
Is it worth introducing one reader by poisoning nine, however? First impressions do matter, and if the first impression rationalism gives people is that of a cult making pseudoscientific pop-self-help-ish promises about improving their lives, you’re trading short-term gains for long-term difficulties overcoming that reputation (which, I’ll note, the rationalist community already struggles with).
Please avoid using terms like “poisoning” and other vague claims. That is argument-style Dark Arts (which you previously acknowledged is your skill set) used to attack Intentional Insights through pattern-matching and vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, poison nine readers out of ten and introduce one reader to rationality. Thanks!
Please avoid abusive/trollish claims, which you have previously explicitly acknowledged to be your intention. Don’t use argument-style Dark Arts, which you previously acknowledged is your skill set, to attack Intentional Insights through pattern-matching and vague claims.
Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, are problematic. Thanks!
If I wanted to -attack- you, I’d have accused you of using credit card information from a donation or t-shirt purchase to make illicit purchases. I’d send off for your IRS expense reports to see where your budget goes, and spin that in a very unfriendly way (if any spin were necessary). I’d start spreading rumors that your polyamory posts from your early days were proof that you were sleeping with your students. And trust me, you’ve spread enough nonsense around Less Wrong to make each one of these accusations stick in a very uncomfortable way. I did the research, trying to decide if you were legitimate or not.
I’d -destroy- you. And your regular and completely uneducated attempts at the Dark Arts would make it -absurdly- easy.
But I’m not even talking about you here. This is me talking about marketing generally.
I will save you the trouble of sending off to the IRS for the Intentional Insights expense reports. We are committed to transparency, and list our financials on our “About Us” page. I cannot control what you do with that information—it’s your choice.
My purpose for revealing it is my goal of being open. I know that doing so makes me vulnerable to the kind of destruction you describe above. It’s easy enough to frame me with fake screenshots doctored with Adobe Photoshop or other forms of framing. And then who can tell what’s real, right? The accusation would be out there, I would have to defend myself, and then people who don’t know me would suspect things. I would never be able to throw off the taint of it, would I? The same would be the case with rumors, etc.
Very clever and strategic Dark Arts stuff. Never thought about any of these until you raised them. I know you are an expert Dark Arts practitioner as you showed here in your deliberate efforts to attack my reputation on Less Wrong, as you clearly describe here. Didn’t know how expert you were. Updating on how much of a danger it is to me personally for you to be this upset with what I’m trying to do by getting more people out there in the world to be more sane.
I also noticed you chose not to respond to the point I made about the article. I would encourage you to be clear and specific. Thanks!
Updating on how much of a danger it is to me personally for you to be this upset with what I’m trying to do by getting more people out there in the world to be more sane.
I’m not upset with you. I’m at worst irritated, and that’s entirely because your style bothers me on a visceral level, and honestly, the amusement factor usually makes up for it.
The common element of all of those things is that they’re things I suspect or have suspected might be true of you, because of the way you behave—and by using the various materials that created those suspicions as “evidence” for them, the rumors are [ETA: could be, rather] made to sound disproportionately valid. (Something you’ve said elevates a hypothesis to “extremely weak” plausibility for me; I suggest the hypothesis, elevating it to “extremely weak” plausibility in others; then, after that update is made, I separately present circumstantial evidence, causing them to elevate it further. Double-counting evidence, basically.)
In the end, however, I assign very low probabilities to any of them (which is to say, I don’t believe them), and I think you’re a muggle pretending to be a Dark Lord, with just enough success at the pretense to achieve the effect of making my skin crawl, and probably benefitting from it on a personal level because it’s a step up from your previous level of social expertise. And at any rate, I wouldn’t actually unleash any such attacks, regardless of how antagonistic I felt towards you, unless I actually thought they were true.
You may notice a pattern in my use of Dark Arts: I try to always be clear about when I’m using them and what I expect them to do, if not while I’m doing it, then after the fact. I’m not a fan of them, because I think that they have negative-on-average payouts. Which I suspect you’d disagree with, for the aforementioned reason that I suspect your social skills aren’t terribly good and you’re experiencing more success using them. If this is the case: so far, you’re relying on luck. As I hope I’ve demonstrated, a single suspicion produced by their use could do far more than erase the positive benefits you may have accrued so far.
I actually do not consider myself a practitioner of Dark Arts as they are traditionally understood.
I feel pretty icky even about the “light Dark Arts” marketing stuff I am doing. As I told Lumifer in an earlier post, I cringed at that feeling when I was learning how to write for Lifehack, Huffington Post, etc. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy. The only reason I am choosing to do so is to reach the goal of raising the sanity waterline effectively.
After bringing this question to the Less Wrong community in an earlier post, I updated to not think of what I do as in real Dark Arts territory. If I considered it to be in real Dark Arts territory, I don’t think I could bring myself emotionally to do it, at least without much more serious self-modification.
utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don’t get noticed as marketing, indeed don’t get noticed at all.
That’s a good strategy when you have GEICO’s name recognition. If you don’t, maybe getting noticed isn’t such a bad thing. And maybe “One Weird Trick” is a gimmick, but then so is GEICO’s caveman series—which is also associated with a stereotype of someone being stupid. Does the gimmick really matter once folks have clicked on your stuff and want to see what it’s about? That’s your chance to build some positive name recognition.
Just wanted to clarify that people who read Lifehack are very much used to the kind of material there—it’s cognitively easy for them and they don’t perceive it as a gimmick. So their first impression of rationality is not of a gimmick but of something they might be interested in. After that, they don’t go to the Less Wrong website but to the Intentional Insights website. There, they get higher-level material that slowly takes them up in complexity. Only some choose to climb this ladder; most do not. Then, after they are sufficiently advanced, we introduce them to more complex content on ClearerThinking, CFAR, and LW itself. This is to avoid the problem of Endless September and other challenges. More about our strategy is in my comment.
That’s a good strategy when you have GEICO’s name recognition.
How many people had heard of the Government Employees Insurance Company prior to that advertising campaign? The important part of “GEICO can save you 15% or more on car insurance” is repeating the name. They started with a gecko so they could repeat their name at you, over and over, in a way that wasn’t tiring. It was, bluntly, a genius advertising campaign.
If you don’t, maybe getting noticed isn’t such a bad thing.
Your goal isn’t to get noticed, your goal is to become familiar.
And maybe “One Weird Trick” is a gimmick, but then so is GEICO’s caveman series—which is also associated with a stereotype of someone being stupid.
You don’t notice any other elements to the caveman series? You don’t notice the fact that the caveman isn’t stupid? That the commercials are a mockery of their own insensitivity? That the series about a picked-upon identity suffering from a stereotype was so insanely popular that the commercial spawned its own (short-lived) TV show?
Does the gimmick really matter once folks have clicked on your stuff and want to see what it’s about? That’s your chance to build some positive name recognition.
Yes, the gimmick matters. The gimmick determines people’s attitude coming in. Are they coming to laugh and mock you, or to see what you have to say? And if you don’t have the social competency to develop their as-yet-unformed attitude coming in, you sure as hell don’t have the social competency to take control of it once they’ve already committed to how they see you.
Not that I belong to his target demographic, but his articles would make me cringe and rapidly run in the other direction.
I don’t cringe at the level. I cringe at the slimy feel and the strong smell of snake oil.
what exactly trips your snake-oil sensors
The overwhelming stench trips them.
This stuff can’t be edited to make it better, it can only be dumped and completely rewritten from scratch. Fisking it is useless.
Yup, I hear you. I cringed at that when I was learning how to write that way, too. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy.
However, having calculated the trade-offs and done a Bayesian-style analysis combined with a MAUT (multi-attribute utility theory) evaluation, it seems that the negative feelings we at InIn get—mostly me at this point, as others are not yet writing these types of articles for fear of this kind of backlash—are worth the rewards of raising the sanity waterline of people who read those types of websites.
So, why do you think this is necessary? Do you believe that proles have an unyielding “tits or GTFO” mindset so you have to provide tits in order to be heard? That ideas won’t go down their throat unless liberally coated in slime?
It may look to you like you’re raising the waterline, but from the outside it looks like all you’re doing is contributing to the shit tsunami.
I think “revulsion” is a better word.
Wasn’t there a Russian intellectual fad, around the end of the 19th century, about “going to the people” and “becoming of the people” and “teaching the people”? I don’t think it ended well.
How do you know? What do you measure that tells you you are actually raising the sanity waterline?
Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that’s less shitty than what people are used to consuming, and then slowly build them up. That’s the purpose of Intentional Insights—to reach out and build people up to growing more rational over time. You don’t have to be the one doing it, of course. I’m doing it. Others are doing it. But do you think it’s better to improve the shit tsunami or put our hands in our ears and pretend it’s not there and not do anything about it? I think it’s better to improve the shit tsunami of Lifehack and other such sites.
The measures we use, the methods we decided on, and our reasoning behind them are described in my comment here.
Well, first of all I can perfectly well stay out of the shit tsunami even without hiding in the LW corner. The world does not consist of two parts only: LW and shit.
Second, you contribute to the shit tsunami, the stuff you provide is not less shitty. It is exactly what the tsunami consists of.
The problem is not with the purpose. The problem is with what you are doing. Contributing your personal shit to the tsunami does not improve it.
You measure, basically, impressions—clicks and eyeballs. That tells you whether the stuff you put out gets noticed. It does not tell you whether that stuff raises the sanity waterline.
So I repeat: how do you know?
Do you truly believe the article I wrote was no less shitty than the typical Lifehack article, for example this article currently on their front page? Is this what a reasonable outside observer would say? I’m willing to take a $1000 bet that more than 5 out of 10 neutral reasonable outside observers would evaluate my article as higher quality. Are you up for that bet? If not, please withdraw your claims. Thanks!
I am not terribly interested in distinguishing the shades of brown or aroma nuances. To answer your question, yes, I do believe you wrote a typical Lifehack article of the typical degree of shittiness. In fact, I think you mentioned on LW your struggles in producing something sufficiently shitty for Lifehack to accept and, clearly, you have succeeded in achieving the necessary level.
As to the bet, please specify what a “neutral reasonable” observer is and how you define “quality” in this context. Also, do I take it you are offering 1:1 odds? That implies you believe the probability you will lose is just under 50%, y’know...
Only if $1000 is an insignificant fraction of Gleb’s wealth, or his utility-from-dollars function doesn’t show the sort of decreasing marginal returns most people’s do.
Indeed, $1000 is a quite significant portion of my wealth.
$1000 is not an insignificant portion of my wealth, as gjm notes. I certainly do not want to lose it.
We can take 10 LessWrongers who are not friends with you or me, have not participated in this thread, and do not know about this debate as neutral observers. Should be relatively easy to gather through posting on the open thread or elsewhere.
We can have gjm or another external observer recruit people just in case one of us doing it might bias the results.
So, going through with it?
Sorry, I don’t enjoy gambling. I am still curious about “quality” which you say your article has and the typical Lifehacker swill doesn’t. How do you define that “quality”?
As an example, this article, like my others, links to and describes studies, gives advice that is informed by research, and conveys frames of thinking likely to lead to positive outcomes besides building willpower, such as self-forgiveness, commitment, goal setting, etc.
And I imagine that based on your response, you take your words back. Thanks!
I am sorry to disappoint you. I do not.
Well, what kind of odds would you give me to take the bet?
As I said, I’m not interested in gambling. Your bet, from my point of view, is on whether a random selection of people will find one piece of shit to be slightly better or slightly worse than another piece of shit. I am not particularly interested in shades of brown, this establishes no objective facts, and will not change my position. So why bother?
Four out of five dentists recommend… X-)
Ah, alright, thanks for clarifying. So it sounds like you acknowledge that there are different shades. Now, how do you cross the inference gap from people who like the darkest shade into lighter shades? That’s the project of raising the sanity waterline.
I am not interested in crossing the inference gap to people who like the darkest shade. They can have it.
I don’t think that raising the sanity waterline involves producing shit, even of particular colours.
You seem to have made two contradicting statements, or maybe we’re miscommunicating.
1) Do you believe that raising the sanity waterline of those in the murk—those who like the dark shade because of their current circumstances and knowledge, but are capable of learning and improving—is still raising the sanity waterline?
2) If you believe it is still raising the sanity waterline, how do you raise their sanity waterline if you do not produce slightly less shitty content intentionally in order to cross the inference gap?
I don’t think you can raise their sanity waterline by writing slightly lighter-shade articles on Lifehacker and such. I think you’re deluding yourself.
Ok, I will agree to disagree on this one.
Is it worth introducing one reader by poisoning nine, however? First impressions do matter, and if the first impression rationalism gives people is that of a cult making pseudoscientific pop-self-help-ish promises about improving their lives, you’re trading short-term gains for long-term difficulties overcoming that reputation (which, I’ll note, the rationalist community already struggles with).
Please avoid using terms like “poisoning” and other vague claims. That is argument-style Dark Arts, which is your skill set as you previously clearly acknowledged, used to attack Intentional Insights through pattern-matching and vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, poison nine readers out of ten and introduce one reader to rationality. Thanks!
Please avoid abusive/trollish claims, which you have previously explicitly acknowledged as your intention. Don’t use argument-style Dark Arts, which is your skill set as you previously clearly acknowledged, to attack Intentional Insights through pattern-matching and vague claims.
Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, are problematic. Thanks!
If I wanted to -attack- you, I’d have accused you of using credit card information from a donation or t-shirt purchase to make illicit purchases. I’d send off for your IRS expense reports to see where your budget goes, and spin that in a very unfriendly way (if any spin were necessary). I’d start spreading rumors that your polyamory posts from your early days were proof that you were sleeping with your students. And trust me, you’ve spread enough nonsense around Less Wrong to make each one of these accusations stick in a very uncomfortable way. I did the research, trying to decide if you were legitimate or not.
I’d -destroy- you. And your regular and completely uneducated attempts at the Dark Arts would make it -absurdly- easy.
But I’m not even talking about you here. This is me talking about marketing generally.
I will save you the trouble of sending off to the IRS for the Intentional Insights expense reports. We are committed to transparency, and list our financials on our “About Us” page. I cannot control what you do with that information—it’s your choice.
My purpose for revealing it is my goal of being open. I know that doing so makes me vulnerable to the kind of destruction you describe above. It’s easy enough to frame me with fake screenshots doctored with Adobe Photoshop or other forms of framing. And then who can tell what’s real, right? The accusation would be out there, I would have to defend myself, and then people who don’t know me would suspect things. I would never be able to throw off the taint of it, would I? The same would be the case with rumors, etc.
Very clever and strategic Dark Arts stuff. Never thought about any of these until you raised them. I know you are an expert Dark Arts practitioner, as you showed in your deliberate efforts to attack my reputation on Less Wrong, which you clearly describe here. Didn’t know how expert you were. Updating on how much of a danger it is to me personally for you to be this upset with what I’m trying to do by getting more people out there in the world to be more sane.
I also noticed you chose not to respond to the point I made about the article. I would encourage you to be clear and specific. Thanks!
I’m not upset with you. I’m at worst irritated, and that’s entirely because your style bothers me on a visceral level, and honestly, the amusement factor usually makes up for it.
The common element of all of those things is that they’re things I suspect or have suspected might be true of you, because of the way you behave—and by using the various materials that created those suspicions as “evidence” for them, the rumors are [ETA: could be, rather] made to sound disproportionately valid. (Something you’ve said elevates a hypothesis to “extremely weak” plausibility levels for me; I suggest the hypothesis, elevating it to “extremely weak” plausibility levels in others; then, after that update is made, I separately present circumstantial evidence, causing them to elevate it further. Double-counting evidence, basically.)
In the end, however, I assign very low probabilities to any of them (which is to say, I don’t believe them), and I think you’re a muggle pretending to be a Dark Lord, with just enough success at the pretense to achieve the effect of making my skin crawl, and probably benefitting from it on a personal level because it’s a step up from your previous level of social expertise. And at any rate, I wouldn’t actually unleash any such attacks, regardless of how antagonistic I felt towards you, unless I actually thought they were true.
You may notice a tendency in my use of Dark Arts: I try to always be clear about when I’m using them and what I expect them to do, if not while I’m doing it, then after the fact. I’m not a fan of them, because I think they have negative-on-average payouts. Which I suspect you’d disagree with, for the aforementioned reason that I suspect your social skills aren’t terribly good and you’re experiencing more success using them. If this is the case: so far, you’re relying on luck. As I hope I’ve demonstrated, a single suspicion produced by their use could do far more than erase the positive benefits you may have accrued so far.
I actually do not consider myself a practitioner of Dark Arts as they are traditionally understood.
I feel pretty icky even about the “light Dark Arts” marketing stuff I am doing. As I told Lumifer in an earlier post, I cringed at that feeling when I was learning how to write for Lifehack, Huffington Post, etc. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy. The only reason I am choosing to do so is to reach the goal of raising the sanity waterline effectively.
After bringing this question to the Less Wrong community in an earlier post, I updated to not think of what I do as in real Dark Arts territory. If I considered it to be in real Dark Arts territory, I don’t think I could bring myself emotionally to do it, at least without much more serious self-modification.
That’s a good strategy when you have GEICO’s name recognition. If you don’t, maybe getting noticed isn’t such a bad thing. And maybe “One Weird Trick” is a gimmick, but then so is GEICO’s caveman series—which is also associated with a stereotype of someone being stupid. Does the gimmick really matter once folks have clicked on your stuff and want to see what it’s about? That’s your chance to build some positive name recognition.
Just wanted to clarify that people who are reading Lifehack are very much used to the kind of material there—it’s cognitively easy for them and they don’t perceive it as a gimmick. So their first impression of rationality is not as a gimmick but as something that they might be interested in. After that, they don’t go to the Less Wrong website, but to the Intentional Insights website. There, they get more high-level material that slowly takes them up a ladder of complexity. Only some choose to go up this ladder, and most do not. Then, after they are sufficiently advanced, we introduce them to more complex content on ClearerThinking, CFAR, and LW itself. This is to avoid the problem of Endless September and other challenges. More about our strategy is in my comment.
The useful words are “first impression” and “anchoring”.
I answered this point below, so I don’t want to retype my comment, but just FYI.
How many people had heard of the Government Employees Insurance Company prior to that advertising campaign? The important part of “GEICO can save you 15% or more on car insurance” is repeating the name. They started with a gecko so they could repeat their name at you, over and over, in a way that wasn’t tiring. It was, bluntly, a genius advertising campaign.
Your goal isn’t to get noticed, your goal is to become familiar.
You don’t notice any other elements to the caveman series? You don’t notice the fact that the caveman isn’t stupid? That the commercials are a mockery of their own insensitivity? That the series about a picked-upon identity suffering from a stereotype was so insanely popular that a commercial nearly spawned its own TV show?
Yes, the gimmick matters. The gimmick determines people’s attitude coming in. Are they coming to laugh and mock you, or to see what you have to say? And if you don’t have the social competency to develop their as-yet-unformed attitude coming in, you sure as hell don’t have the social competency to take control of it once they’ve already committed to how they see you.
Which is to say: Yes. First impressions matter.
I answered this point earlier in this thread, so I don’t want to retype my comment, but just FYI.