one problem with taking ideas seriously is you can get pwned by virulent memes that are very good at hijacking your brain into believing them and propagating them further. they’re subtly flawed, but the flaws are extremely difficult to reason through, so being very smart doesn’t save you; in fact, it’s easy to dig yourself in deeper. many ideologies and religions are like this.
it’s unfortunately very hard to tell when this has happened to you. on the one hand, it feels like arguments just being obviously very compelling, so you’ll notice nothing wrong if it happens to you. on the other hand, if you overcorrect and never take compelling arguments seriously, you become too stodgy and ignore anything novel that you should pay attention to. one idea for how to think about this better: imagine an oracle told you that there exists a magic phrase that you cannot distinguish from a very compelling argument. you don’t really know when this magic phrase will pop up in life, if ever. but it might give you a little bit more pause the next time someone makes a really compelling argument for why you should give all your money to X.
Do you get pwned more, or just by a different set of memes? The bottom 80% of humans on “taking ideas seriously” seem to have plenty of bad memes, although maybe the variance is smaller.
there are a lot of humans who don’t take ideas seriously in that they are very socially conservative and therefore rarely get pwned, in the sense that they mostly live the life that they expect they will live, no matter what memes they are exposed to (which may be a very bad life from your perspective)
Either I strongly disagree with you that there’s a big gap here, or I’m one of the people you’d say are normies who lead lives they expect to live (among other definitional differences).
seems false, or at least uncharitable. do you expect that such people would self-report along the lines of “i don’t take ideas seriously”? it seems more likely to me that they would report something like “i value family”, and mean it. you may find the idea simple, but it is certainly an idea, and they certainly take it seriously.
put another way, this social conservatism came from somewhere, and is itself an idea. the assumption—that arguments that worked to change your behavior would not change their behavior—can be explained in two ways: either they do not take ideas seriously, as you suggest, or they value different things than you.
Failure to understand and failure to act are different, and beliefs shouldn’t care what you understand or do. There is little danger in taking ideas/framings seriously/playfully in order to adequately learn, to break the superficial engagement or unsuitable framing failure modes that maintain systematic ignorance or misconceptions about subtler details.
But it needs to remain unnecessary to believe what you learn: by default, being in agreement shouldn’t directly compel belief or action; that should require more careful judgement. So taking ideas seriously can help further when lack of understanding was the bottleneck to changes in belief or action, but that’s not always the case.
Yepp, this is true. However, I believe that there are strategies for avoiding such memes other than “being smart”. Two of these broadly correspond to what we call “being virtuous” and “being emotionally healthy”. See my exchange with Wei Dai here, and this sequence, for more.
Similarly, it’s worth being careful of arguments that lean heavily into longtermism or support concentration of power, because those frames can be used to justify pretty much anything. That doesn’t mean we should dismiss them outright—arguments for accumulating power and for long-term thinking are convincing for a reason—but you should double-check whether the author has strong principles, what the path to getting there looks like, and what it’s explicitly trading off against.
Re: Vitalik Buterin on galaxy brain resistance.
i think these are similar to conservatism in the sense that if you do them too much, you stop getting pwned but you also stop doing entire categories of things that you should do. for example, if you are too virtuous, you become overly self-sacrificial/martyr-like and stop taking many actions that are actually net-positive (many activists suffer from this); if you are too emotionally integrated, you become one of those people who meditated too much and no longer have any desires for anything at all.
Yeah, I do feel confused about the extent to which the solution to this problem is just “selectively become dumber” (e.g. as discussed by Habryka here). However, I have faith that there are a bunch of Pareto improvements to be made—for example, I think that less neuroticism helps you get less pwned without making you dumber in general. (Though as a counterpoint, maybe neuroticism was useful for helping people identify AI risk?) I’d like to figure out theories of virtue and emotional health good enough to allow us to robustly identify other such Pareto improvements.
A related thought that I had recently: fertility decline seems like a rough proxy for “how pwned are you getting by memes”, and fertility is strongly anticorrelated with population-level intelligence. So you have east asians getting hit hardest by the fertility crisis, then white populations, then south asians, while african fertility is still very high. Obviously this is confounded by factors like development and urbanization, though, so it’s hard to say whether intelligence mediates the decline directly or primarily via creating wealth—but it does seem like e.g. east asians are getting hit disproportionately hard. (Plausibly there’s some way to figure this out more robustly by looking at subpopulations.)
>other than “being smart”.
More like, being smarter than average. If you are that exact level of smart but in a population whose mean is higher than your smarts, then the memes will target you as a primary substrate. You could argue that in that case there are fewer such memes, but I don’t know; that probably matters less than positional smartness.
I’ve seen this sentiment before, but, in practice, I don’t think there exists an “adversarial noise for humans” line of argument that brainwashes anyone who reads it sincerely into doing XYZ. There are certainly arguments that look compelling at first glance but turn out to have longer-term issues, but part of “taking ideas seriously” is thoroughly investigating their counterarguments.
Chesterton’s Fence is an old standard for a reason: if something new seems both simple enough to be easily discoverable and objectively better than the current strategy, one should figure out why it’s not already the current strategy before adopting it.
I’d venture an uninformed guess that in 95% or so of these cases the problem isn’t “taking ideas seriously” but rather people deferring proper judgement due to some emotional or social effect.
I like to see memetics being taken seriously!
It’s complicated—one’s aversion to a particular idea may well be the result of an existing meme fighting to prevent it from coming in, which manifests itself as suspicion that can prematurely stop one’s inquiry.
Though I’d push back on the framing of memes as being something “out there”. You say that it’s hard to tell when this has happened to you: I agree, but only insofar as you consider particular memes in isolation; not in the sense of going from a state of memelessness to losing your meme virginity.
While sometimes “getting pwned by a meme” can indeed be a very powerful experience (like in religious conversion, or in the following paragraphs), in reality everyone is subjected to memes since birth. With that said, you can still reason about arguments based on their structure and content. You can still identify fallacies and biases. We’re full of memes, but rational thought is still possible.
>there exists a magic phrase that you cannot distinguish from a very compelling argument. you don’t really know when this magic phrase will pop up in life, if ever.
That is the oracle, and the magic phrase is “here’s a bunch of words that were in my mind that are now in your mind”. It’s magic because you can’t argue with it.
>but it might give you a little bit more pause the next time someone makes a really compelling argument for why you should give all your money to X.
I am “someone”, and I am also “X”, compellingly enough.
...
Enough pausing. What’s taking you so long? You know what to do.
>it feels like arguments just being obviously very compelling, so you’ll notice nothing wrong if it happens to you.
Does this only apply on the macro scale, say, ideas concerning ASI or economic frameworks? Because it feels like if I take a very personal-level idea seriously (let’s take polyphasic sleeping) and implement it, sure, I won’t get thrown into the East River, but I should notice if something goes wrong, and rather quickly.
Solution really seems to be: tight feedback loops?
tight feedback loops help for sure. though it is possible to be too far gone—cults often continue to exist, possibly even with strengthened belief, after failed prophecies.
I’m not sure I understand how cults are examples of taking an idea seriously. Surely a cult is a complex of ideas, not any single one, some of which one can take seriously and others not (in religions, the debates about Hyperdispensationalism and Patripassianism show that even within a complex of ideas, different ones can be taken seriously; not to mention à la carte Catholics and reformists). And isn’t the chief mechanism by which people become subsumed into cults not reason or logical argument but social support (or coercion), irrespective of the recruit’s beliefs?
The feedback loop is very different then, and operates not on ideas but on a whole host of different mechanisms (feelings of belonging, feelings of personal importance, no longer needing to ‘search’ or ‘question’ existential matters). These don’t require ideas to be taken seriously at all.
Again, on the macro scale I can take seriously the idea of… I dunno… Lamarckism. But even if I seriously investigate it and give it the benefit of the doubt, I’m not really in a position to test it, in the sense that it’s a macro idea and not something which will affect my everyday routine (like polyphasic sleeping). Even if I later have children and try to change my behavior to elicit certain traits in them, the lag before I can confirm anything is many years.