Oh, it is a long post and have a lot of thoughts I wanted to comment about it. Not sure how to organize it all.
But wait! The utter extinction of humanity—argue people who do not believe that premise—is a danger so extreme, that belief in it might possibly be used to argue for unlawful force! By the Fallacy of Appeal to Consequences, then, that belief can’t be true; thus we know as a matter of politics that it is impossible for superintelligence to extinguish humanity. Either it must be impossible for any cognitive system to exist that is advanced beyond a human brain; or the many never-challenged problems of controlling machine superintelligence must all prove to be easy. We cannot deduce which of these two facts is true, but their disjunction must be true and also knowable, because if it weren’t knowable, somebody might be able to argue for violence. Never in human history has any proposition proven to be true if anyone could possibly use it to argue for violence. The laws of physics check whether that could be a possible outcome of any physical situation, and avoid it with perfect reliability.
That whole line of reasoning is deranged, of course.
The problem here is not even so much that it reads more as mockery than as argument, but, as @Algon noticed about another fragment, that this is probably not at all how the described people actually think. I haven’t been in the cognitive state described here, so I don’t know, but my brain’s automatic empathetic modeling guesses they think more along the lines of: “this belief is obviously not true, so all this unlawful force would be for nothing, so Yudkowsky shouldn’t be allowed to cause it for nothing. And you should avoid dangerous contact with his arguments, because they are aimed at convincing you of the super danger of AI, and then you may commit unlawful force. Because of a belief which is obviously wrong. And he may start in on epistemology and proper rules of updating, but I can foresee where that is going: toward making people believe a violence-causing, ridiculously implausible belief!” And then they may also go on to say that Yudkowsky shouldn’t tell his false stories about AI danger, because they would just depress teenagers for nothing.
I am actually not sure; maybe EY has reasons to put it that way. There have been plenty of cases where he actually had reasons for the things he was doing (though in this case my guess would be “I am too tired to make it polished and respectful, so either like this, or I won’t write at all”).
Maybe there is also some influence from the fact that this is a cross-posted Twitter article. That isn’t marked in any way I have seen; I found it out only by accident, and was confused about what I was reading, because it wasn’t labeled as a copied series of Twitter posts and was too long for that, yet it felt somewhat strange for LessWrong and more reminiscent of Twitter. I am actually not even sure now: should I comment here or on Twitter? (On both??) Edited: no, really, does someone know some kind of standard policy for this?
There’s in fact a difference between calling for a law, and calling for individual outbursts of violence. (Receipt that I am not arguing with a strawman, and that some people purport to not understand any such distinction: Here). Libertarian philosophy aside, most normal ordinary people can tell the difference, and care. They correctly think that they are less personally endangered by someone calling for a law than by someone calling for street violence.
After reading the link, I am actually afraid that I, too, could totally fail to notice the difference. If I try to figure out why by introspection, it seems the problem may be too much use of the logical mental mode of drawing consequences, and too little of the mode where you just stare at the thing until considerations pop out. Which itself seems to be a result of my trying not to make the mistakes of the latter mode, such as dismissing MWI because it “seems weird”, and to bite the bullets instead. But I ended up drawing consequences from a mental model that didn’t account for some intuitively, obviously significant factors.
Oh, also, I am glad that good writing is back here, unlike in the Top Tier Intelligence and Fleshling Story posts.
(I probably should have written, or even posted, in parallel with reading, because now I have a feeling I forgot at least one point I wanted to make.)
P.S. Oh, and one more note: it seems that by avoiding social networks since I was a kid, because I heard they are harmful for your mind, I managed to end up with no actual personal experience of what is bad about social networks. But after some of the examples in this post, I have a better reference point for what was being talked about in “Clickbait Subtly Destroys Intelligence”.
Actually… I looked it up, and it is “Is Clickbait Destroying Our General Intelligence?”. Which reminds me of another point: I suspect that the causal story behind the “bomb data centers” meme was not somebody lying, but somebody recalling from memory without the thought that such a serious allegation might be worth actually looking up rather than relying on unreliable memory. Or never even having heard how unreliable memory is, and thinking “I vividly remember it, so it is certainly true”. And then refusing to back off, because that would be Dangerous Admitting of Weakness.
P.P.S. Oh, I totally forgot. Another place where I have an obvious alternative hypothesis about what these people think is the “you should commit some violence” part. Just… I recently heard for the first time about the handicap concept in signaling theory. And it immediately sparked the idea that it would explain why people talk about all that desperate violence. Because if you people truly believe that the world is ending, then you would be ready to show something more than cheap words, to make some great sacrifice that only people in such great desperation would make. It doesn’t even have to be about AI, actually. Just some great, expressive sacrifice of the truly desperate. Like, maybe start sawing off your own legs. Or stop eating for half a month. Though, okay, that one doesn’t prove how much you truly believe the world is ending; people who stop eating believe in much less extreme things.
Of course, all this “signaling theory” may be a complicated matter, and there is a tiny chance my half-second idea was not quite correct. Like, I was surprised that e.g. Robin Hanson didn’t show up to explain why of course you need to start sawing off your legs, or, even more desperate, pile your own money into heaps and burn it.
I suspect that the causal story behind the “bomb data centers” meme was not somebody lying, but somebody recalling from memory without the thought that such a serious allegation might be worth actually looking up rather than relying on unreliable memory.
I agree with this and think it’s an important thing to be aware of, but also, importantly, it is still ~~lying~~ spreading misinformation.
It’s still useful to maintain the distinction between lying (making claims while believing they’re false) and unwittingly spreading misinformation (making false claims while not being aware they’re false). Including when there are no attempts to check for correctness, even by understanding what is being said, even if it’s clear there’s something funny going on, even when there are tribal or other incentives to keep saying it unchanged and disincentives to check its correctness.
(Not maintaining this distinction leads to forming and spreading misinformed models of people who are saying false things. There are often incentives to keep saying that people saying false things are lying, and disincentives to keep making clear the distinction between lying and misinformation. One could even argue calling them out like that is a good incentive, but that’s a rather self-defeating line of argument, since it’s disincentivising misinformation through misinformation, or even through lying, depending on if you understand the argument when following its recommendations.)
Thanks! I fully agree. What I said was wrong and I edited my comment to reflect that.
It’s somewhat nice that “spreading misinformation” is an umbrella covering doing so intentionally (lying) and unintentionally. It is unpleasant that another way to say “misinformation” is “fake news” and accusations of such seem to be available as a cheap, fully general, attack on political opponents. I guess it would be pretty nice if people always used citations when talking about things, but that seems like an unrealistic ideal.
I didn’t vote on your comment, but at a quick glance, it’s pretty difficult to read. Maybe this contributed to the downvotes? E.g. the first sentence:
Oh, it is a long post and have a lot of thoughts I wanted to comment about it.
Could be “Oh, this is a long post, and I have a lot of thoughts I want to comment about it.”
Maybe your comment contains some good ideas, I don’t know. If it were easier to read, I’d have been much more likely to take the time to read it. I hope this helps.
Thank you for your feedback. It wasn’t at all in my hypothesis space. English is not my native language, so it is not obvious to me what is easy to read in it (though the missing “I” before “have” certainly wasn’t intentional!). Maybe I should try to use an LLM for corrections...
Yeah, makes sense! LW is probably especially tough on non-native English speakers, compared to the average place on the Internet.
You should probably tell the LLM to only make grammatical edits without changing anything else, so that the text doesn’t sound too much like an LLM (which may also get you downvoted). Although I imagine it’s hard for a non-native English speaker to verify whether something sounds like an LLM. You should put anything “substantially edited or revised by an LLM” in an LLM content block (see the LessWrong LLM policy).
I have some experience talking with LLMs in English (Grok is just ludicrously worse in all dimensions when it talks in Russian), so I think I will be able to call out at least some LLM flavors (like “it is not [that]; it is [this]!”).
Though I think I will ask it to highlight the edits and then manually apply only those; it seems like a good ratio between effort spent and control over the resulting text.
I also suspect that in my case it is more than just not being a native English speaker. Before I learned English, I spent a few years reading English text through Google Translate, and so became completely desensitized to even the most egregious mistakes in a text (I especially remember “intelligence is not a superpower” regularly being translated as something like “spying is not a great state”, which I got used to automatically reinterpreting in my head).
What I am most uncertain about: should I try to pick a shorter piece of my original comment and repost it with revised wording? Or restrict myself to revising the original collapsed comment and writing something completely new?
I guess it’s mainly too long, and therefore unclear? There are at least 4 main points that you are addressing in one large comment, along with a bunch of smaller issues. Splitting it into multiple, targeted ones would make it easier to react to them—it would also make it easier for you to work out what people don’t like about it.
Because if you people truly believe that the world is ending, then you would be ready to show something more than cheap words, to make some great sacrifice that only people in such great desperation would make.
This seems reasonable. Signalling. One would hope that actions like dedicating one’s career to AI alignment and AI ethics, or leaving AI companies over ethical concerns, would count as such a signal, and there are many people doing such things. But I don’t know how compelling these actions actually are to people. I could be making way more money if I were just trying to make money instead of trying to figure out how to work on AI ethics. That’s a pretty significant sacrifice, maybe not as significant as cutting off a leg, but in some ways it might actually be more significant. It seems hard to quantify.
But to be more visible, I have considered doing that “human statue” thing buskers do with a sign saying “if I can pause everything, you can pause AGI development”.
Maybe I wasn’t clear enough, but changing jobs is a good gesture of political expression if you are proposing to vote for Blue. If you are saying something as extreme as “the world is ending”, you should be ready to send a signal just as extreme, and changing jobs isn’t that.
To be clear: by ‘signaling’ I don’t mean that people are calculating how much hard-to-fake Bayesian evidence changing jobs provides; they just don’t feel that changing jobs is anywhere near as world-shattering as a belief that the world is ending. Eh. I don’t feel that I’m conveying it well. I think of something like a person getting a cancer diagnosis: people expect that, if it is true, then Walter White is the maximum level of calmness allowed (that is to say, breaking bad into a drug dealer).
When writing, I was unsure which was less of a problem: splitting the text into 7 different comments, or having one comment that is too long. Now it seems that having one long comment was certainly a mistake (as was the pile of P.S.es). I am not sure whether it is possible to fix it by reposting it as many separate comments.
And now I know why I so rarely try to post (or even comment) on LessWrong. I was almost puzzled about why. Now I am not. I just commented something and got karma −4 on it, and I have no idea why (ok, I have too many ideas why, I just don’t know which one is right).
Suddenly I have much more appreciation for the terrible system of e.g. Twitter, where you can’t downvote things. Probably I should have written it there...
No, really, can somebody who voted against say why they think my comment is so bad?