I don’t think that such a post would imply “in order to get humans to believe them” anywhere near as much as this one did. Not every post implies the same things, after all!
I wasn’t talking about human behavior or psychology.
If you do not intend to get humans to believe the arguments, you are correct that epistemic learned helplessness doesn’t apply. I do find this sort of odd, however.
To elaborate on my objection: you seem to be making extremely fine distinctions that have no practical difference whatsoever.
Heated rhetoric leads to violence because the more urgent and the more threatening you make something sound, the more you encourage people to take matters into their own hands. It’s not possible to claim that a small group of people may be leading to the extinction of humanity without encouraging violence, because the urgency and the size of the threat go hand in hand with using violence to stop it.
You seem to think that just because you don’t say some magic words like “murderer”, you haven’t encouraged violence. I don’t really see any reasoning here as to why these words are much worse than the ones you think are okay to use. Just because you don’t call someone evil doesn’t mean you aren’t pushing the urgency of “we’re all going to die unless we do something” and thus leading to violence. My only guess here is that since people think it’s okay to kill murderers, calling someone a murderer encourages violence. But people also think it’s okay to kill someone who poses a huge, urgent threat.
As in, you need to stop promoting the idea of AI ending humanity. Never mind how you present it, or whether or not your statement is true. No argument is offered on whether it is true.
You fail to model normal people. What they are saying already assumes it’s not true, because neither they nor their audience think it is, and they probably think you don’t believe it’s true either. They are not saying “you can’t tell the truth”, they are saying “you should stop exaggerating”.
The trap or plan is clear. Either you support violence, and so you are horrible and must be stopped, or you don’t, in which case you can be ignored.
Pointing out “your argument has uncomfortable consequences which you don’t accept” is legitimate. They are not saying that your position leads to violence, so you should commit violence. They are saying that your position leads to violence, so you should give up your position.
Showing that your position leads to unacceptable results is called a reductio ad absurdum and yes, it’s a real thing.
The implicit idea is that you should use decisive arguments because then people will have good reason to believe them.
The point is that, at least in most cases, decisive arguments shouldn’t work, and if said to rational people, won’t work, because of epistemic learned helplessness.
It is definitely concerning that a lot of LW will immediately go into tribal mode when faced with a very heavily-qualified statement that <person their tribe does not like> is not a cartoon supervillain.
Like a lot of advice, some people need to follow this more and some need to follow it less.
The quokka meme is a thing for a reason. Some people are for all practical purposes cartoon supervillains, and quokkas can’t recognize it. What you’re seeing is pushback against quokkadom.
Speak The Truth Even If Your Voice Trembles
I don’t think this is consistent. If you think it is true that someone is evil, or that someone brought it on themselves, then by definition you are speaking the truth when you say that. I don’t see any meaningful difference between “you brought it on yourselves” and “you’re gambling with humanity’s future”, except that the latter is something you like to say and the former isn’t. On the level of “which one could be read by an extremist as a call to violence”, I’d say that both of them can.
“Best humans still outperform” matters because we already know that machines have certain built-in advantages over humans, such as never getting drunk or tired. To meaningfully outdo a human, the machine has to outdo a non-malfunctioning human. Outdoing the human because it never gets tired may be useful in a practical sense, but it doesn’t show that the machine is smarter than the human. Furthermore, you shouldn’t be comparing the AI against an average human anyway unless the comparison is done using an average AI (if you can even figure out a definition of ‘average AI’ that can’t be gerrymandered).
(And no, you don’t ignore AI hallucinations, because the hallucination is inextricably a part of the AI’s reasoning process. There’s no such thing as ‘the AI is in a non-hallucinatory state right now’ the way there is such a thing as a human who isn’t tired right now.)
Epistemic learned helplessness should control here, even if it’s inconvenient for rationalists trying to argue people into unusual beliefs.
In an ideal world that would mean that the inefficiency also acts as a tax on irrationality.
Why is the insurance inefficient? The whole point of insurance is to spread risk; some people get out more than they pay in (or could possibly pay in), because nobody knows in advance who’s going to need the expensive procedure. If insurance couldn’t do this, it wouldn’t be “efficient”; it would be useless.
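To make the risk-spreading arithmetic concrete, here is a toy simulation. The pool size, procedure cost, and risk rate are made-up numbers chosen purely for illustration, not actuarial data:

```python
import random

# Made-up illustrative numbers: a 1-in-1,000 annual chance of a $500,000 procedure.
POOL_SIZE = 100_000
PROCEDURE_COST = 500_000
ANNUAL_RISK = 1 / 1_000

# The actuarially fair premium is just the expected cost per person;
# a real insurer adds overhead and profit on top of this.
fair_premium = PROCEDURE_COST * ANNUAL_RISK  # $500/year

random.seed(0)  # fixed seed so the illustration is reproducible
claims = sum(1 for _ in range(POOL_SIZE) if random.random() < ANNUAL_RISK)

total_premiums = fair_premium * POOL_SIZE
total_payouts = claims * PROCEDURE_COST

print(f"Fair premium per person: ${fair_premium:,.0f}/year")
print(f"Claims this year: {claims} of {POOL_SIZE:,} insured")
print(f"Premiums collected: ${total_premiums:,.0f}")
print(f"Payouts owed:       ${total_payouts:,.0f}")
# Each claimant gets out 1,000x what they paid in. That asymmetry is the
# product working as intended, not an inefficiency.
```

With these numbers everyone pays about $500 a year, roughly 100 people collect $500,000 each, and premiums roughly cover payouts. The “some people get out far more than they pay in” asymmetry is exactly what makes the pool work.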
For instance, it makes no sense to have a “no man left behind” policy in war, except that it’s really useful for motivational purposes. It leads to more dead people than it saves in the long term.
Do you also object on similar grounds to insurance for expensive medical conditions? “Man is left behind” is a dangerous medical condition with fewer microbes than usual, and the cost of the insurance premium is implicitly part of the servicemen’s pay even though there isn’t a line item for it.
Your reasoning could be used to argue that servicemen shouldn’t get paid at all. Aside from motivational purposes, paying the soldiers costs money and accomplishes nothing, and there’s an exchange rate between money and lives, so giving them pay has a real cost in lives.
I don’t answer adversarial questions.
You linked to “I don’t answer questions”, which is very different.
Long tasks require being able to decompose your problem into subgoals and meet them, rather than just copy something from the training set.
This may or may not be covered by the examples you already gave, depending on how broadly you interpret them.
How would you rate this story? It has some continuity errors, but it got my eyes moist.
Some subject matter is almost guaranteed to make a lot of people’s eyes moist unless it’s written exceptionally badly. Treating a story based on such subject matter as good writing because of people’s reactions to it is cheating.
I’ve never heard of a D&D-esque universe where that happens. There are worlds which can be described in a vaguely similar way, but there’s always an explicit recipient, whether intended or otherwise, of the thing you’re getting rid of. You can cure a fever by transferring it to someone, or maybe by tossing it out the window for the next person who walks by to get it, but you don’t cure a fever and have some random guy with no connection get it.
I’m sure there are worlds that do this, but it’s not very common at all. And even a world that had it would tell the reader about it, not just use it in an analogy about something else.
Shortly after this essay went up, I read Audrey Henson’s two-part reporting on Shy Girl at The Drey Dossier:
Which has this comment, which should be required reading. And I noticed some of it before reading the comment: asking for a DRM-free copy does not mean piracy, and even if it were pirated, it would contain the same text as the original and would therefore still be useful for AI detection.
That’s not what they actually said; they did make the wrong claim that there’s no evidence.
According to your own post:
My doctor just said, she doesn’t know of any evidence
which is not a claim that there is no evidence.
Instead of focusing on saying “this is wrong,” try to add new information that is related to the discussion and that could stimulate readers’ minds.
That allows the trolls to control the direction of the discussion by picking a topic and having other people add new information to it. Some topics are inherently inflammatory or are inherently likely to attract people who behave badly, so even letting the troll pick the topic can be a disaster. You do not want to have a troll argue that Jews descend from Cthulhu, provide “new information”, and attract people sincerely arguing that Jews are not descendants of Cthulhu but are as evil as if they were, or even just attract Holocaust deniers.
You get to specify that the trolley doesn’t have brakes because you can say “I will only apply the conclusion I am getting from the trolley problem in real life situations that are similar to the trolley problem in that there are no (metaphorical) brakes.”
The equivalent for your slavery contract is “I will permit slavery only in situations where I will starve if a slavery contract is not allowed”. You can’t do this, because you can’t have a slavery law that applies only in a specific, unlikely, scenario. A law permitting slavery will apply in the vastly more common scenario where the slaver is still incentivised to get some money from you and sell you the food.
I tried it myself with a piece of fiction I had posted on Questionable Questing (which requires registration, so Claude could not have seen it). Current free Claude uses Sonnet 4.6. Claude said it was rational xianxia fiction written by someone with a traditional scifi background (which was correct, but doesn’t require style analysis), and could not identify me. Its best guess was that I am Ack.