Hi. I’m Gareth McCaughan. I’ve been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is “gjm” (not “Gjm” despite the wiki software’s preference for that capitalization). Elsewhere I generally go by one of “g”, “gjm”, or “gjm11”. The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say “mathematician” but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren’t.
If you’re wondering why some of my very old posts and comments are at surprisingly negative scores, it’s because for some time I was the favourite target of old-LW’s resident neoreactionary troll, sockpuppeteer and mass-downvoter.
If this post had just said “I think some people may feel strongly about AI x-risk for reasons that ultimately come down to some sort of emotional/physical pain whose origins have nothing to do with AI; here is why I think this, and here are some things you can do that might help find out whether you’re one of them and to address the underlying problem if so”, then I would consider it very valuable and deserving of attention and upvotes and whatnot. I think it’s very plausible that this sort of thing is driving at least some AI-terror. I think it’s very plausible that a lot of people on LW (and elsewhere) would benefit from paying more attention to their bodies.
… But that’s not what this post does. It says you have to be “living in a[...] illusion” to be terrified by apocalyptic prospects. It says that if you are “feeling stressed” about AI risks then you are “hallucinating”. It says that “what LW is actually about” is not actual AI risk and what to do about it (but, by implication, this alleged “game” of which Eliezer Yudkowsky is the “gamesmaster” that works by engaging everyone’s fight-or-flight reactions to induce terror). It says that, for reasons beyond my understanding, it is impossible to make actual progress on whatever real AI risk problems there might be while in this stressed-because-of-underlying-issues state of mind. It says that “the reason” (italics mine) AI looks like a big threat is because the people to whom it seems like a big threat are “projecting [their] inner hell onto the external world”. And it doesn’t offer the slightest shred of evidence for any of this; we are just supposed to, I dunno, feel in our bodies that Valentine is telling us the truth, or something like that.
I don’t think this is good epistemics. Maybe there is actually really good evidence that the mechanism Valentine describes here is something like the only way that stress ever arises in human beings. (I wouldn’t be hugely surprised to find that it’s true for the stronger case of terror, and I could fairly easily be convinced that anyone experiencing terror over something that isn’t an immediate physical threat is responding suboptimally to their situation. Valentine is claiming a lot more than that, though.) But in that case I want to see the really good evidence, and while I haven’t gathered any actual statistics on how often people claiming controversial things with great confidence but unwilling to offer good evidence for them turn out to be right and/or helpful, I’m pretty sure that many of them don’t. Even more so when they also suggest that attempts to argue with them about their claims are some sort of deflection (or, worse, attempts to keep this destructive “game” going) that doesn’t merit engaging with.
Full disclosure #1: I do not myself feel the strong emotional reaction to AI risk that many people here do. I do not profess to know whether (as Valentine might suggest) this indicates that I am less screwed up psychologically than people who feel that strong emotional reaction, or whether (as Eliezer Yudkowsky might suggest) it indicates that I don’t understand the issues as fully as they do. I suspect that actually it’s neither of those (though either might happen to be true[1]) but just that different people get more or less emotionally involved in things in ways that don’t necessarily correlate neatly with their degree of psychological screwage or intellectual appreciation of the things in question.
[1] For that matter, the opposite of either might be true, in principle. I might be psychologically screwed up in ways that cut me off from strong emotions I would otherwise feel. I might have more insight into AI risk than the people who feel more strongly, insight that helps me see why it’s not so worrying, or why being scared doesn’t help with it. I think these are both less likely than their opposites, for what it’s worth.
Full disclosure #2: Valentine’s commenting guidelines discourage commenting unless you “feel the truth of [that Valentine and you are exploring the truth together] in your body” and require “reverent respect”. I honestly do not know, and don’t know how I could tell with confidence, whether Valentine and I are exploring the truth together; at any rate, I do not have the skill (if that’s what it is) of telling what someone else is doing by feeling things in my body. I hope I treat everyone with respect; I don’t think I treat anyone with reverence, nor do I wish to. If any of that is unacceptable to Valentine, so be it.
Clarification for the avoidance of doubt: I don’t have strong opinions on just what probability we should assign to (e.g.) the bulk of the human race being killed-or-worse as a result of the actions of an AI system within the next century, nor on what psychological response is healthiest for any given probability. The criticisms above are not (at least, not consciously) some sort of disguise for an underlying complaint that Valentine is trying to downplay an important issue, nor for anger that he is revealing that an emperor I admire has no clothes. My complaint is exactly what I say it is: I think this sort of bulveristic “I know you’re only saying this because of your psychological problems, which I shall now proceed to reveal to you; it would be a total waste of time to engage with your actual opinions because they are merely expressions of psychological damage, and providing evidence for my claims is beneath me”[2] game is not only rude (which Valentine admits, and I agree that it is sometimes helpful or even necessary to be rude) but usually harmful and very much not the sort of thing I want to see more of on Less Wrong.
[2] I do not claim that Valentine is saying exactly those things. But that is very much the general vibe.
(Also somewhat relevant, though not especially to any of what I’ve written above, and dropped here without further comment: “Existential Angst Factory”.)