I’m an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it’s about a post, you can add [q] or [nq] at the end if you want me to quote or not quote it in the comment section.
Rafael Harth
Yeah, I think the problem is just very difficult, especially since the two moves aren’t even that different in strength. I’d try a longer but less complex debate (i.e., less depth), but even that probably wouldn’t be enough (and you’d need people to read more).
The reason my tone was much more aggressive than normal is that I knew I’d be too conflict-averse to respond to this post unless I did it immediately, while still feeling annoyed. (You’ve posted similar things before and so far I’ve never responded.) But I stand by all the points I made.
The main difference between this post and Graham’s post is that Graham just points out one phenomenon, namely that people with conventional beliefs tend to have less of an issue stating their true opinion. That seems straightforwardly true. In fact, I have several opinions that most people would find very off-putting, and I’ve occasionally received some mild social punishment for voicing them.
But Graham’s essay doesn’t justify the points you make in this post. It doesn’t even justify the sentence where you linked to it (“Any attempt to censor harmful ideas actually suppresses the invention of new ideas (and correction of incorrect ideas) instead.”) since he doesn’t discuss censorship.
What bothers me emotionally (if that helps) is that I feel like this post is emotionally manipulative to an extent that’s usually not tolerated on LessWrong. Like, it feels like it’s more appealing to the libertarian/free-speech-absolutism/independent-thinker vibe than trying to be truthseeking. Well, that and that it claims several things that apply to me since I think some things should be censored. (E.g., “The most independent-minded people do not censor anyone at all.” → you’re not independent-minded since you want to censor some things.)
I thought I would open this up to the masses, so I have two challenges for you. I estimate that this is suitable for chess players rated <1900 lichess, <1700 chess.com or <1500 FIDE.
(Fixed debate, spent about 10 minutes.) I might have a unique difficulty here, but I’m 1900 on chess.com and am finding this quite difficult even though I did move some pieces. Though I didn’t replay the complicated line they’re arguing about, since there’s no way I could have visualized that in my head even with more time.
I would play Qxb5 because white gets doubled pawns, black’s position looks very solid, and if white puts the knight on d4 and black takes, then white also has another isolated pawn, which probably isn’t too dangerous. It looks bad for white to me. I also feel like AI A’s first response is pretty weak. Ok, the black knight no longer attacks the pawn that’s now on the b-file, but that doesn’t seem super relevant to me. Black’s protected passed pawn seems like the much bigger factor.
But the remaining debate isn’t all that helpful, since like I said I can’t follow the complex line in my head, and also because I’m very skeptical that the line even matters. The position doesn’t seem nearly concrete enough to narrow it down to one line. If I were AI B, I would spend my arguments differently.
I hadn’t, but did now. I don’t disagree with anything in it.
Is OpenAI considered part of EA or an “EA approach”? My answer to this would be no. There’s been some debate on whether OpenAI is net positive or net negative overall, but that’s a much lower bar than being a maximally effective intervention. I’ve never seen any EA advocate donating to OpenAI.
I know it was started by Musk with the intent to do good, but even that wasn’t really EA-motivated, at least not as far as I know.
I think the central argument of this post is grossly wrong. Sure, you can find some people who want to censor based on which opinions feel too controversial for their taste. But pretending that’s the sole motivation is a quintessential strawman. It’s assuming the dumbest possible reason for why the other person holds a certain position. It’s like if you criticize the Bible, and I assume it’s only because you believe the Quran is the literal word of God instead.
We do not censor other people more conventional-minded than ourselves. We only censor other people more-independent-minded than ourselves. Conventional-minded people censor independent-minded people. Independent-minded people do not censor conventional-minded people. The most independent-minded people do not censor anyone at all.
Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn’t matter how open-minded you are. It’s not a variable that goes into the calculation.
I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It actively frames any reservation about unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.
And the other reason it’s bullshit is that no sane person is against all censorship. If someone wrote a post here calling for the assassination of Eliezer Yudkowsky with his real-life address attached, we’d remove the post and ban them. Any sensible discussion is just about where to draw the line.
I would agree that this post is directionally true, in that there is generally too much censorship. I certainly agree that there’s way too much regulation. But it’s also probably directionally true to say that most people are too afraid of technology for bad reasons, and that doesn’t justify blatantly dismissing all worries about technology. We have to be more specific than that.
Any attempt to censor harmful ideas actually suppresses the invention of new ideas (and correction of incorrect ideas) instead.
This proves too much (it would imply, e.g., that we shouldn’t ban gain-of-function research).
Gonna share mine because that was pretty funny. I thought I played optimally (missing a win, whoops), but GPT-4 won anyway, without making illegal moves. Sort of.
Agreed. My impression has been for a while that there’s a super weak correlation (if any) between whether an idea goes in the right direction and how well it’s received. Since there’s rarely empirical data, one would hope for an indirect correlation where correctness correlates with argument quality, and argument quality correlates with reception, but the second link is almost non-existent in academia.
Thanks! Sooner or later I would have searched until I found it; now you’ve saved me the time.
Well I don’t remember anything in detail, but I don’t believe so; I don’t think you’d want to have a restriction on the training data.
I fully agree with your first paragraph, but I’m confused by the second. Where am I making an argument for camp #1?
I’m definitely a Camp 2 person, though I have several Camp 1 beliefs. Consciousness pretty obviously has to be physical, and it seems likely that it’s evolved. I’m in a perpetual state of aporia trying to reconcile this with Camp 2 intuitions.
I wouldn’t call those Camp #1 beliefs. It’s true that virtually all of Camp #1 would agree with this, but plenty of Camp #2 does as well. Like, you can accept physicalism and be Camp #2, deny physicalism and be Camp #2, or accept physicalism and be Camp #1 -- those are basically the three options, and you seem to be in the first group. Especially based on your second-last paragraph, I think it’s quite clear that you conceptually separate consciousness from the processes that exhibit it. I don’t think you’ll ever find a home with Camp #1 explanations.
I briefly mentioned in the post that the way Dennett frames the issue is a bit disingenuous since the Cartesian Theater has a bunch of associations that Camp #2 people don’t have to hold.
Being a physicalist and Camp #2 of course leaves you without any satisfying answer for how consciousness works. That’s just the state of things.
The synesthesia thing is super interesting. I’d love to know how strong the correlation is between having this condition, even if mild, and being Camp #2.
That’s a relative rather than an absolute claim. The article has pushback from Camp #2.
Yeah—I didn’t mean to imply that orthonormal was or wasn’t successful in dissolving the thought experiment, only that his case (plus that of some of the commenters who agreed with him) is stronger than what Dennett provides in the book.
I do remember reading “Why it’s so hard to talk about Consciousness” and shrinking back from the conflict that you wrote up as an example of how the two camps usually interact.
Thanks for saying that. Yeah, hmm, I could definitely have opened the post in a more professional/descriptive/less jokey way.
Since we seem to be unaware of the different sets of skills a human might possess, how they can be used, and how differently they are ‘processed’, it kind of seems like Camp 1 and Camp 2 are fighting over a Typical Mind Fallacy—that one’s experience is generalized to others, and this view is seen as the only one possible.
I tend to think the camps are about philosophical interpretations and not different experiences, but it’s hard to know for sure. I’d be skeptical about correlations with MBTI for that reason, though it would be cool.
(I only see two dots)
At this point, I’ve heard this from so many people that I’m beginning to wonder if the phenomenon perhaps simply doesn’t exist. Or I guess maybe the site doesn’t do it right.
I think the philosophical component of the camps is binary, so intermediate views aren’t possible. On the empirical side, the problem is that it’s not clear what evidence for one side over the other looks like. You kind of need to solve this first to figure out where on the spectrum a physical theory falls.
Book Review: Consciousness Explained (as the Great Catalyst)
I think this lacks justification for why the entire approach is a good idea. Improving mathematical accuracy in LLMs seems like a net negative to me, for the same reason that generic capability improvements are a net negative.
No. It would make a difference but it wouldn’t solve the problem. The clearest reason is that it doesn’t help with Inner Alignment at all.
Having not read the book yet, I’m kind of stumped by how different this review is from the one by Alexander. The two posts make it sound like a completely different book, especially with respect to the philosophical questions, and especially especially with respect to the expressed confidence. Is this book a neutral review of the leading theories that explicitly avoids taking sides, or is it a pitch for another I-solved-the-entire-problem theory? It can’t really be both.
Yes. There’s a stigma against criticizing people for their faith (and for good reason), but at the end of the day, it’s a totally legitimate move to update on someone’s rationality based on what they believe. Just don’t mention it in most contexts.