Evan Morikawa?
Weirdly aggressive post.
I feel like maybe what’s going on here is that you do not know what’s in The Bell Curve, so you assume it is some maximally evil caricature? Whereas what’s actually in the book is exactly Scott’s position, the one you say is “his usual ‘learn to love scientific consensus’ stance”.
If you’d stop being weird about it for just a second, could you answer something for me? What is one (1) position that Murray holds about race/IQ and Scott doesn’t? Just name a single one, I’ll wait.
Or maybe what’s going on here is that you have a strong “SCOTT GOOD” prior as well as a strong “MURRAY BAD” prior, and therefore anyone associating the two must be on an ugly smear campaign. But there’s actually zero daylight between their stances and both of them know it!
Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn’t use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.
Strong disagree. If I know an important true fact, I can let people know in a way that doesn’t cause legal liability for me.
Can you grapple with the fact that the “vague insinuation” is true? Like, assuming it’s true and that Cade knows it to be true, your stance is STILL that he is not allowed to say it?
Your position seems to amount to the epistemic equivalent of ‘yes, the trial was procedurally improper, and yes, the prosecutor deceived the jury with misleading evidence, and no, the charge can’t actually be proven beyond a reasonable doubt, but he’s probably guilty anyway, so what’s the issue’. I think the issue is journalistic malpractice. Metz has deliberately misled his audience in order to malign Scott on a charge which you agree cannot be substantiated, because of his own ideological opposition (which he admits). To paraphrase the same SSC post quoted above, he has locked himself outside of the walled garden. And you are “Andrew Cord”, arguing that we should all stop moaning because it’s probably true anyway, so the tactics are justified.
It is not malpractice, because Cade had strong evidence for the factually true claim! He just didn’t print the evidence. The evidence was of the form “interview a lot of people who know Scott and decide who to trust”, which is a difficult type of evidence to put into print, even though it’s epistemologically fine (in this case IT LED TO THE CORRECT BELIEF so please give it a rest with the malpractice claims).
Here is the evidence of Scott’s actual beliefs:
https://twitter.com/ArsonAtDennys/status/1362153191102677001
As for your objections:
First of all, this is already significantly different from, and more careful and qualified than, what Metz implied, and that’s after we read into it more than what Scott actually said. Does that count as “aligning yourself”?
This is because Scott is giving a maximally positive spin on his own beliefs! Scott is agreeing that Cade is correct about him! Scott had every opportunity to say “actually, I disagree with Murray about...” but he didn’t, because he agrees with Murray just like Cade said. And that’s fine! I’m not even criticizing it. It doesn’t make Scott a bad person. Just please stop pretending that Cade is lying.
Relatedly, even if Scott did truly believe exactly what Charles Murray does on this topic, which again I don’t think we can fairly assume, he hasn’t said that, and that’s important. Secretly believing something is different from openly espousing it, and morally it can be very different if one believes that openly espousing it could lead to it being used in harmful ways (which, from the above, Scott clearly does believe, even in the qualified form which he may or may not hold). Scott is going to some lengths and being very careful not to espouse it openly and without qualification, and clearly believes it would be harmful to do so, so it’s clearly dishonest and misleading to suggest that he “aligns himself” with Charles Murray on this topic. Again, this is even after granting the very shaky proposition that he secretly does align with Charles Murray, which I think we have established is a claim that cannot be substantiated.

Scott so obviously aligns himself with Murray that I knew it before that email was leaked or Cade’s article was written, as did many other people. At some point, Scott even said that he will talk about race/IQ in the context of Jews in order to ease the public into it, and then he published this. (I can’t find where I saw Scott saying it, though.)
Further, Scott, unlike Charles Murray, is very emphatic about the fact that, whatever the answer to this question, it should not affect our thinking on important issues or our treatment of anyone. Is this important addendum not elided by the idea that he ‘aligned himself’ with Charles Murray? Would that not be a legitimate “gripe”?
Actually, this is not unlike Charles Murray, who also says this should not affect our treatment of anyone. (I disagree with the “thinking on important issues” part, which Scott surely does think it affects.)
The epistemology was not bad behind the scenes; it was just not presented to the readers. That is unfortunate, but it is hard to write an NYT article (there are limits on how many receipts you can put in an article, and some of the sources may have been off the record).
Cade correctly informed the readers that Scott is aligned with Murray on race and IQ. This is true and informative, and some people here doubted it until the one email leaked. Basically, Cade’s presented evidence sucked, but someone going with the heuristic “it’s in the NYT so it must be true” would have been correctly informed.
I don’t know if Cade had a history of “tabloid rhetorical tricks” but I think it is extremely unbecoming to criticize a reporter for giving true information that happens to paint the community in a bad light. Also, the post you linked by Trevor uses some tabloid rhetorical tricks: it says Cade sneered at AI risk but links to an article that literally doesn’t mention AI risk at all.
What you’re suggesting amounts to saying that on some topics, it is not OK to mention important people’s true views because other people find those views objectionable. And this holds even if the important people promote those views and try to convince others of them. I don’t think this is reasonable.
As a side note, it’s funny to me that you link to Against Murderism as an example of “careful subtlety”. It’s one of my least favorite articles by Scott, and while I don’t generally think Scott is racist, that one almost made me change my mind. It is just a very bad article. It tries to define racism out of existence. It doesn’t even really attempt to give a good definition; Scott is a smart person, he could do MUCH better than those definitions if he tried. For example, a major part of the rationalist movement was originally about cognitive biases, yet “racism defined as cognitive bias” does not appear in the article at all. Did Scott really not think of it?
What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he did.

It was quite straightforward, actually. Don’t be autistic about this: anyone reasonably informed who is reading the article knows what Scott is accused of thinking when Cade mentions Murray. He doesn’t make the accusation super explicit, but (a) people here would be angrier if he did, not less angry, and (b) that might actually pose legal issues for the NYT (I’m not a lawyer).
What Cade did reflects badly on Cade in the sense that it is very embarrassing to cite such weak evidence. I would never do that because it’s mortifying to make such a weak accusation.
However, Scott has no possible gripe here. Cade’s article makes embarrassing logical leaps, but the conclusion is true, and the reporting behind the article (not featured in the article) was enough to show it true, so even a claim of being Gettier-cased does not work here.
Scott thinks very highly of Murray and agrees with him on race/IQ. Pretty much any implication one could reasonably draw from Cade’s article regarding Scott’s views on Murray or on race/IQ/genes is simply factually true. Your hypothetical author in Alabama has Greta Thunberg posters in her bedroom here.
Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?
Does it bother you that your prediction didn’t actually happen? Scott is not dying in prison!
This objection is just ridiculous, sorry. Scott made it an active project to promote a worldview that he believes in and is important to him—he specifically said he will mention race/IQ/genes in the context of Jews, because that’s more palatable to the public. (I’m not criticizing this right now, just observing it.) Yet if the NYT so much as mentions this, they’re guilty of killing him? What other important true facts about the world am I not allowed to say according to the rationalist community? I thought there was some mantra of like “that which can be destroyed by the truth should be”, but I guess this does not apply to criticisms of people you like?
The evidence wasn’t fake! It was just unconvincing. “Giving unconvincing evidence because the convincing evidence is confidential” is in fact a minor sin.
I assume it was hard to substantiate.
Basically it’s pretty hard to find Scott saying what he thinks about this matter, even though he definitely thinks this. Cade is cheating with the citations here but that’s a minor sin given the underlying claim is true.
It’s really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott’s defenders. It reminds me of a guy I know who was cheating on his girlfriend, and she suspected this, and he got really mad at her. Like, “how can you believe I’m cheating on you based on such flimsy evidence? Don’t you trust me?” But in fact he was cheating.
I think for the first objection about race and IQ I side with Cade. It is just true that Scott thinks what Cade said he thinks, even if that one link doesn’t prove it. As Cade said, he had other reporting to back it up. Truth is a defense against slander, and I don’t think anyone familiar with Scott’s stance can honestly claim slander here.
This is a weird hill to die on because Cade’s article was bad in other ways.
What position did Paul Christiano get at NIST? Is it a leadership position?
The problem with that is that it sounds like the common error of “let’s promote our best engineer to a manager position”, which doesn’t work because the skills required to be an excellent engineer have little to do with the skills required to be a great manager. Christiano is the best of the best in technical work on AI safety; I am not convinced putting him in a management role is the best approach.
Eh, I feel like this is a weird way of talking about the issue.
If I didn’t understand something and, after a bunch of effort, I managed to finally get it, I will definitely try to summarize the key lesson to myself. If I prove a theorem or solve a contest math problem, I will definitely pause to think “OK, what was the key trick here, what’s the essence of this, how can I simplify the proof”.
Having said that, I would NOT describe this as asking “how could I have arrived at the same destination by a shorter route”. I would just describe it as asking “what did I learn here, really”. Counterfactually, if I had to solve the math problem again without knowing the solution, I’d still have to try a bunch of different things! I don’t have any improvement on this process, not even in hindsight; what I have is a lesson learned, but it doesn’t feel like a shortened path.
Anyway, for the dates thing, what is going on is not that EY is super good at introspecting (lol), but rather that he is bad at empathizing with the situation. Like, go ask EY if he never slacks on a project; he has in the past said he is often incapable of getting himself to work even when he believes the work is urgently necessary to save the world. He is not a person with a 100% solved, harmonious internal thought process; far from it. He just doesn’t get the dates thing, so he assumes it is trivial.
This is interesting, but how do you explain the observation that LW posts are frequently much, much longer than they need to be to convey their main point? They take forever to get started (“what this is NOT arguing: [list of 10 points]”, etc.) and take forever to finish.
I’d say that LessWrong has an even stronger aesthetic of effort than academia. It is virtually impossible to have a highly-voted lesswrong post without it being long, even though many top posts can be summarized in as little as 1-2 paragraphs.
Without endorsing anything, I can explain the comment.
The “inside strategy” refers to the strategy of safety-conscious EAs working with (and in) AI capabilities companies like OpenAI; Scott Alexander has discussed this here. See the “Cooperate / Defect?” section.
“Quokkas gonna quokka” is a reference to this classic tweet, which accuses the rationalists of being infinitely trusting, like the quokka (an animal which has no natural predators on its island and will come up and hug you if you visit). Rationalists as quokkas is a bit of a meme; search “quokka” on this page, for example.
In other words, the argument is that the rationalists cannot imagine that the AI companies would lie to them, and that this is ridiculous.
This seems harder, you’d need to somehow unfuse the growth plates.
It’s hard, yes—I’d even say it’s impossible. But is it harder than the brain? The difference between growth plates and whatever is going on in the brain is that we understand growth plates and we do not understand the brain. You seem to have a prior of “we don’t understand it, therefore it should be possible, since we know of no barrier”. My prior is “we don’t understand it, so nothing will work and it’s totally hopeless”.
A nice thing about IQ is that it’s actually really easy to measure. Noisier than measuring height, sure, but not terribly noisy.
Actually, IQ test scores increase by a few points if you take the test again (so-called test-retest gains, also known as practice effects). Additionally, measured IQ varies substantially depending on which IQ test you use. Because of these factors, it is gonna be pretty hard to convince people you’ve increased your patients’ IQ by 3 points: you’ll need a nice large sample with a proper control group in a double-blind study, and people will still have doubts.
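To put rough numbers on that, here is a minimal power-calculation sketch (my own illustration, assuming the conventional 15-point population SD for IQ scores) of how many subjects per arm such a study would need:

```python
# Rough power calculation: how big a study do you need to detect a
# 3-point IQ gain? Assumes (my assumption, for illustration) the
# conventional population SD of ~15 points, so the effect is
# d = 3/15 = 0.2, "small" by Cohen's rule of thumb.
from statsmodels.stats.power import TTestIndPower

effect_size = 3 / 15  # Cohen's d for a 3-point gain
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_group:.0f} subjects per group")  # roughly 400 per arm
```

And retest gains make it worse: the raw before/after difference is biased upward, which is exactly why the control group and blinding are non-negotiable.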
More intelligence enables progress on important, difficult problems, such as AI alignment.
Lol. I mean, you’re not wrong with that precise statement, it just comes across as “the fountain of eternal youth will enable progress on important, difficult diplomatic and geopolitical situations”. Yes, this is true, but maybe see if you can beat Botox at skin care before jumping to the fountain of youth. And there may be less fantastical solutions to your diplomatic issues. Also, finding the fountain of youth is likely to backfire and make your diplomatic situation worse. (To explain the metaphor: if you summon a few von Neumanns into existence tomorrow, I expect to die of AI sooner, on average, rather than later.)
This is an interesting post, but it has a very funny framing. Instead of working on enhancing adult intelligence, why don’t you start with:
1. Showing that many genes can be successfully and accurately edited in a live animal (ideally a human). As far as I know, this hasn’t been done before! Only small edits have been demonstrated.
2. Showing that editing embryos can result in increased intelligence. I don’t believe this has been done even in animals, let alone humans.
Editing the brains of adult humans and expecting intelligence enhancement is like 3-4 impossibilities away from where we are right now. Start with the basic impossibilities and work your way up from there (or, more realistically, give up when you fail at even the basics).
My own guess, by the way, is that editing an adult human’s genes for increased intelligence will not work, because adults cannot be easily changed. If you think they can, I recommend trying the following instead of attacking the brain; they all should be easier because brains are very hard:
- Gene editing to make people taller. You’d be an instant billionaire. (I expect this is impossible, but you seem to be going by which genes are expressed in adult cells, and a lot of the genes governing stature will be expressed in adult cells.)
- Gene editing to enlarge people’s penises. You’ll be swimming in money! Do this first and you can have infinite funding for anything else you want to do.
- Gene editing to cure acne. Predisposition to acne is surely genetic.
- Gene editing for transitioning (FtM or MtF).
- Gene editing to cure male pattern baldness.
[Exercise for the reader: generate 3-5 more examples of this general type, i.e. highly desirable body modifications that involve coveting another human’s reasonably common genetic traits, and for which any proposed gene therapy can be easily verified to work just by looking.]
All of the above are instantly verifiable (on the other hand, “our patients increased 3 IQ points, we swear” is not as easily verifiable). They all also will make you rich, and they should all be easier than editing the brain. Why do rationalists always jump to the brain?
The market has very strong incentives to solve the above, by the way, and they don’t involve taboos about brain modification or IQ. The reason they haven’t been solved via gene editing is that gene editing in adults simply doesn’t work nearly as well as you want it to.
A platonically perfect Bayesian given complete information and with accurate priors cannot be substantially fooled. But once again this is true regardless of whether I report p-values or likelihood ratios. p-values are fine.
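To illustrate in the simplest toy setting (my own sketch, not part of the original point: one observation x ~ N(mu, 1), a one-sided test of a point null against a point alternative), the reported p-value pins down the observed z statistic, so a reader who knows the design can recover the exact likelihood ratio from the p-value alone:

```python
# Recovering the likelihood ratio from a reported p-value, in a toy
# setting assumed for illustration: one observation x ~ N(mu, 1),
# one-sided test of H0: mu = 0 vs the point alternative H1: mu = delta.
import numpy as np
from scipy.stats import norm

def likelihood_ratio_from_p(p: float, delta: float) -> float:
    z = norm.isf(p)  # the one-sided p-value pins down the observed z
    # LR = N(z | delta, 1) / N(z | 0, 1) = exp(z * delta - delta**2 / 2)
    return float(np.exp(z * delta - delta**2 / 2))

print(likelihood_ratio_from_p(0.05, delta=1.0))  # ~3.1 in favor of H1
```

The same information is there either way; the format of the report doesn’t change what an ideal reader can extract.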
I think the problem to grapple with is that I can cover the rationals in [0,1] with countably many intervals of total length only 1/2 (e.g., enumerate the rationals in [0,1], and place an interval of length 1/4 around the first rational, an interval of length 1/8 around the second, etc.). This is not possible with the reals; that’s the insight that makes measure theory work!
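Spelling out the arithmetic of that covering as a worked sum:

```latex
% Interval of length 2^{-(n+1)} around the n-th rational; total length:
\sum_{n=1}^{\infty} \frac{1}{2^{n+1}} = \frac{1/4}{1 - 1/2} = \frac{1}{2}.
```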
The covering means that the rationals in an interval cannot have a well-defined length or measure which behaves reasonably under countable unions. This is a big barrier to doing probability theory. The same problem happens with ANY countable set; the reals only avoid it by being uncountable.