Another factor is that reading a book might (depending on the person) simply replace time that would otherwise be spent online doing rather unproductive things. The main question may then be less whether reading the book would be more productive than the alternative, and more whether it is engaging enough to motivate actually reading it. It’s hard to compete against the addictiveness of the web.
Though the usual money pump justification for transitivity does rely on time and a “sequence of actions”, which is strange insofar as decision theory doesn’t even model such temporal sequences.
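For concreteness, here is a minimal sketch of such a temporal sequence, with hypothetical items and fees (the standard pump against cyclic, i.e. intransitive, preferences):

```python
# Minimal money pump sketch: an agent with cyclic preferences
# (A over B, B over C, C over A) pays a small fee for each swap
# to a preferred item, and ends up where it started, but poorer.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic, hence intransitive
fee = 0.01

holding, money = "C", 0.0
for offer in ["B", "A", "C"]:        # the temporal sequence of trades
    if (offer, holding) in prefers:  # the agent prefers the offered item
        holding, money = offer, money - fee

print(holding, round(money, 2))      # "C" again, but 0.03 poorer
```

Note that the pump only works because the trades happen one after another, which is exactly the temporal structure that standard decision theory leaves unmodeled.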
Isn’t Bessel engaging in a kind of publication bias? After all, if a study never yields the desired effect size, it never concludes, so presumably you won’t hear of it.
You may have two different treatments A and B, and both have comparable effect sizes according to the literature, but you learn that all the published studies involving B were performed by Bessel, who you know engages in publication bias. The published studies for A were conducted by George, who, as is widely known, doesn’t have this bias. So presumably, if you hear a study was conducted by Bessel, you should correct the reported effect size downwards when estimating the real (underlying) effect size. If you hear a study was conducted by George, you can assume no such publication bias exists, so you shouldn’t correct the reported effect size downwards.
So, if A and B have the same overall reported effect size, you should assume that the real effect size of B is lower than that of A.
Now assume that, unbeknownst to you, Bessel didn’t actually have to withhold any studies, as the effect sizes all happened to be above the desired range. Should you still correct the reported effect size downward? Answer: yes, of course, since you don’t know that this is the case. The only thing you “know” is the published effect sizes and the fact that Bessel (the person) engages in biased reporting, which is evidence that the reported effect sizes overestimate the real effect size.
This is similar to how your subjective probability that you have won the lottery is very low before you have checked the results, even if, as luck would have it, you did indeed win the lottery.
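As a toy illustration of why the correction is warranted, here is a small simulation with made-up numbers (the true effect, the noise level, and Bessel’s publication cutoff are all hypothetical):

```python
import random

# Hypothetical setup: the true effect is 0.2, each study reports the
# true effect plus Gaussian noise, and Bessel only publishes studies
# whose reported effect clears a desired cutoff of 0.1.
random.seed(0)
true_effect, noise_sd, cutoff, n_studies = 0.2, 0.3, 0.1, 100_000

reported = [random.gauss(true_effect, noise_sd) for _ in range(n_studies)]
published = [r for r in reported if r >= cutoff]  # Bessel's filter

print(sum(reported) / len(reported))    # ~0.20: George's unbiased average
print(sum(published) / len(published))  # ~0.38: Bessel's inflated average
```

The same downward correction applies in a lucky run where every study happens to clear the cutoff: the observer only knows the reporting rule, not whether it actually fired.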
For people who are not very interested in some topic like math, presenting the topic as a story about the struggles of historical figures usually makes it much more palatable. Quanta Magazine and Veritasium use this strategy to great effect. But I agree: at school, time is very limited, and focusing on historical stories would greatly take away from the available time.
Moreover, I think it’s usually not possible anyway to present technical subjects as engaging historical dramas. For most subjects there simply was no such historical drama, nothing fitting an engaging narrative.
By the way, I think the issue you are pointing at is even worse for philosophy, and probably for other subject areas except math and science. Too often, popular introductions to philosophical topics (insofar as they even exist) get turned into a biography of philosopher X, which is almost never illuminating and is usually ignored by academics for good reason.
I’m saying that it is a serious accusation, whose consequences are far more impactful (e.g. a possible career end) than one’s feelings being hurt. So one should be extra careful before making the accusation. In the case of Cremieux, we know that he is in fact defending an empirical hypothesis, and he has provided an extensive amount of evidence and arguments in its favor (e.g. on his blog). This provides strong reason to think that the accusation of racism is not justified.
These are more or less controversial, but they range from not at all outside the Overton window (saying that factory farming is immoral) to a little outside it. They are by no means “taboo” in the sense that you would face serious social costs for expressing them. Saying “there are heritable statistical group differences in mean IQ” is on a completely different level: people have had their careers ended and their reputations ruined because of this. In comparison, saying that golf courses should be replaced with apartments carries almost zero personal risk.
My issue is with the specific takes Cremieux has and ways he acts, which are racist, and harmful, and bad.
I think it is defamatory, bad and counter to the spirit of rationalist discourse to accuse someone of racism when they have put forward an empirical hypothesis including evidence to back it up. The term “racist” has an implication of being merely based on an irrational prejudice, which is clearly not the case for Cremieux.
Consider that there are people with high P(doom) who don’t have any depression or anxiety. Emotions are not as strongly caused by our beliefs as we tend to assume. A therapist might be able to teach more productive thought patterns and behaviors, but they are unlikely to speak with competence on the object-level issue of AI doom.
Independently, I recommend trying to get a prescription for SSRIs. Most of them probably won’t help, but some might, and in my experience they tend not to have strong side effects, so trying them doesn’t hurt.
The only problem is that trying different SSRIs can take a very long time: usually you take one for several weeks, nothing happens, the doctor says “up the dosage”, weeks pass, still no effect, and the doctor might increase the dosage again. Only then may they switch you to a different SSRI, and the whole process begins anew. So persistence is required.
Both labor and compute have been scaled up over the last several years at big AI companies. My understanding is the scaling in compute was more important for algorithmic progress
That may be the case, but I suppose that in the last several years, compute has been scaled up more than labor. (Labor cost is entirely recurring, while compute cost is a one-time cost plus a recurring electricity cost, and progress in compute hardware, i.e. smaller integrated circuits, means that compute cost is decreasing over time.) Then obviously that doesn’t necessarily mean that an AI company with access to more FLOP/s of compute but fewer AI researchers has an advantage over a company with less FLOP/s of compute but more researchers.
In fact, I think in that sense labor is likely more important than compute for algorithmic progress. And that doesn’t seem so far from reality if you model the former as a US company with cheaper access to compute and the latter as a Chinese company with cheaper access to labor (due to lower wages).
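To make the cost asymmetry concrete, here is a toy comparison with entirely made-up salary and hardware numbers:

```python
# Toy numbers (hypothetical): labor cost recurs every year, while
# compute is mostly a one-time purchase plus electricity.
salary = 300_000     # $/year per researcher
gpu_price = 30_000   # $ one-time per GPU
power = 3_000        # $/year per GPU for electricity

for years in (1, 3, 5):
    labor_cost = salary * years               # grows linearly
    compute_cost = gpu_price + power * years  # dominated by the one-time buy
    print(years, labor_cost, compute_cost)

# After 5 years: $1,500,000 for the researcher vs $45,000 for the GPU,
# and hardware progress makes each new FLOP/s cheaper still.
```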
I think more posts should be formatted as nested lists. They are especially clear, since indentation ≈ elaboration, a structure that is not visible in continuous text.
Beliefs? World model?
One example: for decades, we ate large amounts of vegetable shortening and margarine made from a chemical process that creates trans fats. Only relatively recently did we become sure that trans fats cause heart disease, and restrictions on trans fat content were put in place in many countries. See this Wikipedia article.
A major problem seems to be that many engineered things we eat every day were invented before rigorous food testing was mandatory. Since retroactively banning all this stuff is not realistic, we have to live with the risk of finding more such cases in the future. Similar things hold for other products, like specialized substances in food packaging, etc.
There is a strong correlation between someone boycotting a person for saying X and X being outside the Overton window. So a causal link is likely. People rarely boycott people for expressing things they disagree with but which are inside the Overton window.
It is also possible that Bob is racist in the sense of successfully working to cause unjust ethnic conflict of some kind, but also Bob only says true things. Bob could selectively emphasize some true propositions and deemphasize others.
Sure, though the same is equally possible in the opposite direction: Alice shunning, shaming, or cancelling people for expressing or defending a taboo hypothesis, without her explicitly arguing that the hypothesis is false or disfavored by the evidence. In fact, this is usually much easier to do than the former, since defending a taboo hypothesis carries a large amount of social and career risk, while attacking a taboo hypothesis is virtually risk-free. Moreover, attacking a taboo hypothesis will likely earn you virtue-signalling points.
I think the following resembles a motte-and-bailey pattern. Bailey: “He is a racist, people may want to explain why racism is terrible.” Motte: “Oh, I just meant he argued for the empirical proposition that there are heritable statistical group differences in IQ.” Accusing someone of racism is a massively different matter from saying that he believes there are heritable group differences in IQ. You can check whether a term is value-neutral by whether the accused apply it to themselves; in this case they clearly do not. The term “racist” usually carries the implication or implicature of an attitude that is merely based on an irrational prejudice, not on an empirical hypothesis backed by a significant amount of statistical and other evidence.
I find that merely labelling someone who holds an empirical proposition (which is true or false) as “racist” (a highly derogatory term, not a value-neutral label) is defamatory. The vague hinting at alleged “rumours” here also seems to serve only to cast him in a bad light.
I find this attitude sad. I think his blog is currently clearly one of the best on the Internet. Even if you don’t agree with some of his positions, I take it to be a deeply anti-rational attitude to try to shun or shame people for saying things that are outside the Overton window, especially when he has clearly proven on his website that he has highly nuanced takes on various other, less controversial, topics. It reminds me of people trying to shame Scott Alexander for daring to step a little outside the Overton window himself.
In my opinion, true rationalists should exactly not react to such takes with “he said something taboo, let’s boycott things where he is involved”. If you disagree with him, a better approach would be to write a post about one of the articles on his website, concretely indicating and rebutting the things you think he gets wrong. Merely claiming “I do not think Cremieux meets this standard of care and compassion” is so vague an accusation that I don’t know whether you even disagree with anything he said. It sounds like low decoupling and tone policing. I wrote more on rationalist discourse involving taboos here.
It seems highly likely that the majority of humans prefer humanity not to be replaced by AI in the foreseeable future. So I don’t think there is that much variance here.
In a preference utilitarian calculus it does matter which possible preferences actually exist and which don’t. We don’t count hypothetical votes in an election either.
But concrete existence doesn’t follow from a consistent description; otherwise an ontological argument could imply the existence of God. And if someone doesn’t exist, their parts don’t exist either. A preference is a part of someone, like an arm or a leg, so it wouldn’t exist. A description could only tell us what preference someone would have if they existed. But that’s only a hypothetical preference, not one that exists.
This is a plausible assumption (@Steven Byrnes made a similar comment), yet the money pump argument apparently does compare what you have to what you might get in the future.