Donated $500.
David Althaus
I’m thinking about writing a more comprehensive guide than Skatche’s Rationalist’s Guide to Psychoactive Drugs.
Nobody is smart enough to be wrong all the time.
Ken Wilber
Doubt is not a pleasant condition, but certainty is absurd.
Voltaire
Took the survey.
It is easy to be certain... One has only to be sufficiently vague.
Charles S. Peirce
Meta-Note: This is great! We should make this into a monthly or bi-monthly recurring thread like “Rationality Quotes” or “What are you working on?”.
Back to the topic: I overestimated the efficacy of my anti-depressant and now believe that it was mainly placebo.
I just read the actual study.
These guys list on page 14 the variance ratios (i.e. male variance divided by female variance) of 31 countries on 5 different tests (1 PISA, 4 TIMSS). That gives 31 × 5 = 155 possible measurements; 28 test results are missing, so there are 127 measurements in total.
I don’t have SPSS but the following should be illuminating enough:
On 7 tests the female variance is higher than the male variance; on 6 tests they are equal; but on 114 tests the male variance is higher than the female one.
In the Netherlands and in Morocco the average variance ratio is around 1. In Indonesia female variance seems to be greater than male variance. But in 28 other countries male variance is on average higher than female variance.
It’s true, however, that the average score of men in some countries is lower than that of women, so maybe the greater male variation is due to the very low scores of some boys. It’s important to note, though, that most participants were younger than 15, and girls tend to score higher on IQ tests than boys when they are young, whereas this trend reverses as they get older. (Oh, and if you look at page 16 you’ll see that they only list 16 countries out of 86. Is there some cherry-picking going on?)
Maybe I’m missing something huge, but these results don’t seem that promising. Not to mention other studies which showed greater male variance, publication bias, and so on :-)
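The tallies above can be sanity-checked with a few lines of arithmetic (a minimal consistency check on the counts quoted from the paper, not a reanalysis):

```python
# Counts as described above: 31 countries, 5 tests each (1 PISA, 4 TIMSS),
# with 28 test results missing from the paper's page-14 table.
countries, tests, missing = 31, 5, 28
total = countries * tests - missing
print(total)  # 155 - 28 = 127

# Tally of variance ratios: female higher, equal, male higher.
female_higher, equal, male_higher = 7, 6, 114
assert female_higher + equal + male_higher == total
print(round(male_higher / total, 2))  # 0.9 -> ~90% of measurements show greater male variance
```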
First of all, I don’t think that morality is objective as I’m a proponent of moral anti-realism. That means that I don’t believe that there is such a thing as “objective utility” that you could objectively measure.
But, to use your terms, I also believe that there currently exists more “disutility” than “utility” in the world. I’d formulate it this way: I think there exists more suffering (disutility, disvalue, etc.) than happiness (utility, value, etc.) in the world today. Note that this is just a consequence of my own personal values, in particular my “exchange rate” or “trade ratio” between happiness and suffering: I’m (roughly) utilitarian, but I give more weight to suffering than to happiness. But this doesn’t mean that there is “objectively” more disutility than utility in the world.
For example, I would not push a button that creates a city with 1000 extremely happy beings but where 10 people are being tortured. But a utilitarian with a more positive-leaning trade ratio might want to push the button because the happiness of the 1000 outweighs the suffering of the 10. Although we might disagree, neither of us is “wrong”.
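The button example can be sketched in a few lines to show how the trade ratio alone flips the verdict (the weights are hypothetical illustrative numbers, not anything claimed above):

```python
# Toy model of the button example: 1000 extremely happy beings vs. 10
# people being tortured. `suffering_weight` says how many units of
# happiness one unit of suffering offsets under a given trade ratio.
def button_value(happy, tortured, suffering_weight):
    return happy * 1.0 - tortured * suffering_weight

# A suffering-focused trade ratio (hypothetical weight of 200): don't push.
print(button_value(1000, 10, 200))  # -1000.0

# A symmetric (classical utilitarian) trade ratio: push.
print(button_value(1000, 10, 1))    # 990.0
```

The empirical inputs are identical in both calls; only the normative parameter differs, which is the point of the paragraph above.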
Similar reasoning applies with regards to the “expected value” of the future. Or to use a less confusing term: The ratio of expected happiness to suffering of the future. Crucially, this question has both an empirical as well as a normative component. The expected value (EV) of the future for a person will both depend on her normative trade ratio as well as her empirical beliefs about the future.
I want to emphasize, however, that even if one thinks that the EV of the future is negative, one should not try to destroy the world! There are many reasons for this, so I’ll just pick a few: First, it’s extremely unlikely that you would succeed, and you would probably only cause more suffering in the process. Second, planetary biocide is one of the worst possible things one can do according to many value systems. I think it’s extremely important to be nice to other value systems and promote cooperation among their proponents. If you attempted to implement planetary biocide, you would cause distrust, probably violence, and the breakdown of cooperation, which would only increase future suffering, hurting everyone in expectation.
Below, I list several more relevant essays that expand on what I’ve written here and which I can highly recommend. Most of these link to the Foundational Research Institute (FRI) which is not a coincidence as FRI’s mission is to identify cooperative and effective strategies to reduce future suffering.
I. Regarding the empirical side of future suffering
II. On the benefits of cooperation
III. On ethics
Here are some policy recommendations which would not be very PC:
(Disclaimer: I didn’t say I endorse any of these views! See this comment, please.)
Only folks with an IQ above 100 should be allowed to vote.
People with high IQ should get money for having more children.
The male variance in IQ is greater than the female variance, which explains why most Nobel Prize winners, CEOs, etc. are men. Therefore we should stop pointless countermeasures. (Men are also more ambitious, aggressive and psychopathic, which also seems relevant.)
Africans have (on average) low IQ scores and low conscientiousness. Therefore international aid is hopeless and we should stop it.
Some of these examples are rather mindkilling, of course.
Eliezer Yudkowsky currently has 2486 karma.
Ah, the good old days!
Come on, Gwern deserves more than a favorable comparison to the FDA.
I know several people who have more credibility than the “FDA and the entire decision-making apparatus of the United States government”, at least when it comes to drugs. Not because I know so many cool folks, but because drug regulation is a paramount example of government irrationality.
Great post. See also this related post by Eliezer.
I had a similar experience with a friend of mine who admired Thomas Mann. And, well, science was not his strong point. He: “I just finished The Magic Mountain and I believe Thomas Mann anticipated Einstein’s theory of relativity!” Me: “Uh, what...” He: “On one page he writes about how the watch hands divide the space of the clock-face into several areas! He anticipated that space and time are related!!”
And just in case: Mann started writing “The Magic Mountain” in 1912.
Talking of Russians who saved the world, is October 27 the Vasili Arkhipov Day?
Hi everybody,
I’m 22, male, a student, and from Germany. I’ve always tried to “perceive whatever holds the world together in its inmost folds”, to know the truth, to grok what is going on. Truth is the goal, and rationality the art of achieving it. So for this reason alone LessWrong is quite appealing.
But in addition to that, Yudkowsky and Bostrom convinced me that existential risks, transhumanism, the singularity, etc. are probably the most important issues of our time.
Furthermore, this is the first community I’ve ever encountered in my life that makes me feel rather dumb. (I can hardly follow the discussions about Solomonoff induction, Everett branches and so on, lol, and I thought I was good at math because I was the best in high school :-)) But nonetheless, being stupid is sometimes such a liberating feeling!
To spice this post with more gooey self-disclosure: I was sort of a “mild” socialist for quite some time (yeah, I know. But there are some intelligent folks who were socialists, or sort-of-socialists, like Einstein and Russell). Now I’m more pro-capitalism, libertarian, but some serious doubts remain. I’m really interested in neuropsychological research on mystic experiences. (I think I share this personal idiosyncrasy with Sam Harris...) I think many rational atheists (myself included, before I encountered LSD) underestimate the preposterous and life-transforming power of mystic experiences, which can convert the most educated rationalist into a gibbering crackpot. It makes you think you really “know” that there is some divine and mysterious force at the deepest level of the universe, and the quest for understanding involves reading many, many absurd and completely useless books, and this endeavor may well destroy your whole life.
I would say that’s a typical case of an antiprediction. Humans differ in all sorts of things (IQ, height, sexual orientation), so why shouldn’t they differ in relationship-preferences?
For example, he must have thought that writing HPMoR was a good use of time, and therefore must have (correctly) predicted that it would be quite popular if he was to write it.
Isn’t the simpler explanation that he just enjoys writing fiction?
I think mostly I expect us to continue to overestimate the sanity and integrity of most of the world, then get fucked over like we got fucked over by OpenAI or FTX. I think there are ways of relating to the rest of the world that would be much better, but a naive update in the direction of “just trust other people more” would likely make things worse.
[...]
Again, I think the question you are raising is crucial, and I have giant warning flags about a bunch of the things that are going on (the foremost one is that it sure really is a time to reflect on your relation to the world when a very prominent member of your community just stole 8 billion dollars of innocent people’s money and committed the largest fraud since Enron), [...]

I very much agree with the sentiment of the second paragraph.
Regarding the first paragraph, my own take is that (many) EAs and rationalists might be wise to trust themselves and their allies less.[1]
The main update of the FTX fiasco (and other events I’ll describe later) I’d make is that perhaps many/most EAs and rationalists aren’t very good at character judgment. They probably trust other EAs and rationalists too readily because they are part of the same tribe and automatically assume that agreeing with noble ideas in the abstract translates to noble behavior in practice.
(To clarify, you personally seem to be good at character judgment, so this message is not directed at you. I base that mostly on the comments of yours I read about the SBF situation; big kudos for that, btw!)
It seems like a non-trivial fraction of people that joined the EA and rationalist community very early turned out to be of questionable character, and this wasn’t noticed for years by large parts of the community. I have in mind people like Anissimov, Helm, Dill, SBF, Geoff Anders, arguably Vassar—these are just the known ones. Most of them were not just part of the movement, they were allowed to occupy highly influential positions. I don’t know what the base rate for such people is in other movements—it’s plausibly even higher—but as a whole our movements don’t seem to be fantastic at spotting sketchy people quickly. (FWIW, my personal experiences with a sketchy, early EA (not on the above list) inspired this post.)
My own takeaway is that perhaps EAs and rationalists aren’t that much better in terms of integrity than the outside world and—given that we probably have to coordinate with some people to get anything done—I’m now more willing to coordinate with “outsiders” than I was, say, eight years ago.
[1] Though I would be hesitant to spread this message; the kinds of people who should trust themselves and their character judgment less are more likely the ones who will not take this message to heart, and vice versa.
Razib Khan