I filled in the survey! Like many people I didn’t have a ruler to use for the digit ratio question.
Whilst I really, really like the last picture, it seems a little odd to include it in the article.
Isn’t this meant to be a hard-nosed introduction for non-transhumanist/sci-fi people? And doesn’t the picture act against that by being slightly sci-fi and weird?
Possible consideration: meta-charities like GWWC and 80k direct donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest-value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about were small enough, or the meta-charity didn’t multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).
A bigger possible problem would arise if I took considerations like the poor meat-eater problem seriously. In that case, donating to e.g. 80k could cause a lot of harm even though it would move a lot of money to animal welfare charities, because it would also move so much to poverty relief, which on that view is a bad thing. There are probably a few other situations like this around.
Do you have figures on the return to donations (or volunteer time) for 80,000 Hours? I.e. is it similar to GWWC’s $138 of donations moved per $1 of time invested? It would be helpful to know, so I could calculate how much I would expect to go to the various causes.
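For concreteness, here’s the sort of back-of-the-envelope calculation I mean (a minimal sketch: the $138:1 figure is GWWC’s, while the cause-allocation fractions are numbers I’ve made up purely for illustration):

```python
# Rough sketch: how much money reaches each cause per $1 given to a
# meta-charity, versus giving that $1 directly to my preferred cause.

multiplier = 138  # GWWC's claimed $138 moved per $1 of time invested

# Hypothetical split of the money moved (made-up fractions, for illustration)
allocation = {
    "global poverty": 0.70,
    "animal welfare": 0.25,
    "x-risk research": 0.05,
}

for cause, fraction in allocation.items():
    print(f"${multiplier * fraction:.2f} to {cause} per $1 invested")

# Giving via the meta-charity beats giving $1 directly to my preferred
# cause iff multiplier * fraction > 1. Here 138 * 0.05 = 6.9 > 1, so even
# a small x-risk fraction would beat direct giving, unless the multiplier
# were much lower, or I counted the poverty money as a harm (per the
# meat-eater worry above).
```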
I found it really helpful to have a list of places where Eliezer and Paul agree. It’s interesting to see that there is a lot of similarity on big picture stuff like AI being extremely dangerous.
OpenPhil gave Carl Shulman $5m to re-grant
I didn’t realise this was happening. Is there somewhere we can read about grants from this fund when/if they occur?
This is great! I hope there’s a big response.
It seems likely you’re going to get skewed answers for the IQ question. Mostly it’s the really intelligent and the below-average who get (professional) IQ tests; average people seem less likely to get them.
I predict a high average IQ but a low response rate on the IQ question, which will give bad results. Can you tell us how many people answer that question this time? (The number of responses wasn’t recorded for the previous survey.)
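A toy simulation of the worry (everything here is assumed: the population mean and the response model are made up, chosen only to show how tail-heavy testing skews the reported average):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population: already above-average, IQ ~ N(115, 15).
iq = rng.normal(115, 15, 100_000)

# Made-up response model: people far from 100 are much more likely
# to have taken a professional test, and so to report a score.
p_report = np.where(np.abs(iq - 100) > 20, 0.50, 0.05)
reported = rng.random(iq.size) < p_report

print(f"true mean:     {iq.mean():.1f}")
print(f"reported mean: {iq[reported].mean():.1f}")  # noticeably higher
print(f"response rate: {reported.mean():.1%}")      # and low, as predicted
```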
Hi Less Wrong!
Decided to register after seeing this comment and wanting to give a free $10 to a cause I value highly.
I got pulled into Less Wrong by being interested in transhumanist stuff for a few years; I finally decided to read here after realizing that this was the best place to discuss this sort of stuff and actually end up being right, as opposed to just making wild predictions with absolutely no merit. I’m an 18-year-old male living in the UK. I don’t have a background in maths or computer science as a lot of people here do (though I’m thinking of learning them). I’m just finishing up at school and then going on to do a philosophy degree (hopefully, though I’m scared of it making me believe crap things).
I’ve found the most useful LW stuff to be along the lines of instrumental rationality (the more recent stuff). Lukeprog’s sequence on winning at life is great! My favorite LW-related posts have been:
The Cynic’s Conundrum: Because I used to think idealistically about my own thought processes and cynically about other people’s. In essence I fell into comfortable cynicism.
Tsuyoku Naritai! (I Want To Become Stronger): Because this was just really galvanizing and made me want to do better, much more than any self-help stuff ever did!
A Suite of Pragmatic Considerations in Favor of Niceness: Fantastic, as I tended (and still tend) to be mean for no real reason, and this post gave me a lot of motivation to stop. I’ve actually started to have niceness as a terminal value now, which is a tad odd.
So anyway, I’m happy to have registered and I hope to get stronger and have fun here!
Slightly off topic, but I’m very interested in the “policy impact” that FHI has had—I had heard nothing about it before and assumed that it wasn’t having very much. Do you have more information on that? If it were significant, it would increase the odds that giving to FHI was a great option.
FHI Essay Competition
Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)
“Hell Futures—When is it better to be extinct?” (not entirely serious)
What initiatives is the Singularity Institute taking, or planning to take, to increase its funding to whatever the optimal level of funding is?
I was the one who asked that question!
I was slightly disappointed by his answer—surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.
I guess what I’m really thinking is that it’s pretty unlikely that the two charities are equally optimal.
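Spelling my reasoning out (my gloss, with made-up numbers): for a small donor the marginal value per dollar at each charity is effectively constant, so the optimal strategy is a pure argmax, and splitting only makes sense on the knife edge of an exact tie, or at a discontinuity like a charity folding without threshold funding.

```python
# Toy version: with (locally) constant value-per-dollar estimates,
# the whole budget goes to the single best option (made-up numbers).
value_per_dollar = {"charity A": 3.2, "charity B": 3.1}

best = max(value_per_dollar, key=value_per_dollar.get)
print(f"Give the whole budget to {best}.")

# An exact tie (3.2 == 3.2) is the only case where splitting does
# equally well, and exact ties are pretty unlikely, which was my point.
# The exception is a discontinuity: e.g. a charity that folds unless it
# hits a funding threshold, hence his "about to go under" advice.
```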
Would this approach have any advantages over brain uploading? I would assume brain uploading to be much easier than running a realistic evolution simulation, and we would have to worry less about alignment.
I thought this article was for SL0 people—that would give it the widest audience possible, which I thought was the point?
If it’s aimed at the SL0s, then we’d be wanting to go for an SL1 image.
Your right action is most excellent!
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks to SL0s in such a short space, maybe without mentioning exotic technologies? And would they change their charitable behavior?
I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
I’m guessing they mean a university affiliated person doing a formal philosophy degree of some kind.
I would find this more useful if you spelled out a bit more about your scoring method. You say:
They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy.
Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)
And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)
a dog which is as close to being a wolf as one can get without sacrificing any of those essential characteristics which define a dog as such
Why do you think a dog that is close to a wolf is objectively better than dogs which are further away?
A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?