L’Ésswrong, c’est moi.
I agree in general, but think the force of this is weaker in this specific instance because NonLinear seems like a really small org. Most of the issues raised seem to be associated with in-person work, and I would be surprised if NonLinear ever went above 10 in-person employees. So at most this seems like a difference of one order of magnitude. Clearly the case is different for major corporations or orgs that directly interact with many more people.
I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this.
If this is an acceptable resolution, why didn’t you just let the problems with NonLinear ripple out into the social graph naturally?
If most firms have these clauses, one firm doesn’t, and most people don’t understand this, it seems possible that most people would end up with a less accurate impression of their relative merits than if all firms had been subject to equivalent evidence filtering effects.
In particular, it seems like this might matter for Wave if most of their hiring is from non-EA/LW people who are comparing them against random other normal companies.
Sorry, not for 2022.
I would typically aim for mid-December, in time for the American charitable giving season.
After having written an annual review of AI safety organisations for six years, I intend to stop this year. I’m sharing this in case someone else wants to take it up in my stead.
Reasons
It is very time consuming and I am busy.
I have a lot of conflicts of interest now.
The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
A large org that OP/FTX/etc. support, in which case funging is ~ total and you can probably just support any of them.
A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
A small org OP/FTX/etc. haven’t heard of, in which case I probably can’t help you either.
Part of my motivation was to ensure I stayed involved in the community, but this is not under threat now.
Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.
Larks’s Shortform
Thanks!
Alignment research: 30
Could you share some breakdown for what these people work on? Does this include things like the ‘anti-bias’ prompt engineering?
I would expect that to be the case for staff who genuinely support faculty. But many of them seem to be there to support students directly, rather than via faculty: the number of student mental health coordinators (and so on) you need doesn’t scale with the number of faculty you have. The largest increase in this category is ‘student services’, which seems clearly to be of this nature.
Thanks very much for writing this very diligent analysis.
I think you do a good job of analyzing the student/faculty ratio, but unless I have misread it seems like this is only about half the answer. ‘Support’ expenses rose by even more than ‘Instruction’, and the former category seems less linked to the diversity of courses offered than to things like the proliferation of Deans, student welfare initiatives, fancy buildings, etc.
Thanks, that’s very kind of you!
Is your argument about personnel overlap that one could do some sort of mixed-effects regression, with location as the primary independent variable and controls for individual productivity? If so, I’m somewhat skeptical about the tractability: the sample size is not that big, the data seems messy, and I’m not sure it would necessarily capture the fundamental thing we care about. I’d be interested in the results if you wanted to give it a go though!
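(For concreteness, a minimal sketch of what such a regression might look like, assuming a hypothetical researcher-level dataset; the file name and column names are purely illustrative.)

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per researcher-year, with some output measure,
# the organisation's location, and a researcher identifier.
df = pd.read_csv("researcher_outputs.csv")  # columns: researcher, location, output

# Mixed-effects model: location as the fixed effect of interest, with
# random intercepts per researcher to absorb individual productivity differences.
model = smf.mixedlm("output ~ C(location)", df, groups=df["researcher"])
result = model.fit()
print(result.summary())
```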
More importantly, I’m not sure this analysis would be that useful. Geography-based priors only really seem useful for factors we can’t directly observe; for an organization like CHAI our direct observations will almost entirely screen off this prior. The prior is only really important for factors where direct measurement is difficult, and hence we can’t update away from the prior, but for those we can’t do the regression. (Though I guess we could do the regression on known firms/researchers and extrapolate to new unknown orgs/individuals.)
The way this plays out here is that we’ve already spent the vast majority of the article examining the research productivity of the organizations; geography-based priors only matter insofar as you think they proxy for something else that is not captured in this.
As befits this being a somewhat secondary factor, it’s worth noting that I think (though I haven’t explicitly checked) in the past I have supported Bay Area organisations more than non-Bay-Area ones.
Thanks, fixed in both copies.
Thanks, fixed.
Should be fixed, thanks.
Changed in both copies as you request.
I prioritized posts by named organizations.
Diffractor does not list any institutional affiliations on his user page.
No institution I noticed listed the post/sequence on their ‘research’ page.
No institution I contacted mentioned the post/sequence.
No post in the sequence was that high in the list of 2021 Alignment Forum posts, sorted by karma.
Several other filtering methods also did not identify the post.
However, upon reflection it does seem to be MIRI-affiliated, so perhaps it should have been included; if I have time I may review and edit it in later.
This post seems like it was quite influential. This is basically a trivial review to allow the post to be voted on.