Some questions for the people at 80,000 Hours


I originally wrote this for the EA Forum and later realised the two posts could be linked. Anyway, I thought this might be of interest to people on LW (I don't expect the folks at 80k to answer here as well as on the EA Forum).

---

1. By recommending that people work at the big AI labs (whose explicit aim is to create AGI), do you think 80k creates a positive Halo Effect for those labs' brands? 80k is known as an organisation whose mission is to make the world a better place, so when it recommends people invest their careers at a lab, those positive brand associations get passed on to the lab. (This is how most brand partnerships work, so this point shouldn't be a crux: 80k has run partnerships in the past for its own marketing purposes.)

Put concretely, the impact is that people (job seekers, investors, users of LLMs) can look at the lab in question and assume it is not doing anything bad by trying to create AGI quickly.

2. If you think the answer to #1 is yes (it does create a positive Halo Effect), do you believe that this Halo Effect has a cost? I.e. is it bad that you're improving the labs' brand perception among job seekers, investors, and users of LLMs? (TBH, I don't think I've ever actually seen or heard anyone at 80k point at a big lab and say "um, I don't think you should make that thing that might kill everyone", so maybe this is a non-starter?)

3. If you think there is a cost, do you believe it is outweighed by the benefit of having safety-minded EA/Rationalist folk inside the big labs? This is a crux I find hard to wrap my head around, but it's possible that everything boils down to this question. My personal take is that if you're unsure about this, you shouldn't be creating the Halo Effect in the first place.
