I have reservations about ControlAI in particular, but I also endorse this as a general policy. There are organizations that would be more robustly trustworthy and more fine to link to, though I think that's actually very hard and rare, and I would still avoid it in general (the same way LW has a general policy of not frontpaging advertisements or job postings for specific organizations, independent of the organization)[1].
Also, I want to make sure I understand what you mean by “betraying people’s trust.” Is it something like: “If in the future ControlAI does something bad, then, from the POV of our viewers, they can’t trust what they watch on the channel anymore”?
Yeah, something like that. I don’t think “does something bad” is really the right category. It’s more something like: viewers will end up engaging with other media by ControlAI that does things like riling them up about deepfakes in a bad-faith manner (i.e. not actually thinking deepfakes are worth banning, but seeing a deepfake ban as helpful for slowing down AI progress, without being transparent about that). They will then have been taken advantage of, and that will make a lot of coordination around AI x-risk stuff harder.
We made an exception with our big fundraising post, because Lightcone disappearing does seem of general interest to everyone on the site, but it made me sad and I wish we could have avoided it.
I think there are organizations that themselves would be more likely to be robustly trustworthy and would be more fine to link to
I would be curious for your thoughts on which organizations you feel are robustly trustworthy.
Bonus points for a list that is kind of a weighted sum of “robustly trustworthy” and “having a meaningful impact on improving public/policymaker understanding”. (I’m adding this because I suspect it’s easier to maintain “robustly trustworthy” status if one simply chooses not to do much externally-focused comms, so the combination of “doing lots of useful comms/policy work” and “managing to stay precise/accurate/trustworthy” is particularly impressive.)