For example, I think AI safety people often have somewhat arbitrary but strongly held takes about things that would be very bad to do, and IMO it has sometimes been good that Anthropic leadership hasn’t been very pressured by their staff.
Specific examples would be appreciated.
Do you mean things like opposition to open-source? Opposition to pushing-the-SOTA model releases?
(I see that you offered the second as an example to Tsvi.)