This is my main concern here. My view is that the AI safety community has a budget of alarmist claims it can make before it becomes the boy who cried wolf. We need to spend our alarmist points wisely, and in general I think we could be setting a higher bar for the demonstrations of risk we share externally.
Want to emphasize this point.
Yea. In light of this, someone should start an AIS replication org 👀👀