“Share the dual-use stuff only with specific people who are known to properly understand the AGI risk, can avoid babbling about it in public, and would be useful contributors” seems like the straightforward approach here.
Like, groups of people are able to maintain commercial secrets. This is much like that, except with somewhat higher stakes.
I mean, AI people are notoriously bad at doing these kinds of things xD I would expect the people running OpenAI or Anthropic to have said similar things to this (when their orgs were just starting out). So I hope you can see why I wanted to ask this. None of this is to cast any doubt on your ability or motives, just noting the minefield that is unfortunately next to the park where we’re having this conversation.
For what it’s worth, I’m painfully aware of all the skulls lying around, yep.