So, while I agree with the general point of your arguments about what the problem is (people failing to get that they stand to lose everything and gain nothing by pursuing dangerous AI capabilities), I don’t agree that trying really hard to be quiet will help much. I think the word is on the street. Foomers gonna foom. The race that really matters is alignment & safety vs capabilities. It doesn’t matter who wins the capabilities race. If any capabilities racer gets to foom before alignment gets to safe-enough, then we’re all doomed.
So I think we have to talk loudly and clearly among ourselves, without allowing ourselves to be quiet for fear of aiding capabilities.
I do think that having a better way of communicating, one which allowed for easier restriction of dangerous ideas, would be good. And I do think that if someone has literal code that anyone can run which advances capabilities, they shouldn’t share it. But I think we can’t afford to worry too much about leaking ideas. We don’t have enough control over which ideas will spread for the benefit of holding our tongues all the time to be worth the cost that that paranoid mindset would impose on alignment. The ideas will spread whether we spread them or not. We are a very small group of worriers amongst a very large group of capabilities enthusiasts.