Should I be recommending to the AI companies that they go silent about that stuff?
I do think this isn’t actually equivalent. In a vacuum, if we have information on how competent things are, it seems good to share that information. But the question is “does the mechanism of gaining that information provide an easy feedback loop for companies to use to climb? (and how bad is that?)”, which isn’t a simple reversal.
Fair point that if companies already have the info, that’s different. Proper reversal test would be cases where the info isn’t widely known within companies already.
There’s also a matter of degree – the thing that seems bad to me is the easy repeatability. (I don’t know how much the AI village would exactly have this problem, but the scaled-up versions in your vignettes seem likely to veer into easy repeatability.)