Makes you wonder if there’s some 4D chess going on here. Occam’s razor suggests otherwise, though. And if true, this seems wholly irresponsible, given that AI risk skeptics can just point to this situation as an example that “even if we do no safety testing/guardrails, it’s not that bad! It just offends a few people.” It seems hard to say which direction this will impact SB 53, for example.
My only solace would be if someone actually does something shady (but not too harmful) with Grok and it becomes a scandal.
It is actually an interesting thing to observe: whether such a misaligned, near-human-level model loose in the wild will lead to real-world problems.