“An ‘open source bad’ mentality becomes more risky.”

I agree with this, actually.
We need to dig deeper into what open-source AI actually looks like in practice. If open-source AI naturally tilts defensive (including counter-offensive capabilities), then yes, both of your accounts make sense. But looking at the current landscape, I see something different: many models are actively de-aligned (“uncensored”) by the community, and there’s a chance the next big GPT moment is some brilliant insight that doesn’t need massive compute and can be run from a small cloud.