[Question] How can I use AI without increasing AI-risk?

I barely use AI tools, mainly because I have developed a kind of revulsion toward them: I associate them with the harms they cause and the risks they bring.

On the other hand, it increasingly seems that whoever doesn’t adopt AI tools will become far less productive and will be left behind.

And if the people who worry about the risk limit themselves while the people who don’t worry about it don’t, that creates a personal responsibility vortex which tilts the balance of power away from safety.

But perhaps it’s possible to get the best of both worlds: use AI tools, but in a responsible manner that doesn’t increase the harms and risks they pose.

How does one do that?
