First of all, thank you! I was a little nervous since this was my first post, and I wanted to share what I've been working on.
I was aware of the LLM policy and did my best to write the post on my own; I used Claude only to edit and format some of the Markdown (this is only my second time writing a blog post in Markdown).
There were a couple of sentences whose wording I didn't like, and I was kind of lost on how to communicate the idea properly (English is my second language).
Next time I will ask for opinions or suggestions instead of defaulting to an LLM!
Note: Could you please point out an example of those default LLM phrases?
Some random phrases with lots of big-model-LLM-vibes:
If interpretability is about understanding AI, alignment is about steering it. This tackles the core challenge: how do we ensure that as AI systems become more powerful, they remain beneficial to humanity?
Imagine being handed a black box that makes life-or-death decisions, and your job is to figure out how it works. That’s interpretability research in a nutshell. It is essentially doing neuroscience on artificial minds, trying to understand not just what they do, but how and why they do it.
Research isn’t just about discovering new insights, it’s about creating knowledge and sharing it effectively. Strong communication and collaboration skills are essential for advancing AI safety.