Hackathon and Staying Up-to-Date in AI

Listen to the first episode of a new sub-series, which follows my team’s process during the Evals Hackathon hosted by AlignmentJamz in November 2023. Our submission, titled “Detecting Implicit Gaming through Retrospective Evaluation Sets,” received first place.

I also forgot to post about last week’s episode, so I am mentioning it here. That episode, “Staying Up-to-Date in AI,” covers the methods and tools I use to keep up with the breakneck pace of AI developments.

If you enjoy the kind of content I am publishing and/or support my goals with the podcast, please share it with anyone you think might like it as well.

To reiterate my previous statements: I want the Into AI Safety podcast to be a resource for individuals who are interested in getting involved but are having a difficult time taking the next steps. If you have any advice, feedback, or ideas that could help in that endeavor, please reach out!


In addition to the Into AI Safety podcast website, you can find the podcast on (I believe) all podcast listening platforms (e.g., Spotify, Apple Podcasts, Pocket Casts). If I have missed a platform, please let me know.