I totally agree with the sentiment here!
As a researcher, founder, and early employee of multiple non-profits in this space, I think it’s critical to start building out the infrastructure to leverage talent and enable safety work. Right now, there isn’t much support for people making their own opportunities, not to mention that doing so necessarily requires a more stable financial situation than many individuals have.
One of my core goals in starting Kairos.fm was to help people who want to start their own projects (e.g. podcasts) and to amplify the reach of others’ work.
While I’m not solely focused on founders/field builders, I have had quite a few on one of my shows, and I’d be incredibly excited to have more.
For any founders reading this, I would love to have you as a guest on the Into AI Safety podcast to discuss your journey and work. If you’re interested, reach out to me at listen@kairos.fm.
I have heard that this has been a talking point for you for a while now; I am curious, do you think there are any aspects of this problem that I missed?
Thanks for responding! I really appreciate engagement on this, and your input.
I would disagree that Mistral’s models qualify as Open Source under the current OSAID. Although the training data itself is not required for a model to be considered Open Source, a certain level of documentation is [source]. Mistral’s models do not meet these standards; however, if they did, I would happily place them in the Open Source column of the diagram (at least, if I were to update this article and/or make a new post).
Glad to hear that my post is resonating with some people!
I definitely understand the difficulty of allocating time while also working a full-time job. As I gather resources and connections, I will make sure to spread awareness of them.
One thing to note, though, is that I found the more passive approach of waiting for an opportunity to come along much less effective than forging opportunities myself (even though I was spending a significant amount of time looking for those opportunities).
A specific, more detailed recommendation for how to do this will depend heavily on your level of experience with ML and your time availability. My more general recommendation would be to apply for a cohort of BlueDot Impact’s AI Governance or AI Safety Fundamentals courses (I believe the application for the early 2024 session of the AI Safety Fundamentals course is currently open). Taking a course like this provides opportunities to build connections, which can be leveraged into independent projects/efforts. I found that the AI Governance session was very doable alongside a full-time position (when I started it, I was still working full time at my current job). I cannot say the same definitively for the AI Safety Fundamentals course, as I did not complete it through a formal session (I just did the readings independently), but it seems to be a similar time commitment. I think taking the course with a cohort would be valuable even for those who have already completed the readings independently.
Apart’s Perspective: Why this Project
Jacob Haimes, Apart’s Research Programs Lead!
When Callum let me know about this post, I thought it would be a great opportunity to give a little insight into Apart’s thought process for taking on this project.
Callum’s output from the Studio was a proposal to investigate Schelling coordination in various ways, but the Apart team wasn’t sure that doing so was necessarily valuable given contemporary work on LLMs and steganography. That being said, Callum had delivered a decent proposal and was both dedicated and communicative. In addition, we wanted to investigate whether the Apart Fellowship could be easily modified to support a literature-review-style paper, as opposed to our typical process, which is oriented towards empirical work.
We opted to accept Callum’s project with a reduced scope: a systematic literature review on the concept of Schelling coordination. This would give us a test case for whether a literature review is a good fit for the Apart Fellowship, more insight into the potential threat vectors that leverage Schelling coordination, and a clearer picture of the current state of this research.
While we missed that this area of research is too nascent for a meaningful systematic literature review, working with Callum was a pleasure, and conducting this project gave us confidence that, for the right literature review, the Apart Fellowship can work well.