Experience Report: ML4Good Bootcamp Singapore, Sept 2025

Introduction

This is my personal report on the recently held Machine Learning for Good (ML4Good) bootcamp in Singapore, September 20-28, 2025.

ML4Good runs intensive in-person bootcamps to upskill people in AI safety. The bootcamps have been held in various parts of the world (mainly Europe and Latin America). ML4Good Singapore was, to the best of my knowledge, the first ML4Good bootcamp in Asia. You can find more information on their page.

There have been similar posts in the LW community as well (for example, see this and this).

Curriculum and Schedule

The bootcamp covered a wide range of topics related to AI safety, including:

  • General ML technical prerequisites: optimizers and hyperparameters, transformers, agents.

  • Topics that are relevant to AI safety, such as capabilities, forecasting, risks, and strategies.

  • AI safety technical topics, such as interpretability, RLHF, and chain-of-thought (CoT). Most of these included hands-on workshops in addition to lectures.

  • Governance-related topics, e.g. reading the latest AI bills.

Our main book was the AI Safety Atlas, written by CeSIA. The first three chapters were prerequisites for the bootcamp, and my impression is that the course was organized around those chapters.

A typical day started at 9 AM and formally ended at 7:30 PM, and usually consisted of a mix of lectures, hands-on technical sessions, and discussion-style workshops. The mix varied; for example, our first day was mostly filled with hands-on sessions, whereas on other days lectures and discussions were more common.

Besides the lecture-style sessions, we also had one-on-one sessions between participants as well as career planning. For the one-on-one session, each participant was assigned a partner and given time to talk through their career plans and exchange feedback. Career planning was run by the instructors, who helped participants solidify their career plans and provided feedback as well.

The last major component of the bootcamp was the final project. All participants were given roughly two days (10 hours) to work on an AI safety topic of their choice. Many participants worked together to set up accountability systems for their current or future AI safety endeavors (e.g. fellowships, field building), while the rest did a mix of governance and technical work on quite diverse topics: eval awareness, control, and red-teaming, to name a few. I did my project with another participant on the interpretability of speech-augmented models.

Instructors

We had the following wonderful people as our instructors:

  • Jonathan Clayborough from the European Network for AI Safety.

  • Julian Schulz, now at Meridian Research.

  • Jonathan Ng, who has done some amazing evals work, such as the Machiavelli benchmark and 3CB.

Valerie Pang from the Singapore AI Safety Hub (SASH) acted as the main coordinator of the event (and special thanks to Jia Yang for letting us use her place as the venue on the second day!).

We were also grateful to have Tekla Emborg (Future of Life Institute, governance) and Mike Zijdel (Catalyze, startup incubation) as external speakers.

Participants

There were 14 participants in the program. From the ASEAN countries, we had people from Indonesia, the Philippines, Malaysia, and Singapore; there were also participants from Taiwan, China, and Japan. Backgrounds were somewhat diverse:

  • Most participants had CS or STEM-related education or professional experience.

  • Participants had varying levels of exposure to AI safety. Notably, some had already worked in AI safety (e.g. completed fellowships, produced AI safety or governance papers), while others had only just started exploring the field.

The Good Things

  • I don’t have a strong opinion for or against the material, as I am quite new to AI safety. It seemed to me that we touched on all the major components of AI safety; understandably, we didn’t go too deep into each area due to time limits.

  • The general vibe among the people was nothing but positive for me! The TAs were very helpful and encouraging. Besides encouraging people to have fun, they also encouraged us to work together and help each other (e.g. pair programming), which I think helped build a sense of camaraderie. People also got along quite well: participants shared lunches and did fun activities together at night (e.g. karaoke, origami folding, board games). We also celebrated Petrov Day by dividing into two teams and simulating a nuclear war game.

  • Having the event in the ASEAN region benefited me a lot, since it significantly reduced the cost of participation: attending the UK or France bootcamp would have cost me around $1,000, whereas Singapore brought that down to around $300.

The Good Things That Might Be Improved

  • There were workshops on scalable oversight and governance; however, these two topics were not mentioned in the participant guide’s prerequisite materials, so I came unprepared. I felt I would have gotten more out of these two workshops if I had read the materials before the camp, even though we were given time to read them during the camp.

Final Remarks

Before the camp, I wasn’t really confident about how, or whether, I should go into AI safety, but it provided enough of a nudge for me to start spending more time on it. One major thing I learned was that I could probably start in AI safety very early, without needing an advanced background (an MSc, a PhD, or expertise in some AI safety topic). There seem to be a lot of good introductory projects out there, and even I can contribute to something non-technical, such as field building, with good potential impact.

I mentioned the vibe a lot because, personally, the people were a major net-positive contributor to my experience! I would probably lean less towards working on AI safety if I had found the community unwelcoming, but my experience has been the opposite so far.

I am very happy to recommend this camp to anyone interested in AI safety, and I would be glad to see more such initiatives, especially in the region.

Notes: Special thanks to all the ML4Good Singapore organizers and participants who made the event possible, and hence this post. Special thanks also to Jia Yang, Harry, Valerie, and Sasha for their feedback on this post.
