Ram Potham

Karma: 54

My goal is to do work that counterfactually reduces AI risk from loss-of-control scenarios. My perspective is shaped by my experience as the founder of a VC-backed AI startup, which gave me a firsthand understanding of the urgent need for safety.

I have a B.S. in Artificial Intelligence from Carnegie Mellon and am currently a CBAI Fellow at MIT/Harvard. My primary project is ForecastLabs, where I'm building predictive maps of the AI landscape to improve strategic foresight.

I subscribe to Crocker's Rules (http://sl4.org/crocker.html) and am especially interested in hearing unsolicited constructive criticism. Inspired by Daniel Kokotajlo.

I Tested LLM Agents on Simple Safety Rules. They Failed in Surprising and Informative Ways.

Ram Potham · 25 Jun 2025 21:39 UTC
9 points
12 comments · 6 min read · LW link

AI Control Methods Literature Review

Ram Potham · 18 Apr 2025 21:15 UTC
10 points
1 comment · 9 min read · LW link

Ram Potham's Shortform

Ram Potham · 23 Mar 2025 15:08 UTC
1 point
14 comments · 1 min read · LW link