My goal is to do work that counterfactually reduces AI risk from loss-of-control scenarios. My perspective is shaped by my experience as the founder of a VC-backed AI startup, which gave me a firsthand understanding of the urgent need for safety.
I have a B.S. in Artificial Intelligence from Carnegie Mellon and am currently a CBAI Fellow at MIT/Harvard. My primary project is ForecastLabs, where I’m building predictive maps of the AI landscape to improve strategic foresight.
I subscribe to Crocker’s Rules (http://sl4.org/crocker.html) and especially welcome unsolicited constructive criticism. Inspired by Daniel Kokotajlo.
Could you discuss why you think these areas are important, and the theories of change behind them? Though I know the deadline has passed, I’m keen to build a project on monitoring AI behavior in production and tracking incidents, so I’m curious what research Coefficient Giving has done that suggests these areas have gaps.