Autonomous Systems @ UK AI Safety Institute (AISI)
DPhil AI Safety @ Oxford (Hertford college, CS dept, AIMS CDT)
Former senior data scientist and software engineer + SERI MATS
I’m particularly interested in sustainable collaboration and the long-term future of value. I’d love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy encountering new perspectives and growing my understanding of the world and the people in it. I also love to read—let me know your suggestions! In no particular order, here are some books I’ve enjoyed recently:
Ord—The Precipice
Pearl—The Book of Why
Bostrom—Superintelligence
McCall Smith—The No. 1 Ladies’ Detective Agency (and series)
Melville—Moby-Dick
Abelson & Sussman—Structure and Interpretation of Computer Programs
Stross—Accelerando
Simsion—The Rosie Project (and trilogy)
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
Hanabi (can’t recommend enough; try it out!)
Pandemic (ironic at time of writing...)
Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
Overcooked (my partner and I enjoy the foodie themes and frantic real-time coordination when playing this)
People who’ve got to know me only recently are sometimes surprised to learn that I’m a pretty handy trumpeter and hornist.
I saw that this event (recursive.to) is coming up soon. I won’t be attending, but there are some exciting speakers lined up!
One point I’m worried won’t get raised, but which is pretty crucial (according to me):
I know that there’s the whole spectre of superexponential AI progress/FOOM etc! I know! It’s scary, it’s a legitimate concern about recursive AI R&D. [1]
Even if that comes to pass, perhaps a far more crucial effect of automated AI progress is that whoever has the compute suddenly gains much more unilateral say over how it’s directed. If they’re reckless or worse, that’s a really big deal. Even if it doesn’t rapidly result in superintelligence, that concentration of influence is still deeply concerning, and could substantially set the stage for whatever subsequent tech transitions come up (perhaps including industrial automation).
For myself, I tentatively treat that as plausible but unlikely, because I think the main constraints on frontier AI are compute and domain data: agents that can devise and run experiments cheaply will add a boost, but probably won’t drastically change the rate of progress, which is already startling enough. Subsequent real-world impacts also look largely compute-constrained. A bigger feedback loop becomes possible in principle once high-tech manufacturing is automatable.