Call me Oliver or Oly—I don’t mind which.
I’m particularly interested in sustainable collaboration and the long-term future of value. I’d love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I’m currently (2023) embarking on a PhD in AI in Oxford (Hertford College), and also spend time in (or in easy reach of) London. Until recently I was working as a senior data scientist and software engineer, and doing occasional AI alignment research with SERI.
I enjoy encountering new perspectives and growing my understanding of the world and the people in it. I also love to read—let me know your suggestions! In no particular order, here are some books I’ve enjoyed recently:
Ord—The Precipice
Pearl—The Book of Why
Bostrom—Superintelligence
McCall Smith—The No. 1 Ladies’ Detective Agency (and series)
Melville—Moby-Dick
Abelson & Sussman—Structure and Interpretation of Computer Programs
Stross—Accelerando
Simsion—The Rosie Project (and trilogy)
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
Hanabi (can’t recommend enough; try it out!)
Pandemic (ironic at time of writing...)
Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
Overcooked (my partner and I enjoy the foodie themes and frantic real-time coordination of playing this)
People who’ve got to know me only recently are sometimes surprised to learn that I’m a pretty handy trumpeter and hornist.
Quick dump.
Impressions
Having met Sam (only once) it’s clear he’s a slick operator and is willing to (at least) distort facts to serve a narrative
I tentatively think Sam is one of the ‘immortality or die trying’ crowd (which is maybe acceptable for yourself but not when gambling with everything else too)
Story from OpenAI leadership re racing has always struck me as suspicious rationalisation (esp re China)
‘You aren’t stuck in traffic/race; you are the traffic/race’
A few interactions with OpenAI folks weakly suggest that even the safety-conscious ones aren’t thinking that clearly about safety
I’ve been through org shakeups and it tends to take time to get things ticking over properly again, maybe months (big spread)
Assumptions
I admit I wasn’t expecting Sam to come back. If it sticks, this basically reverses my assessment!
I’ve been assuming that the apparent slavish loyalty of the mass of employees is mostly a fog of war illusion/artefact
the most suckered might follow, but I’m tentatively modelling those as the least competent
crux: there aren’t large capabilities insights that most employees know
looks like we won’t get a chance to find out (and nor will they!)
Other
Note that if the board gets fired, this is bad evidentially for the whole ‘corrigibly aim a profit corp’ attempt
it turns out the profit corp was in charge after all
it’s also bad precedent, which can make a difference for future such things
but it presumably doesn’t change much in terms of actual OpenAI actions
I buy the ‘bad for EA PR’ thing, and like John I’m unsure how impactful that actually is
I think I’m less dismissive of this than John
in particular it probably shortened the fuse on tribalism/politicisation catching up (irrespective of EA in particular)
but I’ve some faith that tribalism won’t entirely win the day and ideas can be discussed on their merit
New news
looks like Sam is coming back
looks like board is fired
presumably more to come
Anyway, I certainly wasn’t (and ain’t) sure what’s happening, but I tentatively expected that if Sam were replaced, it would a) remove a particular source of racingness, b) slow things through generic shake-up faff, and c) set a precedent for reorienting a profit corp for safety reasons. These were enough to make it look net good.
It looks like Sam is coming back, which isn’t a massive surprise, though it’s not what I expected. So OpenAI’s direction is maybe not much changed. In this branch, the EA PR thing maybe ends up dominating after all. Hard to say what the effect is on Sam’s personal brand; lots still to cash out, I expect. It could enhance his charisma, or he may have spent something that’s hard to get back.
Based on new news, I softly reverse my position [EDIT on the all-considered goodness-of-outcome, mainly for the PR and ‘shortened the fuse’ reasons].
Incidentally, I think the way things played out is more evidence for the underlying views that made it a good idea to (try to) oust Sam (both his direct actions, and the behaviour of the people around him, are evidence that he’s effective at manipulation and not especially safety-oriented). The weird groupthink (outward) from OpenAI employees is also a sign of quite damaged collective epistemics, which is sad (but informative/useful) evidence. But hey ho.