I am a PhD student in computer science at the University of Waterloo.
My current research is related to Kolmogorov complexity. Sometimes I build robots, professionally or otherwise.
See my personal website colewyeth.com for an overview of my interests.
Cruxes connected to whether we get human-level A.I. soon:
Do LLM agents become useful in the short term?
How much better is GPT-5 than GPT-4?
Does this generation of robotics startups (e.g. Figure) succeed?
Cruxes connected to whether takeoff is fast:
Is A.I. significantly better at self-improvement while maintaining the alignment of its future versions than we are at aligning A.I.?
Cruxes that might change my mind about mechanistic interpretability being doomed:
Can a tool which successfully explains cognitive behavior in GPT-N do the same for GPT-N+1 without significant work?
Last-ditch crux:
In high-dimensional spaces, do agents with radically different utility functions actually stomp on each other, or do they trade? When the intelligence of one agent scales far beyond the other's, does trade become stomping, or do both simply diminish?