Jessica Taylor. CS undergrad and Master’s at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
We might disagree about the value of thinking about “we are all dead” timelines. To my mind, forecasting should be primarily descriptive, not normative; reality keeps going after we are all dead, and having realistic models of that is probably a useful input for understanding what our degrees of freedom are. (I think people readily accept this in e.g. biology, where one can think about what happens to life after human extinction, or physics, where “all humans are dead” isn’t really a relevant category that changes how physics works.)
Of course, I’m not implying it’s useful for alignment to “see that the AI has already eaten the sun”; rather, the point is to forecast future timelines by defining thresholds and thinking about when they’re likely to be crossed and how they relate to other things.
(See this post, section “Models of ASI should start with realism”)