What do you do to keep up with AI safety / ML / theoretical CS research, to the extent that you do, and how much time do you spend on this? For example, do you browse arXiv, Twitter, …?
A broader question I’d also be interested in (if you’re willing to share) is how you allocate your working hours in general.
According to your internal model of the problem of AI safety, what are the main axes of disagreement among researchers?