[Question] AI for Agent Foundations etc.?

What’s the state of using current AIs for agent foundations research, or other theoretical AI safety work?

I’d be pretty surprised if no one has thought to do this, so I’m guessing this is just a matter of me catching up on what’s going on.

I’m thinking, for instance, of how I saw Terence Tao talking about using Archimedes to advance math. I’d think current AIs could explain their theoretical advances in ways that convey the key insights to skilled humans, so they could act as insight-searchers rather than just theorem-proving machines.

I get the sense the current thinking is that this kind of work can’t move fast enough given short timelines, and that raising the alarm in public and in politics is more important right now. At first brush that basically looks right to me.

But a straight AI-assisted attempt at theoretical alignment work still seems worth trying, and easy enough that I’d imagine someone is already working on it. I just haven’t heard of anyone doing so.

So, what’s the current status of this kind of work?
