[Question] What specific thing would you do with AI Alignment Research Assistant GPT?

Why I think this question is important: I asked myself, “What would my AGI timelines be if some AI could summarize the Yudkowsky-Ngo debates on alignment difficulty in a way that both participants endorse, such that everyone who reads the summary understands both positions, and the participants can verify that understanding in conversation?” My semi-intuitive answer: “Five years tops, and two years as my modal prediction.” And a Debate Summarizer is not even a very useful Alignment Assistant; it can’t boost research 10x. If someone told me that an Alignment Assistant had suggested an idea that sparked optimism at MIRI, I would think that we have exactly as much time as it takes for someone to turn the tools needed to build such an Alignment Assistant toward the creation of AGI (conditional on the Alignment Assistant not being AGI itself).

I.e., if you bet on narrow AI assistance in alignment research, you should also bet on finding a solution quickly. A quick search for a solution requires an already-existing plan. And since we are talking about a narrow AI, you can’t just ask it to “solve the alignment problem for me”; you have to ask specific questions, test pre-selected hypotheses, and prove well-defined statements. Therefore, I think that those who want to use Alignment Assistants should outline this set of specific things as soon as possible.

UPD: Thanks to janus for the link; it helped me clarify what I would like to see as a perfect answer.

Let’s suppose that your immediate answer is “brainstorming”. Then the perfect specific answer would be something like this:

“In my opinion, the narrowest bottleneck in AI alignment is the lack of ideas about X, so I will brainstorm about it with the Alignment Assistant.”

Extremely unrealistic example:

“I have The Grand Theory of Alignment, but it critically depends on the Goldbach conjecture, so I will use the Assistant to try to prove the conjecture.”
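To make “prove well-defined statements” concrete: it means handing the Assistant something machine-checkable. A minimal Lean sketch (assuming Lean 4 with Mathlib) of what that could look like for the Goldbach example, with the proof left as `sorry` for the hypothetical Assistant to fill in:

```lean
import Mathlib

-- The Goldbach conjecture, stated as a well-defined claim a prover could
-- attack: every even natural number ≥ 4 is the sum of two primes.
-- The `sorry` is exactly the hole the hypothetical Assistant would be
-- asked to fill.
theorem goldbach (n : ℕ) (h_even : Even n) (h_ge : 4 ≤ n) :
    ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n := by
  sorry
```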

My very (very) simplified model of Paul Christiano’s answer:

“80% of alignment can be solved with the ELK strategy, so we can run a builder-breaker debate on (counter)examples for ELK between the Assistant and ARC until we figure out the solution.”
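For readers unfamiliar with the builder-breaker format, here is a hypothetical Python sketch of that loop; it is not ARC’s actual methodology, and `assistant_propose` and `humans_break` are assumed interfaces standing in for the Assistant and the human researchers:

```python
def builder_breaker(initial_proposal, assistant_propose, humans_break,
                    max_rounds=100):
    """Alternate between an Assistant proposing ELK solutions (builder)
    and human researchers constructing counterexamples (breaker),
    until a proposal survives or the round budget runs out."""
    proposal = initial_proposal
    for _ in range(max_rounds):
        # Breaker move: try to exhibit a scenario where the proposal fails.
        counterexample = humans_break(proposal)
        if counterexample is None:
            return proposal  # no break found: a candidate solution
        # Builder move: repair the proposal against the counterexample.
        proposal = assistant_propose(proposal, counterexample)
    return None  # budget exhausted with no surviving proposal
```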

Yet another possible answer:

“I don’t know! We are still in an early exploratory mode, so I can’t imagine a specific thing. I just want to become as effectively smart as possible and see where it gets us.”