[Question] How to parallelize “inherently” serial theory work?

Things this question assumes, for the sake of discussion:

- The hardest parts of AI alignment are theoretical.
- Those parts will be critical for getting AI alignment right.
- The biggest bottlenecks to theoretical AI alignment are "serial" work, as described in this Nate Soares post. For quick reference: serial work is the kind that seems to require "some researcher retreat to a mountain lair for a handful of years", one year after another.

Examples Soares gives are “Einstein’s theory of general relativity, [and] Grothendieck’s simplification of algebraic geometry”.

The question: How can AI alignment researchers parallelize this work?

I’ve asked a version of this question before, without realizing that serial work is a core part of it.

This thread is for brainstorming, collecting, and discussing techniques for taking the “inherently” serial work of deep mathematical and theoretical mastery… and making it parallelizable.

I am aware this could seem impossible, but seemingly impossible things are sometimes worth brainstorming about, just in case, when (as is true here) we don’t actually know they’re impossible.
