I think it’s possible that extreme centralization could occur here (though not necessarily), but even if it does, it will likely not be maintainable long-term. A few familiar failure modes come to mind:

1.) Highly centralized systems suppress dissent, leading to sycophancy and its associated delusional thinking.

2.) They also tend to hollow out the intellectual capital they need for innovation, adaptation, and maintenance.

3.) Maintaining rule requires delegation. The more the center delegates to compensate for degraded performance (see 1 and 2), the greater the risk of creating powerful misaligned subordinates.

4.) Even if AI helps extend the life of a central committee, succession remains a problem. If too much depends on a few key members (a low bus factor), the transition will be brittle and may fail outright.

These failure modes are pretty timeless, and I don’t see how AI would stop them from applying here. Maybe AI delays these feedback mechanisms to some degree, but my suspicion is that it will accelerate them. So even if AI weakens the dependence on distributed coercive power, other mechanisms will likely limit the extent and/or duration of centralization.
This isn’t to say great atrocities can’t occur during the “relatively short” lifetime of a centralized regime. The 20th century provides a number of examples of short-lived authoritarian regimes that caused mass death and suffering. So all reasonable efforts should still be made to prevent such a highly centralized regime from emerging.
I agree. “I worked really hard on it” is neither necessary nor sufficient for research quality. We already know that plenty of careful-looking, labor-intensive, neatly written work can still be wrong or non-replicable. Meanwhile, some valuable insights emerge from relatively simple “aha” moments, and some deep ideas are developed more clearly outside the formal journal pipeline (e.g., The Bitter Lesson).
Instead of reverting to the old, imperfect proof-of-work proxy for truth, we should figure out how to use these new AI tools to assess research merit more efficiently.
Granted, some research will still require expensive experiments or other forms of “hard work,” in which case proof-of-work can continue to serve as a useful initial filter.