Insight volume/quality doesn’t seem meaningfully correlated with hours worked (see June Huh for an extreme example); high-insight people tend to have work schedules optimized for their mental comfort. I don’t think encouraging someone who’s producing insights at 35 hours per week to instead work 60 hours per week will result in more alignment progress, and I also doubt that the only people producing insights are those working 60 hours per week.
EDIT: this of course relies on the prior belief that more insights are what we need for alignment right now.
This seems to support Reward is Enough.
More specifically:
DeepMind simulates a lower-fidelity version of real-world physics → applies real-world AI methods → achieves generalised AI performance.
This is a pretty concrete demonstration that current AI methods are sufficient to achieve generality; we just need more real-world data to match the more complex physics of reality.