[Question] Steelmanning OpenAI’s Short-Timelines Slow-Takeoff Goal

Here’s the question: is one additional year of alignment research more beneficial than one additional year of hardware overhang is harmful?

One problem I see is that alignment research can be of variable or questionable value, while hardware overhang is an economic certainty.

What if we get to the 2020s and it turns out all the powerful AIs are LLMs? /s

I don’t know how much that twist has affected the value of the alignment research already completed, but I feel like it must have had some impact on our understanding of what the important or useful research ought to be.
