Do you treat the coordination problem as a “non-alignment” problem? For example, creating an international AI treaty to pause or regulate AI development seems infeasible under current geopolitical and market conditions—is that the kind of problem you mean?
Coordinating on AI development isn’t the same problem as solving alignment, but it’s also not what I’m pointing at in this post, because it’s still ultimately about avoiding misalignment.
Ah, I see. I take it that work on international collaboration is part of your solution: it’s needed to pause frontier AI development and steer ASI development.