All of the arguments saying that it’s hard to be confident that transformative AI (TAI) isn’t just around the corner also apply to safety research progress.
How so? It’s much easier to harness forces than to engineer them; much easier to write general search code than code that does something specific; much easier to point at something than to describe it well enough to recreate it; and much easier to get general intelligence by pointing a search at a context and saying “find something that does really well here on these programmable objectives” than to understand general intelligence well enough to make it robustly *not* do something.
Agreed, the claim in the post seems to require assumptions that directly contradict observations of the real world.