I don’t think it’s *contradicting* it but I vaguely thought maybe it’s in tension with:
“Big changes within ~~companies~~ [Government AI x-risk policy] are typically bottlenecked much more by coalitional politics than knowledge of technical details.”
Because A’s lack of knowledge of technical details ends up getting B to reject and oppose A.
Mostly I wasn’t trying to push back against you, though; I was more trying to download part of your model of how important you think this is, out of curiosity, given your experience at OA.
But the problem is that we likely don’t have time to flesh out all the details or run all the relevant experiments before it might be too late, and governments need to understand that on the basis of arguments that therefore cannot possibly rely on everything being fleshed out.
Of course I want people to gather as much important empirical evidence and concrete detailed theory as possible asap.
Also, the pre-everything-worked-out-in-detail arguments need to inform which experiments get done in the first place, which is why people who have actually engaged with those pre-detailed arguments end up doing much more relevant empirical work on average, IMO.