I suppose that’s true in a very strict sense, but I wouldn’t expect people considering AI risk to have the level of uncertainty necessary for their decision to be predominantly swayed by that kind of second-order influence.
For example, someone can get pretty far with “dang, maybe GPT-4 isn’t amazing at super duper deep reasoning, but it is great at knowing lots of things and helping synthesize information in areas that have incredibly broad complexity… And biology is such an area, and I dunno, it seems like GPT-5 or GPT-6 will, if unmitigated, have the kind of strength that lowers the bar on biorisk enough to be a problem. Or more of a problem.”
That’s already quite a few bits of information available by a combination of direct observation and one-step inferences. It doesn’t constrain them to “and thus, I must work on the fundamentals of agency,” but it seems like a sufficient justification for even relatively conservative governments to act.