Sorry for the late reply!
I can see why you'd say that, but for me the two are often intermingled and hard to separate. Even assuming that the most greedy or single-minded business leaders wouldn't care about catastrophic risks on a global scale (which I'm not sure I buy on its own), they're probably still going to want to avoid the economic turbulence that would follow from egregiously misaligned, capable AIs being deployed.
For a more fine-grained example: actions like siphoning compute to run unauthorised tasks might be a signal that a model poses significantly higher catastrophic risk, but they're also something a commercial business would want to prevent for its own reasons (e.g. cost, degraded performance). If a lab can demonstrate that its models won't attempt things of this nature, that's a win for commercial customers too.