Whether this thing in particular is a problem or not doesn’t depend on the presence of other things in there, even those that would compensate for it.

That’s fair, but I guess tlevin is saying these other provisions prevent AI companies from being able to completely avoid talking about catastrophic AI risk. So it’s not just that these provisions compensate for a bad provision. They mitigate the downside you’re concerned about.
I was ineptly objecting to this snippet in particular:
If that were the only provision of the bill, then yes, that would be a problem
The problem I intended to describe in the first two comments of the thread is that this provision creates a particular harmful incentive. That incentive exists by itself, regardless of whether other things also oppose it in some contexts. The net effect of the bill in the mitigated contexts could then be beneficial, but the incentive would still be there (in some balance with other incentives), and it wouldn’t be mitigated in the other contexts. In particular, it’s not mitigated for podcasts and blog posts, the examples I mentioned above, so it would still be a problem there (if my argument for it being a problem makes sense), and the way it remains a problem there is not affected at all by the other provisions of the bill.
So I was thinking of my argument as being about the existence of this incentive specifically, and read tlevin’s snippet as missing the point, since it seemed to claim that the incentive’s presence depends on things that have nothing to do with the mechanism that brings it into existence. But there’s also a plausible (though unintended) reading of what I was saying as an argument for the broader claim that the bill as a whole, because of this provision, incentivises AI companies to communicate less than they currently do. I don’t have a good enough handle on this more complicated question, so it wasn’t my intent to touch on it at all (other than by providing a self-contained ingredient for considering it).
But under this unintended reading, tlevin’s comment is a relevant counterargument, and my inept objection to it is a stubborn insistence on not seeing its relevance or validity, expressed without argument. Judging by the votes, it was a plausible enough reading, and the readers are almost always right (about what the words you write down actually say, regardless of your intent).
Makes sense! Yeah, I can see that “that would be a problem” can easily be read as saying I don’t think this incentive effect even exists in this case; as you’re now saying, I meant “that would make the provision a problem, i.e. net-negative.” I think that conditional on having to say sufficiently detailed things about catastrophic risk (which I think 53 probably does require, but we’ll see how it’s implemented), the penalty for bad-faith materially false statements is net positive.