Human texts also need reasons to trust the takeaways from them: things like bounded distrust backed by reputational incentives, your own understanding after treating something as steelmanning fodder, or the expectation that the authors are describing what they actually observed. So it’s not particularly about alignment with humans either. Few of these things apply to LLMs, and they are not yet good at writing legible arguments that are worth verifying, though IMO gold is reason to expect this to change within a year or so.