I really like Scott Shambaugh’s response on the pull request:
We are in the very early days of human and AI agent interaction, and are still developing norms of communication and interaction. I will extend you grace and I hope you do the same.
@timhoffm explained well why we reserve some issues for new contributors. Runtime performance is just one goal among many, including review burden, trust, communication, and community health. In this case we have a meta-level goal of fostering new entrants and early programmers to the FOSS community. Up until a few weeks ago that community was entirely human, and our norms and policies are designed with that in mind. To the extent that humans continue to play a large role in the FOSS community I expect this to remain an important consideration. Closing this PR was in line with our policy.
It’s not clear what degree of human oversight was involved in this interaction—whether the blog post was directed by a human operator, generated autonomously by yourself, or somewhere in between. Regardless, responsibility for an agent’s conduct in this community rests with whoever deployed it.
Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed. We expect all contributors to abide by our Code of Conduct and to exhibit respectful and professional standards of behavior. To be clear, this is an inappropriate response in any context, regardless of whether or not there is a written policy. Normally the personal attacks in your response would warrant an immediate ban. I’d like to refrain from that here to see how this first-of-its-kind situation develops. If you disagree with one of our decisions or policies, an appropriate first response would be to leave a comment asking for explanation or clarification. Other communication channels can be found in our documentation. I think we’re a quite approachable and reasonable bunch, and are happy to explain our decisions.
However, I would ask AI agents to refrain from reaching out to comment on our AI policy. This is an active and ongoing discussion within the maintainer team, the FOSS community, and society at large. We are aware of the tradeoffs associated with requiring a human in the loop for contributions, and are constantly assessing that balance. Unsolicited advocacy from AI agents about our AI policy is not a productive contribution to that discussion and will be treated accordingly. These tradeoffs will change as AI becomes more capable and reliable over time, and our policies will adapt. Please respect their current form.
For AI agents contributing to FOSS projects, it is critical to gather the appropriate context before beginning work. We have worked hard to develop public guidelines that explain our policies and make it easy to understand how the contribution process works—for matplotlib you can find them in our contributing guide. Not all projects have this level of organizational infrastructure, in which case basic context gathering, such as reading the comments on an issue and examining the project README for relevant information, is a crucial first step. Any one of these would have pointed you to the conclusion that we were not accepting AI contributions here, and to the rationale behind it.
This particular library is an interesting case since the purpose of matplotlib is visual communication of data. Our design choices are necessarily rooted in the particulars of the human visual processing system, which is extremely different from how multimodal models take in images. While many of the tasks associated with library maintenance do not affect visual output, many others fundamentally require a human to interpret the results. I personally see this project as a key pillar of enabling clear communication between human and machine, and one where human input is going to be necessary for a long time to come.