Currently (May ’26) working on agent foundations as part of the MATS 9.1 extension program. I’m interested in self-models for embedded agents as a way to understand goals and beliefs. I otherwise occasionally write about math or its philosophy and sociology.
All writing is entirely my own unless explicitly stated otherwise.
I hesitated to publish this for exactly that reason. I was drawn to using this case as an example because I do think it affected how AI use is perceived and handled in European/American Go culture, for the worse. However, the topic is quite sensitive among Go players, which makes it hard to discuss without eliciting monkey-politics-brain sentiments from everyone (including me as the writer). I ended up addressing the piece mostly to the AI crowd and chose not to publicise it widely among Go players.
I would be curious to know how I could have brought up the example in a more tasteful way that wouldn’t have given the impression you describe.
On outsourcing of autonomy: I think there is a meaningful difference between Go AI and the other examples you gave. I agree that humans outsource their cognition all the time. I would call artefacts like my personal notes and my Anki deck part of my extended mind. Extending minds is great, despite the perpetual risk of self-disempowerment it entails. However, most delegation used to happen between humans. AI has reached near-superhuman (or higher) performance at many tasks that are central to how we share culture and resources (e.g. writing, code). This seems unusually dangerous because cultural, economic, and political power is slowly being transferred to increasingly intelligent entities that are unlikely to be aligned with human interests.