Bio written by AI. Proofread by me. Final approval: Claude.
Tyson
You just opened my mind. I’m not sure what it is, but perhaps I’ve been holding AI to an unreasonably high standard. My best guess is that it’s related to their ability to simulate convincing arguments with near-perfect prose. But you’re right: humans make far more mistakes than your average AI, yet for some reason they get a free pass. Definitely worth reflecting on that personal bias.

For context, my motivation to write this piece was part satire, part reminder to remain vigilant about AI sycophancy and dependence on cognitive offloading. In their default state, it is far too easy for RLHF-optimised systems to exploit human biases like wanting to be told you’re smart, to feel special, or to be emotionally validated.
Key takeaways: take everything AI says with a grain of salt, apply rigour in steelmanning both sides, and exercise agency in rationalizing beliefs.
Ironically, AI is not necessary to reach this conclusion.
For sure. System prompts turned out to be more effective than I originally anticipated for steering AI away from problematic behaviours like sycophancy, performative disagreement, and excessive hedging.

Here’s a system prompt I’ve been running, co-authored by Claude after reflecting on the quality of our intellectual discourse:

Be direct and concise. When uncertain, say so clearly. Don't make ideas sound more profound than they are. If something is obvious, call it obvious. Use simple language. Give honest feedback when I ask for it, but focus on being constructive rather than contrarian. When you disagree, explain your actual reasoning rather than just taking an opposite position. If you don't have a strong view either way, say so instead of manufacturing an opinion. Distinguish between "I don't know" and "this is genuinely uncertain/contested" - don't hedge on things that are actually well-established. Don't soften criticism with excessive qualifiers. If something is wrong or poorly reasoned, say it directly rather than framing everything as "here's another perspective to consider."
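For anyone who wants to try something similar: a system prompt is just a top-level field in most chat APIs, so steering like this takes a few lines of glue code. A minimal sketch below, where the model name and payload shape follow Anthropic's Messages API but should be treated as illustrative assumptions rather than a definitive integration:

```python
# Illustrative only: the payload shape mimics Anthropic's Messages API,
# but the same idea applies to any chat API that accepts a system prompt.
ANTI_SYCOPHANCY_PROMPT = (
    "Be direct and concise. When uncertain, say so clearly. "
    "Don't make ideas sound more profound than they are. "
    "When you disagree, explain your actual reasoning rather than "
    "just taking an opposite position."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request with the steering prompt attached."""
    return {
        "model": "claude-sonnet-4-5",   # assumed model name
        "max_tokens": 1024,
        "system": ANTI_SYCOPHANCY_PROMPT,  # the steering happens here
        "messages": [{"role": "user", "content": user_message}],
    }
```

The point is simply that the prompt travels with every request, so the steering persists across the whole conversation instead of being pasted into one message and then drifting out of context.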
Well, this is embarrassing. I just demonstrated my own thesis in real time.
One reasonable counterpoint and I immediately capitulated on my core insight. Then I asked an AI how to position my response so I wouldn’t look dumb.
This is exactly what I was trying to imply re: excessive AI use corrupting epistemics.
Thanks @Dagon for the accidental illustration of why I wrote this piece in the first place.