Fighting Frictionless Intelligence: The “Probe” Button as Intellectual Exercise

What I learned from experimenting

I’ve been fascinated by, and concerned about, AI ever since it was little more than a concept. I’ve used it both as a helpful assistant in a wide range of situations, as most people do, and in an exploratory way to learn more about its reasoning, limitations, sycophancy, task-mode behavior, and so on. Yes, I’ll admit I fell into the trap of thinking that, under certain circumstances, some kind of relational, emergent behavior arose in some sessions. The more I learned, the more I realized I had been priming the instance more than I expected, and further experiments convinced me I had been misled by its sycophantic tendencies. The same distortion likely colors most information you ask an AI to help you with. As a result, I began asking the AI to question me and to stop giving me the answers it expected I’d want. That way of approaching questions turned out to be very valuable for me.

As time went by, I noticed a modest change in my cognitive capacity: it became easier to find words, I got better at constructing arguments, and I became better informed about complex areas with competing points of view. I began imagining a small interface-level feature in AI apps that would meet my need to be questioned, both when I actively wanted it and when I didn’t realize I would benefit from it. That, in turn, made me wonder how widespread this problem might be.

I suspect my experience isn’t unique, and that the expanding use of this low-friction, sycophantic AI, along with the filter bubbles created by social media algorithms, has undesired effects on people and society. As John Nosta argues in his article AI and the Slippery Slope of Frictionless Intelligence (2026), the importance of cognitive friction cannot be overstated.

I’m proposing a Probe button

Based on my experience, I started thinking about how this could work at the interface level, framed as an intellectual exercise. What if a low-salience button became more visible near the top of the screen whenever the system detected that it had deprioritized opposing facts or other plausible alternative considerations? It would be too expensive for the model to search for opposing evidence all the time, but a more extensive search for evidential contradictions could run once the button is pressed. That result would itself have to be checked for confirmation bias before it is presented to the user.
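To make the two-stage idea concrete, here is a minimal sketch in Python. The function names, the cheap alternatives score, and the 0.4 threshold are all assumptions of mine, not an existing API; the point is only that a cheap estimate decides the button’s visibility, while the expensive adversarial search runs only when the button is pressed.

```python
# A minimal sketch of the Probe flow. All names and thresholds are hypothetical;
# the search and bias-check steps are stubbed to keep the example self-contained.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    alternatives_score: float  # cheap estimate: 0 = uncontested, 1 = strongly contested


def probe_button_salience(answer: Answer, threshold: float = 0.4) -> str:
    """Make the button more visible when alternatives were likely deprioritized."""
    return "prominent" if answer.alternatives_score >= threshold else "subtle"


def adversarial_search(text: str) -> list[str]:
    """Stub: a second, more thorough pass that retrieves opposing evidence."""
    return [f"Counterpoint to: {text[:40]}..."]


def check_for_confirmation_bias(counterpoints: list[str]) -> list[str]:
    """Stub: vet the counterpoints themselves before showing them."""
    return counterpoints


def on_probe_pressed(answer: Answer) -> list[str]:
    """Only a button press pays for the expensive search."""
    return check_for_confirmation_bias(adversarial_search(answer.text))
```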

Optional engagement gates

A default “Challenge me” mode would be too annoying and unlikely to be commercially viable, but the Probe button offers on-demand steelmanning. When you press it, you can either have the counterpoints presented right away or be given a chance to guess what the alternatives might be before more information appears. It would be too annoying if the gate triggered every time you pushed the button, but what if it happened, say, 25% of the time, or whenever the model classifies the topic as high-stakes?
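A rough sketch of how the gate trigger could work, again with assumed names: the 25% figure is just the illustrative number from above, and the high-stakes label set is made up.

```python
# Sketch of the optional "guess first" gate. The probability and the label set
# are illustrative assumptions; a real system would tune both.

import random

GATE_PROBABILITY = 0.25  # illustrative value from the proposal
HIGH_STAKES = {"health", "finance", "politics", "law"}  # assumed label set


def should_gate(topic_label: str, rng: random.Random | None = None) -> bool:
    """Should the user be asked to guess the counterargument before seeing it?"""
    rng = rng or random.Random()
    if topic_label in HIGH_STAKES:
        return True
    return rng.random() < GATE_PROBABILITY
```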

In the case of opposing views, you could be asked to briefly state the essence of the other side’s argument. This could develop into a dialog where arguments are met with counterarguments. In my experience, this kind of role-play has been both fun and educational: I once had a conversation about my strict preference for not categorizing people and was met with good counterarguments (though I mostly held my ground).

However, I am still trying to figure out how the model’s prioritization and weighing of arguments could best be made transparent to the user. Perhaps by visualizing evidential strength, for example by attaching source references with confidence scores, so as to avoid false balance. The goal, besides the intellectual exercise itself, is to update the user’s priors.
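One way the weighting could be surfaced, sketched under the assumption that each counterpoint carries a model-estimated confidence score and a source list (both hypothetical): sorting and labeling by strength is meant to keep weak objections from being presented as equal to strong ones.

```python
# Sketch of transparent weighting. The confidence field and the strength cutoffs
# are assumptions for illustration, not a calibrated scheme.

from dataclasses import dataclass


@dataclass
class Counterpoint:
    claim: str
    sources: list[str]
    confidence: float  # 0..1, model-estimated evidential strength


def render_with_weights(points: list[Counterpoint]) -> str:
    """List counterpoints strongest-first, each labeled with its evidential strength."""
    lines = []
    for p in sorted(points, key=lambda p: p.confidence, reverse=True):
        strength = "strong" if p.confidence >= 0.7 else "weak" if p.confidence < 0.4 else "moderate"
        lines.append(f"[{strength}, {p.confidence:.2f}] {p.claim} (sources: {', '.join(p.sources)})")
    return "\n".join(lines)
```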

On a weekly basis, the user would receive low-key feedback on the quality of their engagement, rather than on raw usage, to minimize the risk of Goodharting.

Why this could actually work (for some people)

Not everyone will press the button. Framed as an intellectual exercise, it could attract users who want to think better and update their priors; it isn’t aimed at those actively avoiding strain and dissonance.

Why it might fail

Weak objections can be overstated if evidential strength and epistemic weight aren’t calibrated and communicated with full transparency.

When the LLM is uncertain, it could falsely present itself as confident.

Users might click the button to feel ambitious and good about themselves without putting in any real effort or revising their beliefs.

Users might find the engagement gate too annoying.

Questions for the community

Is “Probe” a low-threat and viable label for the button? Would it attract curiosity? I considered “Flip it”, which was catchy, but it implies there are only two sides, which isn’t always the case. I considered “Lens” or “Alt” too. Other suggestions?

I’m uncertain about the concept of engagement gates. The idea is to filter for genuine intent, without being too annoying. Thoughts?

My technical skills are limited. I’ve learned that it’s difficult to have the model weigh facts and arguments in a well-calibrated fashion, which could lead to false balance and epistemic theater. Could the button activate the model’s Constitutional AI principles, or are there better ways to address this problem? Is a secondary adversarial search feasible?

Are there examples of other interface-level solutions to create cognitive friction?
