To push back on the anthropomorphization inherent in chat interfaces, Sahil suggests that we call the activity of interacting with AI via chat interfaces talkizing. The relationship between talking and talkizing is meant to be analogous to the relationship between rationality and rationalization: rationalization is a “phony” version of rationality, a cheap substitute, perhaps intended to fool you. Instead of “I talked with ChatGPT about...” one would say “I talkized with ChatGPT about...”
Is there an actual conceptual distinction here, or is talkizing just a word for talking to an AI? The rationalization vs. rationality distinction seems different. We’re able to label rationality because we have established markers for it. If something looks like rationality on the surface but doesn’t have any of the established markers, we can conclude it’s rationalization/motivated-reasoning. Do we have markers for distinguishing “real” talking from talkizing or “phony” talking?
Is there any empirical test that would distinguish talkizing from talking other than substrate difference?