We spend enormous energy protecting bank statements, treating them as surveillance risks. Yet people voluntarily share with AI systems something far more revealing: how they actually think, what they believe, and what they are uncertain about. A bank statement can tell you someone eats at the same place twice a week. But a long AI conversation tells you how their thinking drifts, what they are afraid of, what kinds of arguments move them, and thoughts they would never say out loud. I think the most sensitive data is no longer behavioural.
Tiina Meister
My take is that current models are trained on enormous amounts of data and learn to produce what most people find good or useful. But when you ask them to create something genuinely new, they don't really think; they recombine what they have seen most often, or at least that is how it looks to me. The result rarely feels truly unique. I wonder if this is also why they tend toward mediocrity: if you train on everything, you optimise for the average, and the average is not where the interesting work is.
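To make the "optimise for the average" point concrete, here is a minimal numpy sketch with made-up numbers: a predictor that can only emit one output and is trained to minimise mean squared error over two very different clusters of work lands on the global mean, which matches neither cluster.

```python
import numpy as np

# Illustrative only: targets drawn from two distinct "styles" of work.
rng = np.random.default_rng(0)
style_a = rng.normal(loc=-2.0, scale=0.3, size=500)  # one cluster
style_b = rng.normal(loc=+2.0, scale=0.3, size=500)  # a very different cluster
targets = np.concatenate([style_a, style_b])

# Try candidate constant outputs and measure the average loss of each.
candidates = np.linspace(-3, 3, 601)
mse = [np.mean((targets - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(mse))]

# The loss-minimising output is ~0.0: the average of the dataset,
# which resembles neither style. "Mediocre" in exactly this sense.
print(f"loss-minimising output: {best:.3f}")
```

This is a toy, not a claim about any particular model; real generative models average in far more subtle ways, but the pressure toward the middle of the training distribution is the same.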
Having studied law, computer science, and art history, I keep noticing that each field has underlying principles for evaluating whether work is good: how clearly the idea is expressed, how much noise there is, whether the core idea is actually executed or just gestured at. When I moved between fields, these principles transferred easily; the fundamentals felt the same even when the surface looked completely different. Uniqueness in many fields usually comes not from inventing new elements but from combining existing ones in a way nobody has tried. Many architects work with light and shadow, but someone like S. Holl stands out, I think, because his idea is expressed more clearly and with less noise than most. The same often seems true in law, in music, and in code.
If you trained a model to think in principles rather than patterns, it might generalise better across domains and produce less average output. It might also need far less data: if the principles are consistent, you don't need every example, you need the clearest ones. Art history is interesting here because it already contains centuries of human judgment about which works lasted and why. That is a small and unusually well-filtered dataset. Although I have no idea whether this is currently technically feasible.
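If anyone tried this, the data side might look less like scraping and more like curation. A hypothetical sketch of that step (the Example type and the clarity score are my inventions, standing in for whatever filtering signal you trust, such as accumulated human judgment):

```python
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    clarity: float  # hypothetical score: how cleanly this example expresses its principle

def curate(corpus: list[Example], keep: int) -> list[Example]:
    """Keep only the clearest examples instead of everything.

    Mirrors the idea above: if the underlying principles are consistent,
    a small, well-filtered set can stand in for the full corpus. In
    practice the clarity score might come from human judgment (as in
    art history) or from a model; here it is simply given.
    """
    return sorted(corpus, key=lambda e: e.clarity, reverse=True)[:keep]

corpus = [
    Example("canonical, clean statement of the idea", 0.95),
    Example("noisy, derivative restatement", 0.30),
    Example("clear but narrow variation", 0.80),
]
print([e.text for e in curate(corpus, keep=2)])
```

Whether selecting for clarity like this actually transfers principles rather than just shrinking the dataset is exactly the open question.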
The findings are interesting. They suggest models generalise better when they understand reasons rather than just patterns. Which raises a question: if explaining why a rule exists already helps, could the next step be abstracting across rules to find the underlying principles that generate them? Rather than teaching models a better map of the rules, teach them what produces good rules in the first place.