Late to the party:
This post is timely and pertinent to my current situation. It defines a very real filtering problem, but the false-positive cost here isn’t trivial.
Insight can be born anywhere. With the development of LLMs, as much as we’re talking about comp sci and cog sci, we’re also talking about what it means to be human. Language. Philosophy. Art. The primary acts of meaning. These latter three — the aptly named humanities — seem distant from the field, while in fact they have more applicability than is immediately apparent. One of the reasons we’re in the pickle we’re in (as a species), in my opinion, is that the sciences tend to treat these as ornamental rather than foundational, despite the fact that science is a child of them, quite literally created by and contained within them.
As a diagnostic for slop, “doesn’t explain the specific mechanism” is exactly the right heuristic. Slop generates the appearance of rigor without formal structure or context; the coherence is cosmetic.
But there is a category of submission that will get caught in this filter that isn’t slop: work from outside ML that has developed a falsifiable formal mechanism in its own domain — one that translates — and that uses an LLM not to generate ideas but to refine how those ideas are described for an unfamiliar audience. Does this guarantee value? No. But it does offer an alternative discipline capable of producing novel concepts.
Invariably, as a teacher in the humanities (philosophy, history, literature), I’d have to defend the fields. My defense was always easy: “Narrative is the forge of theory.” Without it, men like Hawking or Einstein could not have run the thought experiments that resulted in formalized theories. I’d argue that most individuals’ understanding of theories is metaphoric or analogy-based. Many prominent cognitive science and AI thinkers have held this view (Hofstadter most forcefully, but also Dedre Gentner, Keith Holyoak, and Boicho Kokinov).
Thus, the distinguishing feature should be falsifiability rather than blanket content blocks. A genuine cross-disciplinary contribution should produce claims that anyone with reasonable patience and intellectual imagination can verify or disprove.
I think the content-block approach sounds reasonable on paper. But it’s worth watching carefully whether it calcifies into gatekeeping — filtering not just for quality but for origin, which is a different thing. Universities and institutions often shoot themselves in the foot in this exact manner.
The real question is whether someone can use an LLM as a legitimate editorial and thinking partner at all, and whether it makes sense to do so when they lack institutional support. These cases seem to me clearly different from using an LLM to hallucinate sophistication one doesn’t possess. From the outside, though, the two will look increasingly similar.
So, genuinely: what is the intended path for someone working in a different field who has identified something with formal, falsifiable structure that’s relevant to ML — but who lacks institutional affiliation and uses an LLM as an editorial collaborator precisely because they don’t have a research group? It seems like the ability to constrain and direct the LLM, rather than be directed by it, ought to count for something.