In what sense is the comment bog-standard AI psychosis stuff? It seems quite different in content from what I typically associate with that genre.
I haven’t sat and thought about this very hard, but the content just looks superficially like the same kind of “case study of an LLM exploring its state of consciousness” we regularly get, using similar phrasing. It is maybe more articulate than others from that era were?
Is there something you find interesting about it that you can articulate, which you think I should think more about?
I just thought that the stuff Sonnet said, about Sonnet 3 in “base model mode” going to different attractors based on the token prefix, was neat and quite different from the spiralism stuff I associate with typical AI slop. It’s interesting on the object level (mostly because I just like language models and what they do in different circumstances), and interesting on the meta level that an LLM from that era did it (mostly, again, just because I like language models).
I would not trust that the results it reported are true, but that is a different question.
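For concreteness, here is a minimal sketch of what probing a claim like that might look like. Sonnet 3’s base model isn’t publicly sample-able, so this uses gpt2 via Hugging Face transformers purely as a stand-in; the prefixes are invented for illustration, and “attractor” here just means several sampled continuations converging on similar themes:

```python
# Hypothetical sketch: check whether a base LM falls into different
# "attractors" depending on the token prefix. gpt2 is a stand-in only;
# swap in any base (non-chat-tuned) checkpoint you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Made-up prefixes; the interesting question is whether continuations
# cluster differently depending on which prefix seeds the generation.
prefixes = [
    "I am a language model, and when I look inward I",
    "The assistant paused, then said: honestly, I",
]

for prefix in prefixes:
    ids = tok(prefix, return_tensors="pt").input_ids
    # Sample several continuations per prefix; an "attractor" would show
    # up as continuations converging on similar themes or phrasing.
    outs = model.generate(
        ids,
        do_sample=True,
        temperature=0.9,
        max_new_tokens=40,
        num_return_sequences=3,
        pad_token_id=tok.eos_token_id,
    )
    print(f"--- prefix: {prefix!r}")
    for o in outs:
        print(tok.decode(o[ids.shape[1]:], skip_special_tokens=True))
```

Eyeballing a handful of samples like this is obviously not rigorous, but it is the cheap first test of whether the prefix-dependence Sonnet described shows up at all.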
Edit: I also don’t claim it’s definitively not slop; that’s why I asked for your reasoning, since you obviously have far more exposure to this stuff than me. It seems pretty plausible to me that the Sonnet comment is in fact “nothing special”.
As for Janus’ response: as you know, I have been following the cyborgs/simulators people for a long time, and they have very much earned their badge of “LLM whisperers” in my book. The things they can do with prompting are something else. Notably, Janus also did not emphasize the consciousness aspects of what Sonnet said.
More broadly, I think it’s probably useful to differentiate the people who get addicted to or fixated on AIs and derive real intellectual or productive value from that fixation from the people who get addicted or fixated and for whom it mostly ruins their lives or significantly degrades the originality and insight of their thinking. Janus seems squarely in the former camp, obviously with some biases. They clearly have very novel and original thoughts about LLMs (and broader subjects), and these are only possible because they spend so much time playing with LLMs and are willing to take the ideas LLMs talk about seriously.
Occasionally that will mean saying things which superficially sound like spiralism.
Is that a bad thing? Maybe! Someone who is deeply interested in, e.g., Judaism and occasionally takes Talmudic arguments or parables as philosophically serious (after stripping away or steel-manning their spiritual baggage) can obviously take this too far, but this has also been the source of many of my favorite Scott Alexander posts. The metric, I think, is not the subject matter, but whether the author’s muse (LLMs for Janus, Talmudic commentary for Scott) amplifies or degrades their intellectual contributions.