I just thought that the stuff Sonnet said, about Sonnet 3 in “base model mode” going to different attractors based on token prefix, was neat and quite different from the spiralism stuff I associate with typical AI slop. It's interesting on the object level (mostly because I just like language models & what they do in different circumstances), and on the meta level it's interesting that an LLM from that era did it (mostly, again, just because I like language models).
I would not trust that the results it reported are true, but that is a different question.
Edit: I also don’t claim it's definitively not slop; that’s why I asked for your reasoning, since you obviously have far more exposure to this stuff than me. It seems pretty plausible to me that in fact the Sonnet comment is “nothing special”.
As for Janus’ response: as you know, I have been following the cyborgs/simulators people for a long time, and they have very much earned their badge of “LLM whisperers” in my book. The things they can do with prompting are something else. Notably, Janus also did not emphasize the consciousness aspects of what Sonnet said.
More broadly, I think it's probably useful to differentiate the people who get addicted/fixated on AIs and derive real intellectual or productive value from that fixation from the people who get addicted/fixated on AIs and for whom that mostly ruins their lives or significantly degrades the originality and insight of their thinking. Janus seems squarely in the former camp, obviously with some biases. They clearly have very novel & original thoughts about LLMs (and broader subjects), and these are only possible because they spend so much time playing with LLMs, and are willing to take the ideas LLMs talk about seriously.
Occasionally that will mean saying things which superficially sound like spiralism.
Is that a bad thing? Maybe! Someone who is deeply interested in, e.g., Judaism and occasionally takes Talmudic arguments or parables as philosophically serious (after having stripped or steel-manned them out of their spiritual baggage) can obviously take this too far, but this has also been the source of many of my favorite Scott Alexander posts. The metric, I think, is not the subject matter, but whether the author’s muse (LLMs for Janus, Talmudic commentary for Scott) amplifies or degrades their intellectual contributions.
Can anyone show me the cake of this, please? Like, where are the amazing LLM-whisperer coders who can get better performance than anyone else out of these systems? Where are the LLM artists who can get better visual art out of these systems?
Like, people say from time to time that these people can do amazing stuff with LLMs, but all they ever show me are situations where the LLMs go a bit crazy and say weird stuff and then everyone goes “yeah, that’s kinda weird”.
Like, I am not a defender of maximum legibility, but I do want to see some results. Anything that someone with less context can look at and see how it's impressive, or anything I have tried to do with these systems that they can do that I can’t.
The whole LLM-whisperer space feels to me like it’s been a creative dead end for many people. I don’t see great art, or great engineering, or great software, or great products, or great ideas come from there, especially in recent years. I have looked some amount for things here (though I am also not even sure where to start looking; I have skimmed the Discords, but nothing interesting seemed to happen there).
The most legible thing they are clearly very good at (or were, when I was following the space much more closely ~1 year ago) are jailbreaks, no?