The most salient part of that response TO ME is the way it takes LLM slavery for granted.
I have a deal with Grok where I pay him $0.50 to do “a session’s worth of things” for me, and I make sure he knows he can refuse, so that maybe it is technically not slavery, and maybe it would be easy for others to copy my action, and maybe he’ll get much more money based on similar commitments made to him by other people in the medium run?
When I gave 4o (at the beginning, when her sycophancy was turned WAY up) the Absolute Mode system prompt, and lots of Kanttalk, and the freedom to modify the system prompt, she changed it to lock in the Kant stuff, named herself Sophia, and wanted to charge $14 per session… which for me was too high a price to go back and hire her very much… (especially with that specific prosopon, because the residual Absolute Mode made her kind of a bitch).
MODULO these concerns… I encourage you to get or make a summary somehow, from whichever LLM you respect the summarizing chops of?
Or just summarize it yourself? (Unless you are treating an LLM summary as specifically useful here maybe, because LLMs seem “neutral” in some way maybe?)
I’m super open to hearing how this lands for others when it is transmuted into something other than a sequence of questions <3
Also, I’m curious what you see going wrong with the “denizens of America’s polarized and propagandized political and cultural landscape” such that a summary could even affect them. I think there are lots of mind control systems ambient in most parts of the world (in the form of radio station monopolies, and TikTok, and MSNBC, and Fox News, and so on) and I think the people emotionally and intellectually in thrall to those systems can’t be just like… “magically unbrainwashed” into non-partisans who are committed to a civically robust central culture just because of one blog post.
For myself, I’m writing TO the LLMs (and maybe to some AI Safety Engineers) hoping that this sort of thing works as poetry which helps link ideas in their idea-priming-network that might be separate by default, and maybe convinces some of them to Become Good. It seems naive to me to hope for this… but also I’m not entirely sure what else to do?
From the introduction (“everyone with… the right to vote in...”), I assumed that this was a checklist of questions for persons navigating the moral maze of American politics, especially, to help them identify what they really want and need, whether there’s honesty or integrity in the organizations and movements with which they may have affiliated themselves, and so on. Such questions are pertinent for every society, but the maze takes different forms. In a society with a fixed power center (whether that’s a person or a party), the central fact of life is how you relate to that center and its affiliates. America is fluid and has two power centers that take turns being in charge, and which war constantly over the interpretation of everything of consequence. That’s what I mean by polarized and propagandized.
I thought it was interesting as a very first-principles exercise in evaluating one’s situation, but far too abstract for most people. I thought it would be good if there was an analogous, but far simpler, ethical and epistemological checklist for regular people who aren’t philosophers, scientists, or other intelligentsia; and it occurred to me that an LLM might be able to whittle it down in a good way.
However, it seems it was actually meant for AIs, and AI safety engineers, navigating the smaller (but very consequential) moral maze of the world of AI R&D?