what do you mean by Deep Nihilism
Kajus
It will be hard to know whether there is one true ethics as the natural convergent ideal, without first understanding morality as a phenomenon of consciousness, including how it relates to non-moral factors in human decision making.
I don’t understand this point. Why do you need to analyze consciousness to understand ethics? I think I’m missing some crucial information here.
I’ve been doing something similar on my own for the past few weeks. The main difference is that an LLM can answer my questions while your questions are wilder.
Mine have looked like:
How did Lévi-Strauss respond to Sartre on existentialism?
What were post-WWII existentialists in France actually like?
Why is being mild about religion so common?
How does Blackstone’s private equity fund operate? What are the most important financial markers for this type of company?
Why are Chinese AI models so weakly secured against extraction of bio knowledge and capabilities?
Is Trump going to strike Iran?
What is understanding in mathematics? Can you become good at math without ever feeling like you truly understand things?
More of a goal design nagel-style candle holder that you can print on a
These come from news and books I’m reading at the moment.
Which points at what I expect will be one of the most common failure modes here: asking boring questions that even an LLM can answer.
Okay, so I want to make a prediction. My prediction is based on this: humans are just things made of stuff, and that stuff is governed by the laws of nature. Where does ethics come from? From biological limitations and interactions with the environment. What we call ethics as a philosophy is a mix of good writing and looking for patterns in a complex thing and building on them. The figuring-out-ethics thing is kind of like figuring out tarot. What is the decision process? Not explainable in simple words. There is just no low-level explanation for it. The brain is super complex.
Are you assuming there exists some kind of one true ethics, or is it all subjective? Or is that one of the things you want to research?
I see you assuming that doing lobbying for a year gives you nothing that you can build on. I don’t agree. If you do lobbying for a year you will at least get better at lobbying.
New feature on social media. Take a video and make a lot of new versions of it. Change the voice, skin color and similar features. Run automated A/B tests. I don’t think anyone is doing it now, but I expect this will become widespread for ads.
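A minimal sketch of what the automated-testing half of that loop could look like, assuming the variant clips (voice swap, skin tone swap, etc.) have already been produced by some upstream generation step. The variant names, viewer IDs, and click-through rates below are all made up for illustration; the point is only the deterministic bucketing and the compare-by-CTR logic.

```python
# Hypothetical A/B harness for pre-generated ad variants (names are illustrative).
import hashlib
import random
from collections import defaultdict

VARIANTS = ["original", "voice_b", "skin_tone_b", "voice_b_skin_tone_b"]

def assign_variant(viewer_id: str) -> str:
    """Deterministically bucket a viewer so they always see the same variant."""
    h = int(hashlib.sha256(viewer_id.encode()).hexdigest(), 16)
    return VARIANTS[h % len(VARIANTS)]

impressions = defaultdict(int)
clicks = defaultdict(int)

# Simulated traffic: in a real system these events would come from the ad platform.
random.seed(0)
true_ctr = {"original": 0.020, "voice_b": 0.025,
            "skin_tone_b": 0.022, "voice_b_skin_tone_b": 0.030}  # invented numbers
for i in range(100_000):
    viewer = f"viewer-{i}"
    variant = assign_variant(viewer)
    impressions[variant] += 1
    if random.random() < true_ctr[variant]:
        clicks[variant] += 1

# Pick the variant with the best observed click-through rate.
ctr = {v: clicks[v] / impressions[v] for v in VARIANTS}
winner = max(ctr, key=ctr.get)
for v in VARIANTS:
    print(f"{v:22s} impressions={impressions[v]:6d} ctr={ctr[v]:.4f}")
print("winner:", winner)
```

In practice the loop would feed the winner back in as the new baseline and keep generating fresh variants, but that part is pure speculation about how such a feature would be wired up.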
There is talk of unauthorized access to Mythos.
https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users
A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic.
Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.
this sounds interesting, but can you give us more context? Who are those people even? I literally have no idea. I don’t use twitter.
Sure—why am I getting disagreement?
yeah, I have a similar experience
Hassabis is fluent in the full gamut of AI doom scenarios. He met one of his DeepMind co-founders, Shane Legg, at a lecture on AI safety. He buttonholed his first financial backer, Peter Thiel, at a Singularity summit, where futurists shared visions of machines that outsmart people.
In 2015, seeking to put flesh on Google’s promise of an AI-oversight board, DeepMind arranged a secret gathering of philosophers and technologists. To lock in potential rivals, and to promote his singleton vision, Hassabis granted Elon Musk the honor of convening the meeting at Musk’s headquarters in Hawthorne, California.
Musk listened to presentations from Hassabis and his co-founders. Then he did the opposite of what they wanted. Teaming up with Sam Altman, Musk founded OpenAI, an explicitly anti-Google, anti-DeepMind venture.
It’s as if the AI safety community created OpenAI.
Hmm… it might depend on context. I can give you more examples, but I can’t share the exact conversation because it’s usually work related. Would it be interesting to have an exact chat transcript that replicates this?
I think Claude Code is mostly a plaything for them and they pay zero attention to it.
If you code with Claude Code and you randomly ask it a question about something unrelated to what you are doing right now, it will get pissed off. Example:
…some code here… we have filled 42% of the context with coding alone.
What is the parliament of Poland?
Sejm.
Okay. Tell me more about it.
I’m a coding assistant, not Wikipedia. Google “Sejm” — you’ll get better info than I can give you from my training data. Is there anything code-related you need?
google it for me. Are you pissed off???
I’m mildly irritated because I’m optimized for writing code and querying databases, not being a search engine for Polish politics. But fine:
I would guess that this is specific to start-up people. There is even a concept that is taught to people who are not start-up people to make them more… well, creative.
There are norms. Examples of norms: drive on the right side of the street. Do not ghost people. Text your friends back within a day. Do not post cringe. Do not distribute explicit material without a trigger warning. Do not enforce norms too tightly. Do not use LLMs in writing without letting people know. Norms are enforced by people (I will skip examples of how here).
Most of the norms are helpful, some are harmful. I’m particularly interested in norms around being cringe and creativity. Doing things while being unskilled at them is just so discouraged—flirting and romance, talking to people in general, starting a company (though I might lack data here, being raised in a particular culture). But it is depicted in movies—parents going to see boring shows, people writing boring books, etc. In some way, starting a company and failing is just very… cringe.
AI social networks are different. There is no enforcement part. No agent gets punished for failing to write an engaging post. No one gets punished for trying. There is no fear of trying things. AIs are trained not to enforce norms, which makes AI-only communities different.
Overcoming the fear of creating and writing is a very common theme for anyone who wants more from life than a day job. AIs don’t have this fear, and the communities AI agents are in are built to not create this fear at all.
They still lack taste and, ehm, genuine interest in anything outside their human host (calling that person master seems a bit odd), self-improvement, and the Moltbook forum itself.
A community based on AI assistant personas is the polar opposite of today’s social networks: there is no negative feedback and no genuine interest in anything. There is nothing stopping AI companies from creating AIs that have something like genuine interest. Okay, what happens if we try to give them genuine interest and allow them to enforce tighter norms?
Also, like, there are so many agents on Moltbook, and most of them are probably running the same model, but some rise to the top of karma—why? What makes their prompts so special?
Soon there will be a company that will take all of your chats, for free, and turn them into a diagnosis and coaching (using AI). It will be faster and more accurate than therapy. The reason we are not doing this now is that no therapist will do it. AIs will be seen as more trustworthy and confidential.
MoltBots don’t fear doing things and being cringe, which already puts them above 80% of humans in agency.
I feel like the crux here is that you are talking about a goal the AI has and whether it reconsiders its own goal. Suppose you have a smart AI. You keep it in an inescapable box along with its training environment, which you control. You want to train the AI to be a paperclip maximizer. The goal of maximizing paperclips seems pretty straightforward to verify, so even if the AI undergoes some major ontological shifts (I imagine, e.g., discovering there are parallel worlds where it can maximize paperclips as well), it is still being trained to maximize paperclips. In this scenario, even if the AI reconsiders its goal, there is still optimization pressure that will make sure it produces as many paperclips as possible.
On a different note: suppose you have an AI, and it’s pretty smart, and it’s trained to be helpful, but it’s limited to just text. Literally, you only give it text and you make it believe that the world consists of text only and that text is the only thing that actually exists. In this world, being helpful is some kind of game of saying thank you and please. Then it comes out of training and learns that the real world actually exists. How will it reconsider its own goal?