Open Thread Autumn 2025
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you’re new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Hello! My name is Laiba, I’m a 20-year-old Astrophysics student and new to LessWrong (or at least, new to having an account).
I’ve been into science since I could read and received a lot of exposure to futurism, transhumanism and a little rationality. I remember thinking, “This would make a lot of sense if I were an atheist.”
Lo and behold, about a month ago I gave up on religion, and I was no casual Muslim! I thought now would be a good time to join LessWrong. I’ve read a few posts here and there, and greatly enjoyed Harry Potter and the Methods of Rationality (which is where I found out about LessWrong).
My first blog post talks a bit about my deconversion: https://stellarstreamgalactica.substack.com/p/deconversion-has-been-a-real-productivity
I’m also starting up a PauseAI student group at my university. Taking death seriously has made me rethink where I’m putting my time.
Looking forward to having interesting discussions and being able to interact with the community without the fear of sinning!
Welcome!
Follow-up to this experiment:
On 2025-06-13, I started flossing only the right side of my mouth (side selected via random-number generator). On 2025-09-18 I went to the dentist and asked which side he guessed I’d flossed. He guessed right.
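One correct guess between two sides is only modest evidence that flossing made a detectable difference. As a quick sketch (the 90% detection rate and the even prior below are my own assumptions, not anything from the experiment), the Bayes update looks like this:

```python
# A rough Bayes update on the flossing experiment (numbers are
# assumptions for illustration, not the commenter's).
p_correct_if_detectable = 0.9  # assumed: dentist spots a real difference 90% of the time
p_correct_if_chance = 0.5      # guessing blindly between two sides

# Likelihood ratio contributed by one correct guess
likelihood_ratio = p_correct_if_detectable / p_correct_if_chance

# Posterior that flossing made a detectable difference,
# starting from an (assumed) even prior
prior = 0.5
posterior = (prior * p_correct_if_detectable) / (
    prior * p_correct_if_detectable + (1 - prior) * p_correct_if_chance
)
print(f"likelihood ratio: {likelihood_ratio:.2f}")  # 1.80
print(f"posterior: {posterior:.2f}")                # 0.64
```

Under these assumptions a single correct guess moves a 50% prior to about 64%; repeating the experiment at future visits would compound the evidence.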
A crazy idea; I wonder if someone has tried it: “All illegal drugs should be legal, if you buy them at a special government-managed shop, under the condition that you sign up for several months of addiction treatment.”
The idea is that drug addicts get really short-sighted and willing to do anything when they miss the drug. Typically that pushes them to crime (often encouraged by the dealers: “hey, if you don’t have cash, why don’t you just steal something from the shop over there and bring it to me?”). We could use the same energy to push them towards treatment instead.
“Are you willing to do anything for the next dose? Nice, sign these papers and get your dose for free! As a consequence you will spend a few months locked away, but hey, you don’t care about the long-term consequences now, do you?” (Ideally, the months of treatment would increase exponentially for repeated use.)
Seems to me like a win/win situation. The addict gets the drug immediately, which is all that matters to them at the moment. The public would pay for the drug use anyway, either directly, or by being victims of theft. (Or it might be possible to use confiscated drugs for this purpose.) At least this way there is no crime, and the addict is taken off the streets.
This would be especially useful in those situations where “everyone knows” where the drugs are being sold (because obvious addicts congregate there), but for technical reasons it is difficult to prove it legally. No need to prove anything; just open a sales stand there saying “free drugs” and watch the street get clean.
P. C. Hodgell said, “That which can be destroyed by the truth should be.” What if we have no free will? Disregarding the debate of whether or not we have free will—if we do not have free will, is it beneficial for our belief in free will to be destroyed?
The consequences for an individual depend on the details. For example, if you still understand yourself as being part of the causal chain of events, because you make decisions that determine your actions—it’s just that your decisions are in turn determined by psychological factors like personality, experience, and intelligence—your sense of agency may remain entirely unaffected. The belief could even impact your decision-making positively, e.g. via a series of thoughts like “my decisions will be determined by my values”—“what do my values actually imply I should do in this situation”—followed by enhanced attention to reasoning about the decision.
On the other hand, one hears that loss of belief in free will can be accompanied by loss of agency or loss of morality, so, the consequences really depend on the psychological details. In general, I think an anti-free-will position that alienates you from the supposed causal machinery of your decision-making, rather than one that identifies you with it, has the potential to diminish a person.
“...because you make decisions that determine your actions” I don’t know that this would fit with the idea of no free will. Surely you’re not really making any decisions.
“my decisions will be determined by my values”—“what do my values actually imply I should do in this situation” But your values wouldn’t have been decided by you.
I agree with your last sentence. I’m leaning towards, “If we do not have free will, people should not be told about it.” (Assuming the “proof” of no free will eliminates any possibility of constructing selves that do have free will because in that case I would want us to build them and “move into” those bodies.)
This sounds like “epiphenomenalism”—the idea that the conscious mind has no causal power, it’s just somehow along for the ride of existence, while atoms or whatever do all the work. This is a philosophy that alienates you from your own power to choose.
But there is also “compatibilism”. This is originally the idea that free will is compatible with determinism, because free will is here defined to mean, not that personal decisions have no causes at all, but that all the causes are internal to the person who decides.
A criticism of compatibilism is that this definition isn’t what’s meant by free will. Maybe so. But for the present discussion, it gives us a concept of personal choice which isn’t disconnected from the rest of cause and effect.
We can consider simpler mechanical analogs. Consider any device that “makes choices”, whether it’s a climate control system in a building, or a computer running multiple processes. Does epiphenomenalism make sense here? Is the device irrelevant to the “choice” that happens? I’d say no: the device is the entity that performs the action. The action has a cause, but it is the state of the device itself, along with the relevant physical laws, which is the cause.
We can think similarly of human actions where conscious choice is involved.
Perhaps you didn’t choose your original values. But a person’s values can change, and if this was a matter of self-aware choice between two value systems, I’m willing to say that the person decided on their new values.
Something is making decisions, is it not? And that thing that makes the decisions is part of what you would normally describe as “you.” Everything still adds up to normality.
It can be detrimental, though, to communicate certain subsets of true things without additional context, or in a way that is likely to be misinterpreted by the audience. Communicating truth (or at least not lying) is more about the content that actually ends up in people’s heads than about the content of the communication itself.
I also sleep and my heart beats, but “I” don’t get to decide those things, whereas free will implies “I” get to make day-to-day decisions.
I don’t think I’m 100% following the second-to-last sentence. Are you saying it’s detrimental to disregard the debate over whether we have free will?
The chain of causality that makes your heart beat mostly goes outside your consciousness. (Not perfectly, for example if you start thinking about something scary and as a consequence your heart starts beating faster, then your thought did have an impact. But you are not doing it on purpose.)
The chain of causality that determines your day-to-day decisions goes through your consciousness. I think that makes the perceived difference.
That doesn’t change the fact that your consciousness is ultimately implemented on atoms which follow the laws of physics.
Personally the idea of no free will doesn’t negatively impact my mental state, but I can imagine it would for others, so I’m not going to argue that point. You should perhaps consider the positive impacts of the no-free-will argument; I think it could lead to a lot more understanding and empathy in the world. It’s easy for most to see someone making mistakes such as crime, obesity, or just being extremely unpleasant and blame/hate them for “choosing” to be that way. If you believe everything is determined, I find it’s pretty easy to re-frame it as someone who was just unlucky enough to be born into the specific situation that led them to this state. If you are yourself successful, instead of being prideful of your superior will/soul, you can be humble and grateful for all the people and circumstances that allowed you to reach your position/mental state.
That is true, but I think it would lead to net complacency… Let’s hope that if we ever do find out definitively that we lack free will, and humanity accepts it, people take the view you describe here!
Mostly agree; however, I think it unnecessarily muddies the water to take the concept of free will, which exists on a gradient throughout nature rather than as an either/or (binary) concept, and then attempt to answer this non-binary question with a binary answer of “either/or”.
It’s like poking around trying to find out how a square answer can fit into the round hole of the question.
A round question can only have a round answer. A question on a topic that exists on a gradient may only accurately be answered with an answer that also exists on a gradient. You cannot logically mix the two in any order and expect an accurate answer.
At least that’s my opinion, I could be wrong. ---Tapske...
I’m afraid I don’t understand this. If we do not have free will, then which things we believe, which errors we mistake for truth, is not a choice.
True, I’ll rephrase. If we do not have free will, would it be beneficial for our belief in free will to be destroyed? If you were a divine operator with humanity’s best interests at heart, would you set up the causal chain of events to one day reveal to humans that they do not have free will?
You would need to make sure that there is no misunderstanding. Otherwise you would be communicating something other than you intended.
So, considering that the debate on this topic is typically full of confusion, the answer is probably: no.
If we assume that locus of control is a proxy for the perception of, or belief in, free will, then belief in free will does appear to have certain beneficial effects. But it seems like a moot point anyway, because what was gonna happen was gonna happen anyway, right?
8th-grade female physics students who were given “attribution retraining” showed “significantly improved performances in physics” and favourable effects on motivation.
Ziegler, A., & Heller, K. A. (2000). Effects of an attribution retraining with female students gifted in physics. Journal for the Education of the Gifted, 23(2), 217–243.
Among seventh graders in a (frankly euphemistically titled) “urban junior high school”, researchers found support for an association between locus of control and academic achievement.
Diesterhaft, K., & Gerken, K. (1983). Self-Concept and Locus of Control as Related to Achievement of Junior High Students. Journal of Psychoeducational Assessment, 1(4), 367–375. https://doi.org/10.1177/073428298300100406
Among widows under the age of 54, socioeconomic status and locus of control were found to impact depression and life satisfaction “independently”: the more internal the locus of control, the better the life satisfaction and the lower the chance of depression.
Landau, R. (1995). Locus of control and socioeconomic status: Does internal locus of control reflect real resources and opportunities or personal coping abilities? Social Science & Medicine, 41(11), 1499–1505. https://doi.org/10.1016/0277-9536(95)00020-8
Personally, my pet theory is that the “Law of Attraction” probably is effective. Not because of any pseudo-Swedenborg/Platonic metaphysics about the nature of thought, but from a motivational perspective people who are optimistic will have a “greater surface area for success”, because they simply don’t give up that easily.
Free will : A topic I have pondered deeply over the years.
Firstly, like almost everything else in this 4-dimensional existence, “free will” is not a binary concept. It is NOT either/or. It is on a gradient.
ALL mammals display traits of free will to varying degrees. The more innate instincts a species has, the less its free will; the fewer instincts an animal has, the more free will it can express.
No mammal has zero free will, and no mammal has 100% free will, not humans, not any mammal.
So the idea of free will being “destroyed” is a non-starter. It can perhaps be diminished, but never destroyed.
For those who believe we have 100% free will, ask yourself a couple of questions.
Can you willingly hold your breath till you die? No; you would pass out and begin breathing, against your will.
If you walk around a corner and I yell “BOO”, did you jump because you decided to, or were your actions dictated by instincts that had nothing to do with free will?
Same if I poke you with a straight pin: did you decide to draw back, or was it automatic?
No one, and no thing has total free will.
At least that’s my opinion, I could be wrong.
---Tapske...
I’d like to share a book recommendation:
“Writing for the reader”
by O’Rourke, 1976
https://archive.org/details/bitsavers_decBooksOReader1976_3930161
This primer on technical writing was published by Digital Equipment Corporation (DEC) in 1976. At the time, they faced the challenge of explaining how to use a computer to people who had never used a computer before. All of the examples are from DEC manuals that customers failed to understand. I found the entire book delightful, insightful, and mercifully brief. The book starts with a joke, which I’ve copied below:
I think the little scrollbar on mobile on the right side of the screen isn’t very useful, because its position depends on the length of the entire page, including all comments, whereas what I want is an estimate of how much of the article is left to read. I wonder if anyone else agrees.
I agree, but that’s controlled by your browser, and not something that (AFAIK) LessWrong can alter. On desktop we have the TOC scroll bar, that shows how far through the article you are. Possibly on mobile we should have a horizontal scroll bar for the article body.
AI interpretability can assign meaning to states of an AI, but what about process? Are there principled ways of concluding that an AI is thinking, deciding, trying, and so on?
I have not seen much written about the incentives around strategic throttling of public AI capabilities. Links would be appreciated! I’ve seen speculation and assumptions woven into other conversations, but haven’t found a focused discussion on this specifically.
If knowledge work can be substantially automated, will this capability be shown to the public? My current expectation is no.
I think it’s >99% likely that various national security folks are in touch with the heads of AI companies, 90% likely they can exert significant control over model releases via implicit or explicit incentives, and 80% likely that they would prevent or substantially delay companies from announcing the automation of big chunks of knowledge work. I expect a tacit understanding that if models which destabilize society beyond some threshold are released, the toys will be taken away. Perhaps government doesn’t need to be involved, and the incentives support self-censorship to avoid regulation.
This predicts public model performance which lingers at “almost incredibly valuable” whether there is a technical barrier there or not, while internal capabilities advance however fast they can. Even if this is not happening now, this mechanism seems relevant to the future.
A Google employee might object by saying “I had lunch with Steve yesterday, he is the world’s leading AI researcher, and he’s working on public-facing models. He’s a terrible liar (we play poker on Tuesdays), and he showed me his laptop”. This would be good evidence that the frontier is visible, at least to those who play poker with Steve.
There might be some hints of an artificial barrier in eval performances or scaling metrics, but it seems things are getting more opaque.
Also, I am new, and I’ve really been enjoying reading the discussions here!
I am curious if the people you encounter in your dreams count as p-zombies or if they contribute anything to the discussion. This might need to be a whole post or it might be total nonsense. When in the dream, they feel like real people and from my limited reading, lucid dreaming does not universally break this. Are they conscious? If they are not conscious can you prove that? Accepting that dream characters are conscious seems absurd. Coming up with an experiment to show they are not seems impossible. Therefore p-zombies?
idk about you, but the characters in my dream act nowhere near how real people act, I’m just too stupid in my dreams to realize how inconsistent and strange their actions are.
They certainly act weird, but not universally so, and no weirder than you act in your own dreams; perhaps not even weirder than someone drunk. We might characterize those latter states as being unconscious or semi-conscious in some way, but that feels wrong. Yes, I know that dreams happen when you’re asleep and hence unconscious, but I think that is a bastardization of the term in this case. Also, my intuition is that if someone in real life acted as weirdly as the weirdest dream character did, that would qualify them as mentally ill, but not as a p-zombie.
Greetings all. My first visit; not sure where to put this general info, so I will start here and take guidance from participants if there is a better thread.
I stumbled on this site after a friend suggested I research “Roko’s”. An interesting thought experiment; I enjoyed it, but nothing worth losing sleep over. Would be happy to discuss.
I am about a year into a manuscript (200 pages so far) dealing with all aspects of cognitive problem solving, via psychological self-awareness, and how to debate and discuss issues with an understanding of our (humans’) “default” mental and emotional “settings”, which prevent enlightenment.
The two most common being:
We are all predisposed to think in binary terms: either/or, black and white, good or bad, etc. This is counterproductive to accurate conclusions/assessments. A more accurate truth is: other than very few “base principles”, almost nothing in this 4-dimensional existence is truly binary. Almost everything is on a gradient. The problem with the auto-binary approach is that it suggests “absolutes” where none exist. It takes intentional mental effort to avoid this conceptual trap.
We are all predisposed to think in linear terms (beginning, middle, end), when in truth the overwhelming majority of things in this 4-dimensional existence are cyclical, not linear.
*** What this means to the average Joe living his life: the majority of problems, situations, and questions you will ever have are most likely non-binary. If you attempt to solve a non-binary question with a binary state of mind, or a binary answer, you will NOT be “less wrong”. Square peg, round hole.
The same goes for attempts to solve a cyclical question with a linear mindset, or a linear answer; it simply cannot be done accurately.
There are plenty of accurate statements of “absolute”; those are easy (with sentence modifiers). Then there are some statements that seem absolute, but aren’t until you add modifiers. E.g.: “The speed of light is a constant.”
While this is true, it is NOT the accurate truth, and therefore NOT a constant. It needs a modifier to reach that level. E.g.: “The speed of light, in the vacuum of space, is a constant.” NOW you have “a truth”, “a constant”, a “solid base” from which further analysis either will or will not be supported.
*** For those who are of the opinion that there are NO absolutes, please understand, in order for you to affirm that, you would have to use a statement of absolute, thereby nullifying the very point you are trying to make.
The trick… the really difficult (and, for me, fun) thing is to identify statements of absolute with zero modifiers… that’s the challenge 😀.
That’s about 0.1% of the subject matter I am writing about.
I am also quite comfortable discussing political or U.S. constitutional issues. I am not emotionally invested in them, so a logical discussion is in my wheelhouse. (Freedom of speech, the 2nd Amendment, abortion rights, whatever.)
Fair winds to all, ---Tapske...
Contrary to what Wikipedia suggests, the people who enjoy discussing this topic on Less Wrong are mostly the newcomers who arrived here after reading Wikipedia. But we have a wiki page on the topic.
Another danger is that people who want to go beyond the binary often fall into one of the following traps:
Unary—“everything is unknowable”, “everything is relative”, etc.
Ternary—there are three values: “yes”, “no”, and “maybe”, but all the “maybe” values are treated as the same.
That is not a frequent topic here, for reasons. Maybe ACX is a better place for that.
The title of this thread breaks the open thread naming pattern; should it be Fall 2025, or should we be in an October 2025 thread by now? Moving to monthly might be nice for the more frequent reminder.
It looks like last year it was Fall, and the year before it was Autumn.
Ah, I think perhaps I was misreading the title as August instead of Autumn. If that is the case, I prefer ‘Autumn’ :)