Open Thread Winter 2025/26
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you’re new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Hello All,
New to LW and still reading through the intro material and getting the hang of the place. I am ashamed to admit I found this place through Reddit—ashamed because I despise Reddit and other social media.
I came here because I cannot find a place to engage in long-form discussions about ideas contrary to my own. I dream of a free speech platform where only form is policed, not content: any idea could be voiced, no matter how fringe, as long as it adheres to agreed-upon epistemic standards.
Anyways, I know LW probably is not that place, but it is adjacent. It seems most people here want to discuss AI research, but I'm hoping to find some communities outside of that topic.
Hello and welcome! There are a few of us around who discuss things other than AI research, myself among them. I suggest looking at the filtering options for the front page; it's the gear icon next to Latest, Enriched, Recommended, and Bookmarks. I filter the AI tag down pretty heavily.
If you want to lean into voicing fringe ideas around here, I’d suggest reading the LessWrong Political Prerequisites and maybe Basics of Rationalist Discourse. They’re not universally agreed upon, but I think they do make for a decent pointer to the local standards.
In case folks missed it, the Unofficial LessWrong Community Census is underway. I'd appreciate it if you'd click through, take the survey, and help my quest for truth: specifically, truth about what the demographics of the website userbase look like, what rationality skills people have, whether Zvi or Gwern would win in a fight, and many other questions! Possibly too many questions, but don't worry, there's a question about whether there are too many questions. Sadly there isn't a question about whether there are too many questions about whether there are too many questions (yet; growth mindset), so those of you looking to maximize your recursion points will have to find other surveys.
If you’re wondering what happens to the data, I use it for results posts like this one.
I feel like the react buttons are cluttering up the UI and are distracting. Maybe they should be restricted to, e.g., users with 100+ karma, with everyone getting only one react a day or something?
Like they are really annoying when reading articles like this one.
Yeah, I agree with this. I think they are generally decent on comments, but some users really spam them on posts. It’s on my list to improve the UI for that.
Do you have any thoughts on those UI improvements written down anywhere?
I’ll admit to being one of the users that really spams reactions on posts. I like them as a form of highlighting for review and as a form of backchannel communication. I would be much happier if people would use more reacts towards me. So I would be upset with UI modifications to restrict reacts, but fully support updates to make the UI around viewing reacts cleaner and more useful.
I wrote a longer comment with some feature suggestions. If you have time it would be nice to hear your thoughts.
Part of it is the “vulnerability” where any one user can create arbitrary amounts of reacts, which I agree is cluttering and distracting. Limiting reacts per day seems reasonable (I don’t know if 1 is the right number, but it might be, I don’t recall ever react-ing more than once a day myself). Another option (more labor-intensive) would be for mods to check the statistics and talk to outliers (like @TristanTrim) who use way way more reacts than average.
[EDIT: I think issues stem from different people using reacts in different ways and having different assumptions about their use. I think I am probably using them in a less common way than other people, but I also find myself believing I am using them in a better way than other people. As such, I am trying to put in effort to communicate my POV. I would appreciate if anyone who disagrees with me would do so with a higher bandwidth signal than just pressing the “Agreement: Downvote” button. Perhaps by using some inline reacts on my comment?]
Haha! Sorry if I’m bothering anyone! ☺♡
I really like reacts and am bothered in essentially the opposite direction from Sodium, in that I think reacts are a very useful backchannel communication, and I see it as a minor moral failing that most users do not use them more.
I think it's great that many reacts are based on LW ideals for discourse. I don't know exactly how they are managed, but I think they could be even more valuable if there were some team that reviewed how people currently use them and then improved and updated react descriptions and usage guides based on that. A descriptivist approach.
I think a prescriptive approach would also be good. People should be suggesting concepts for reacts that they think would be valuable for communication, and people should be figuring out how to promote proper use of reacts.
I do agree that relevance may be an issue. I would like it if everyone would drop ~10 reacts while reading a post, but then, if all of them showed in the UI, it would be too noisy to make sense of easily. I think there are a few ways around this:
[EDIT: I've discovered that on the triple-dot menu it is already possible to set inline reactions to hide all, hide downvoted, or show all. I think a plausible, sane modification would be to make the default hide reacts with fewer than 2 or 3 votes and always show reactions to highlighted text. However, I think some kind of more complicated scheme could be better.]
- Allow users to toggle all reacts on/off. This would be easy to implement and would help with the current problem of some users finding reacts distracting, but it would not help if more people react more.
- Change the opacity of reacts based on the sum of user karma, so people are not distracted by the opinions of relatively unknown people like myself unless lots of other people agree.
- Make react visibility "subscription" based, so you would only see the reactions of people you are subscribed to.
  - The "subscription" could be modified for your own posts, if you want to see all the reactions to what you have posted but only particularly relevant reactions while reading other people's posts.
  - The "subscription" could be modified with a "karma threshold", so you would also see reactions by any user with sufficient karma.
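Taken together, the subscription and karma-threshold ideas amount to a simple visibility filter. A minimal sketch of what that might look like (all names, fields, and numbers here are hypothetical illustrations, not actual LessWrong code):

```python
from dataclasses import dataclass

@dataclass
class React:
    user: str
    user_karma: int
    vote_count: int      # how many users have upvoted this react
    on_highlight: bool   # attached to highlighted text in the post

def visible_reacts(reacts, subscriptions, karma_threshold=1000, min_votes=2):
    """Return the subset of reacts to show by default.

    A react is shown if any of these hold:
      - its author is someone the viewer subscribes to,
      - its author's karma clears the threshold,
      - it is attached to highlighted text,
      - enough other users have upvoted it.
    """
    return [
        r for r in reacts
        if r.user in subscriptions
        or r.user_karma >= karma_threshold
        or r.on_highlight
        or r.vote_count >= min_votes
    ]
```

The same predicate could instead feed an opacity value (e.g., scale opacity with `user_karma` plus `vote_count`) rather than a hard show/hide cutoff.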
Another issue: I don't know if this is the case, but if each react on your comment or post shows up as its own entry in the notifications list, that would be annoying, because it would make it hard to see the more important notifications. So reacts should probably be batched somehow, like karma is. (And really, I think a bunch of improvements could be made to the notifications UI.)
All that said, I strongly oppose restricting who can use reacts and how many reacts they can use. Rather, more people should be encouraged to use more reacts more competently and the UI for viewing / ignoring reacts should be improved.
My two cents, I’m happy with the amount of reacts I usually see and would probably enjoy about 20% more.
Thank you for chipping in your two cents!
Hello,
I’m very happy to be here!
Unfortunately I'm only just bringing LessWrong into my life, and I consider that a missed opportunity. I wish I had found this site many years ago, though that could have been dangerous: this could be a rabbit hole I might have found challenging to escape. Then again, how bad would that actually have been? I'm sure my wife would not have been thrilled.

My reason for coming here now, especially at this point in time, is unfortunately very unoriginal. In the last eight months I've taken what was a technology career possibly in its waning years into a new world of wonder and exploration, and yes, I'm talking about AI. I've been in technology for over 30 years and have certainly paid a little bit of attention to machine learning and AI over that span, but somehow I just missed what was really going on in the last two years. I think I was overwhelmed by the level of hype I was running into and how shallow it often seemed, with talk of magical prompts that would give you miraculous results, and I just assumed that things weren't really in a very good place. Yet I was very wrong, and I'm glad I didn't wait even longer to discover the true state of things, though not all of it is good.
I've been working for six months using AI all day long at my day job, using Claude Code and many other tools for development and platform engineering work. It's really been in the last couple of months that I've started to look more seriously at what I found compelling in the world of AI, and I kept coming back to one of my earliest observations, formed during my re-engagement with AI this year. It was an instinct that hit me right away after discovering what the new world of LLMs had to offer: they seemed to me very clearly, fundamentally flawed. This wasn't based on any deep understanding of how LLM training works, though it was reinforced as my understanding of the subject expanded; it first took shape as I ran extensive experiments in my use of AI to do work. I'll cut to the chase and just state that it seemed clear to me that LLMs were highly unlikely to lead to AGI, or at least AGI as I view it.
Learning and knowledge have always been very dear and important topics for me. I have never stopped picking apart my understanding and model of how learning works, at least for myself, and what makes the process more constructive, healthy, and valid. In reading some of the Sequences, though I have barely scratched the surface, it is clear this is a community I'm excited to have discovered and one I'm looking forward to participating in. While I can easily accept and be content with a new AI career that mostly involves development and engineering in the world of LLMs, my real interest lies in trying to imagine and explore the space of what, in my mind, would have an actual chance at achieving AGI. I'm not interested in just building towards a challenge (this point is relevant because I started to think that building something to compete on ARC-AGI would be a great way to learn and explore). I'm more interested in trying to work out how an AI model could not only do real learning, reaching actual comprehension, but also be capable of building its own world model, one distilled nugget of understanding at a time.
One goal of this work was to formulate this vision mostly in isolation, as a way to really stretch my mind and see where I could go on my own. But I digress; this is the direction that led me here. I was talking to a few people at a local AGI event, and they recommended that my first article on this vision would be ideal for LessWrong.
While I'm still days from having that article ready, I had an experience this morning that inspired me to write a quick article that seemed like a good first post for this site. I made sure I digested the guidelines, especially the one on LLM-generated content. I do most of my writing that involves bringing lots of pieces together with the aid of AI, mostly to help organize, make larger edits, and analyze my own writing. That was the case with the piece I wrote today and posted here. It was rejected, and while I have nothing at all critical to say about the reviewer, especially considering the workload that must be present these days, the main stated reason was the LLM policy. Put simply, the work was my content and my words. I just copied everything in this comment other than this last bit into JustDone, and it declared that it was 99% AI content, yet I wrote every word of this in real time in the comment box of this page. While I can make no claim to understand the process the moderator used to make their determination, I hope to get this figured out before I am ready to post my piece on distillation of knowledge into a world model. I fear that an old and wordy writer like myself often sounds more like an AI than a modern human. :-)
Sorry for the overly wordy first post, but I look forward to interacting and collaborating in the future!
Seems like JustDone gives abnormally high AI-content estimations. Plausibly this is to scare you into using their "text humanizer", in which an AI rewrites what you wrote to make it seem less like an AI to an AI… I weep for humanity.
I’d recommend reading and commenting until you have enough karma to submit your post to the LW editor who can more straightforwardly tell you why your post would or wouldn’t be rejected.
PS: I would like to encourage you, like everyone, to stop focusing on AI capabilities and instead focus on AI interpretability and preference encoding.
I have some time on my hands and would be interested in doing something meaningful with it. Ideally learn / research about AI alignment or related topics. Dunno where to start though, beyond just reading posts. Anyone got pointers? Got a background in theoretical / computational physics, and I know my way around the scientific Python stack.
Hello, I am an entity interested in mathematics! I’m interested in many of the topics common to LessWrong, like AI and decision theory. I would be interested in discussing these things in the anomalously civil environment which is LessWrong, and I am curious to find out how they might interface with the more continuous areas of mathematics I find familiar. I am also interested in how to correctly understand reality and rationality.
Hi!
What sorts of mathematics are you interested in? I’m interested in topology and manifolds which I hope to apply to understanding the semantics of latent spaces within neural networks, especially the residual stream of transformers. I’m also interested in linear algebra for the same reason. I would like to learn more about category theory, because it seems interesting. Finally, I like probability theory and statistics because, like you, I’d like to “correctly understand reality and rationality”.
Hello! I chose the name “derfriede” for LW. This is my first post here, which I am happy about. I have read some of the introductory materials and am very interested.
What interests me? First of all, I want to explore the topic of AI and photography. I study the theory and philosophy of photography, look for new approaches, and try to apply a wide variety of perspectives. I think it’s useful to address the question of what AI cannot do. It’s very similar to researching glitch culture. Okay, I’ll stop here for now, because I just want to get acquainted.
Have a nice day, wherever you are!