see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
dirk
I see I’ve been scooped on Barrayar; I concur with the rec and add that the child emperor also spends some time onscreen.
The Inda series by Sherwood Smith, which I read recently, doesn’t feature the mainline protagonists having children during events (well, actually, now that I think of it, some of them do get pregnant during, but mostly the kids don’t get born until things are wrapping up), but it does have a lot of parent characters who engage in the various bloody battles, political intrigues, etc. that their political situation requires and are otherwise well fleshed out. (There’s also normalized polyamory.) It’s incredibly long, so you’ll be waiting awhile for any specific feature, but by that same token there are lots of different parent-child relationships. That said, it’s centrally about childless-for-most-of-it Inda, so YMMV.
Because it’s against LW policy:
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant. [emphasis mine]
But why listen to me when you could listen to the pocket PhD?
Why do some people think that’s bad? Roughly:
It breaks the social contract of discussion.
On forums like LW/StackExchange/etc, the implicit deal is: “You are reading my thoughts.”
If you post raw model output, readers are actually getting: “Here is a generic sample from a text generator, lightly prompted by me.”
That feels like misrepresentation, especially if not disclosed.
It’s extremely cheap spam.
Human-written comments cost time and attention.
Model-written comments are nearly free and can be produced in unlimited quantity.
If everyone does that, discussion quality drowns in fluent but shallow text. Downvoting “ChatGPT-y” comments is partly a defense mechanism against being flooded.
Low epistemic reliability.
Models confidently hallucinate, oversimplify, or miss key cruxes.
When a human writes, they can be challenged: “Why do you believe that?” and they (usually) have some model of the world behind it. With a raw LLM comment, there often isn’t a stable belief or understanding behind the words—just next-token prediction. That undermines the goal of rigorous reasoning.
Skill atrophy and shallow engagement.
If you mostly outsource your arguing/thinking to a model, you don’t get better at reasoning or writing. From the community’s perspective, you’re contributing less original thought and more “generic internet essay”.
Style + content are often generic.
LLM text has a distinctive “smooth, polite, yet vague” feel. People go to niche forums for idiosyncratic, deeply-thought comments, not for something they could get by clicking “generate” themselves.
Why the “why walk when you can bike?” analogy doesn’t quite fit
Biking vs walking:
Both are you moving under your own power.
Biking just makes you faster/more efficient.
Using raw AI output is more like:
Sending a delivery robot to a meetup in your name and letting it talk for you.
You gave it the address and a topic, but you don’t fully control what it says moment-to-moment.
Using AI as a tool (drafting, brainstorming, checking math, summarizing sources) and then carefully editing, fact-checking, and putting your own reasoning into the result is more like using a bike or calculator.
Dumping unedited Claude/ChatGPT output as a comment and treating it as “your contribution” is what people are objecting to.
So: it’s not that “biking” (using AI tools) is inherently bad; it’s that outsourcing the whole comment to the AI and presenting it as your own thought breaks norms around effort, honesty, and epistemic quality, and communities push back on that.
I don’t have that option myself, as someone without existing sequences. However, a google turned up https://www.lesswrong.com/sequencesNew , which seems to do the trick.
Oh, definitely! But that’s how users who want it to e.g. help with their physics theories or pretend it’s in love with them typically act.
I think, when someone feels negatively toward a post, that choosing to translate that feeling as “I think this conclusion requires a more delicate analysis” reflects more epistemic humility and willingness to cooperate than does translating it as “your analysis sucks”. The qualifier, first of all, requires you to keep in mind the fact that your perceptions are subjective, and could be incorrect (while also making it clear to other people that you’re doing so). Trying to phrase things in ways that are less than maximally rude is cooperative because it makes interacting with you more pleasant for the other person.

Using words that aren’t strongly valenced and leave the possibility open that the other person is right also means that your words, if believed, are likely to provoke a smaller negative update about the other person; you do increase your credibility by doing so, but I’m skeptical that this cancels out that effect. (Also: it’s impossible not to make decisions about how you phrase things in order to communicate your intended message, and given that this is impossible, I think condemning the choice to phrase things more nicely is pretty much the opposite of what one should do.)

As for the part where it makes you look good, the other person can look equally good simply by being equally polite. Of course if they respond with insults this might be bad for their image, but being polite makes insults less tempting for the typical interlocutor.
If I open a post in the modal, and then click on the title of the post (which is a link to the post itself), this closes the modal. I expected it to open the non-modal version of the post in the same tab, and would prefer this.
Good for you. I think you’re stupid.
I really think it’s more about having autism-typical personality traits, which do often play badly with conventional schooling but aren’t particularly caused by it.
Sure. Goodness knows we don’t need to redebate creationism every time. But softening your phrasing isn’t sneering. It’s acting to make your words less upsetting and more pleasant to hear. That is very nearly the opposite of sneering. (It also, in cases where you insert qualifiers, has the valuable effect of making you look more reasonable should you happen to be wrong.)
No, it isn’t. It is possible to disagree with people on the object level. I realize that there exist people who cannot descend below simulacrum level three, but the world is not filled with them.
Using polite phrasing instead of rude phrasing to communicate your issues with a post isn’t veiled sneering; it’s exercising manners. It’s frustrating to hear that you’re reading these emotional overtones onto perfectly normal ways to soften one’s phrasing.
EDIT: Also, acknowledging that you might be wrong has important semantic differences from not doing that.
I tried that prompt myself and it didn’t replicate (either time); until the OP provides a link, I think we should be skeptical of this one.
But she said the same things in her original comment as in that reply, just with less detail. Nikola did reply with that, presumably because Nikola believes we’re all doomed, but Nina did say in her original comment that she thinks Nikola is way overconfident about us all being doomed.
You didn’t say “you didn’t say your probability was <1%”; you said “You should’ve said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong.” However, the fact that she has a very different perspective on AI risk than the OP or most people on lesswrong was evident from the fact that she stated as much in the original comment. (It was also clear that she didn’t think superintelligence would be built within 20 years, because she said that, and that she didn’t think superintelligence was likely to kill everyone, because she said that too.)
She disclosed that she disagreed both about superintelligence appearing within our lifetimes and about x-risk being high. If you missed that paragraph, that’s fine, but it’s not her error.
She did say this in her original comment. And it’s not really similar to denying the black death, because the black death, crucially, existed.
If my parents had known in advance that I would die at ten years old, I would still prefer them to have created me.
There are actually quite a few more, though most of them feature her being isekaied elsewhere; https://glowfic.com/characters/12823?view=posts should show you ~all of them.
When I scroll down a little bit, the text on the feed preview for 2025 Prediction Thread gets wider so it clips off the background:
Well you see. If I couldn’t have a good conversation with someone I would not be turned on.