Sure, some of those. But also I just expected parenthood to change me a bunch to be better suited to it. Like, it’s a challenge such that rising to it transforms you. With babysitting you’re just skipping to a random bit pretty far into the process, not already having been transformed.
As a data point: I was extremely confident I wanted kids, didn’t especially vibe with most babies, and had never changed a diaper before my kid was born; so far, my confident prediction has been, if anything, an underestimate. I doubt a week of babysitting would have changed my intent whatsoever, but it would probably have been stressful and not that fun.
As an audience member, I often passively judge people for responding to criticism intensely or angrily, or judge both parties in a long and bitter back and forth, and basically never judge anyone for not responding.
When I’ve responded to criticism with “oh, thanks, hadn’t thought of that”, I haven’t really felt disapproved of or diminished. Sometimes the critic is just right, and sometimes they’re just looking at the topic from another angle, and it’s fine for readers to decide whether they like that angle better than mine. No big deal. I don’t really see evidence that anyone’s tracking my status that hard. I’d rather make sure nobody’s tracking me being unkind, though, including myself.
(This comment is offered up as a data point from the peanut gallery. I have no idea if it’s representative! If you reply, it may make me happy, but if not I won’t mind.)
It’s grown on me as I’ve edited more technical stuff; having grown up in the US literary tradition, I always found it elegant to put punctuation inside, though now that I read more stuff like LessWrong it does feel kinda imprecise/hand-wavey somehow. Like, wait a minute, that guy you’re quoting didn’t put a comma in there!
Yeah, I messed around with Typst for the first time recently. There’s a whole dang world out there!
Yeah, for me it solves itself instantly once I actually notice it in any given case. I mostly try not to think of myself as an expert at stuff (luckily for that, I rarely am!), but there are weird psychological incentives with being a professional at stuff, I think.
Having kids is a huge part of it for me. Came by it by absorbing a philosophy where it mattered a lot from my parents, which I liked and accepted; I’ve always aspired to be pretty similar to them. Now I’ve got a baby and yep, it’s as I hoped or better.
I value many other things too, which have been pretty stable since high school. Helping the global poor somewhat, creating art, having friends, eventually getting married and being a good spouse, and some idiosyncratic imagination stuff. These things have all seemed obviously good to me for so long I don’t really know where they came from. Just following gradients as a kid and imprinting on (I think good) role models, maybe. I think I endorse it all, though.
I doubt this helps your particular quest, but that’s my answer, at least right now!
I haven’t noticed anyone else come out and say it here, and I may express this more rigorously later, but, like, GPT-5 is a pretty big bear signal, right? Not just in terms of benchmarks suggesting a non-superexponential trend, but also, to name a few angles/anecdata:
It did slightly worse than o3 at the first thing I tried it on (with thinking mode on)
It failed to one-shot a practical coding problem that was entirely in JavaScript, which is generally among its strengths (and the problem was certainly solvable in principle)
It’s hallucinating pretty obviously when I give it a document of 100 pages or so to read and comment on (it references lots from the document, but gets many details wrong and over-fixates on others)
It’s described as a family of models that automatically picks the best one for the situation, which seems like exactly what you’d do if nothing else was really giving you the sauce
The main Reddit reaction seems to be that the demo graphs were atrocious, which is not exactly glowing praise
All the above, paired with the fact that this is what they chose to call GPT-5, and that Claude’s latest release was a well-named and justified 0.1 increment
I’m largely with Zvi that even integrating this stuff as it already exists into the economy does some interesting stuff, and that we live in interesting times already. But other than what’s already priced in by integrations and efficiency optimizations, progress feels s-curvier to me today than it did a week ago.
30 people is the aim!
I think the main disanalogy with startup incubators is that we’re not necessarily betting that successful bloggers will increase their earning power; probably the modal respondent (given the community skews tech) can already make as much money as a top 1% blogger (though, to be clear, not necessarily a top 0.1% blogger).
I think the better analogy is to a meditation retreat or athletic conditioning program, or something like that, which afaict pretty often charge (or at least have a strongly suggested donation, in the meditation case); part of the point of charging (other than the obvious reason of just paying for stuff) is to make sure people are really bought in/personally motivated to get as much out of it as possible.
(I am helping with this event, but I don’t speak for Ben here)
Thoughts on how the sort of hyperstition stuff mentioned in nostalgebraist’s “the void” intersects with AI control work.
Yeah, I agree this is more “thing to try on the margin” than “universally correct solution.” Part of why I have the whole (controversial) preamble is that I’m trying to gesture at a state of mind that, if you can get it in a group, seems pretty sweet.
Ah, we may just have different definitions of rich, or perhaps I’m a bit of a spendthrift! Or, I suppose, I might just go to cheaper restaurants. I’m thinking of checks in the, like, $150-$200 range for the party, which isn’t nothing but as an occasional splurge doesn’t really faze me. I guess if you do it 5x per year on a $50k household income (about the local median in my city, I think) that’d be about 2% of gross income. Not cheap, but also not crazy, at least for my money intuitions.
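To spell out the arithmetic behind that 2% figure (taking the top of the range):

$$5 \times \$200 = \$1{,}000 \text{ per year}, \qquad \frac{\$1{,}000}{\$50{,}000} = 2\%$$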
Do you think occasionally buying a meal for a small group requires being rich? I don’t think I’m rich, but I can manage it without much strain. At least occasionally!
Thank you for the book recommendation! It seems likely I would find it interesting.
For LessWrong posts specifically, there’s the feedback service.
This isn’t personalized, but I also have suggestions for people in the general LessWrong cluster here.
Yep! I use 4 Opus near-daily. I find it jumps ahead in consistent ways and to consistent places, even when I try to prod it otherwise. I suppose it’s all subjective, though!
I actually think LLM “consistency” is among the chief reasons I currently doubt they’re conscious. Specifically, because it shows that the language they produce tends to hang out in certain attractor basins, whereas human thought is at least somewhat more sparse. A six-year-old can reliably surprise me. Claude rarely does.
Of course, “ability to inflict surprise” isn’t consciousness per se (though cf. Steven Byrnes on vitalistic force!), nor (probably) necessary for it. Someone paralyzed with no ability to communicate is still conscious, though unlikely to cause much surprise unless hooked up to particular machines. But that LLMs tend to gravitate toward a small number of scripts is a reminder, for me, that “plausible text that would emanate from a text producer (generally a person)” is what they’re ruthlessly organized to create, without the underlying generators that a person would use.
Some questions that feel analogous to me:
If butterflies bear no relation to large predators, why do the patterns on their wings resemble those predators’ eyes so closely?
If that treasure-chest-looking object is really a mimic, why does it look so similar to genuine treasure chests that people chose to put valuables in?
If the conspiracy argument I heard is nonsense, why do so many of its details add up in a way that the mainstream narrative’s details don’t?
Basically, when there’s very strong optimization pressure toward Thing A, and Thing A is (absent such pressure) normally evidence of some other thing, then in this very strong optimization pressure case the evidentiary link breaks down.
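One rough way to formalize this (a sketch, not a claim about any particular case): the evidence observing Thing A gives you is the likelihood ratio between “other thing present” and “other thing absent”. Absent optimization pressure, the denominator is tiny, so A is strong evidence. A strong optimizer pushes the denominator up toward the numerator:

$$\frac{P(A \mid \text{other thing})}{P(A \mid \text{optimizer},\ \text{no other thing})} \approx 1 \quad\Rightarrow\quad P(\text{other thing} \mid A) \approx P(\text{other thing})$$

i.e. once the ratio collapses toward 1, observing A barely moves you off your prior.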
So, where do we go from here? I’m not sure. I do suspect LLMs can “be” conscious at some point, but it’d be more like, “the underlying unconscious procedures of their thinking are so expansive that characters spun up inside that process themselves run on portions of its substrate and have brain-like levels of complexity going on”, and probably existing models aren’t yet that big? But I am hand-waving and don’t actually know.
I will be much more spooked when they can surprise me, though.
I guess I was imagining an implied “in expectation”: like, predictions about second-order effects of a certain degree of speculativeness are inaccurate enough that they’re basically useless, and so shouldn’t shift the expected value of an action. There are definitely exceptions, and it’d depend how you formulate it, but “maybe my action was relevant to an emergent social phenomenon containing many other people with their own agency, and that phenomenon might be bad for abstract reasons, but it’s too soon to tell” just feels like… you couldn’t have anticipated that without being superhuman at forecasting, so you shouldn’t grade yourself on the basis of it happening (at least for the purposes of deciding how to motivate future behavior).
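As a sketch of that “in expectation” reading: if your forecast $\hat{X}$ of some speculative second-order effect $X$ is noisy enough to be essentially uncorrelated with the real thing, then

$$\mathbb{E}[X \mid \hat{X}] \approx \mathbb{E}[X] \quad \text{when } \mathrm{Cov}(X, \hat{X}) \approx 0$$

(exactly so in the jointly Gaussian case), so conditioning on the forecast shouldn’t move the expected value of your action, and it shouldn’t get much weight when grading the decision afterward either.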
I think there’s a weak moral panic brewing here around LLM usage, leading people to jump to conclusions they otherwise wouldn’t and assume “xyz person’s brain is malfunctioning due to LLM use” before considering other likely options. As an example, someone on my recent post implied that the reason I didn’t suggest using spellcheck for typo fixes was that my personal usage of LLMs was unhealthy, rather than (the actual reason) that using the browser’s inbuilt spellcheck as a first pass seemed so obvious to me that it didn’t bear mentioning.
Even if it’s true that LLM usage is notably bad for human cognition, it’s probably bad to frame specific critique as “ah, another person mind-poisoned” without pretty good evidence for that.
(This is distinct from critiquing text for being probably AI-generated, which I think is a necessary immune reaction around here.)
One thing is, babies very gradually get harder in exactly the way you describe! Like, at first by default they breastfeed and don’t have teeth, and that stage is at the very least highly instinctive to learn. Then they eat a tiiiny bit of solid food, like a bite or two once a day, to train you. So you have gotten way stronger at “baby eating challenges” by the time the baby can e.g. throw food. Likewise, they’ll very rarely try to put stuff in their mouths early on, then really gradually more and more, so you hone that instinct too. Even diapers don’t smell bad the first couple of months! Hard to overestimate the effects of the extremely instinct-compliant learning curve.