India is a… large country. Without specifying which part you intend to visit, you might receive the equivalent of a recommendation to visit the best restaurant in Portugal while you’re actually in Amsterdam! I used to live there, though by the eastern side where you’re unlikely to visit. Nonetheless, I will happily vet or suggest places if you know your itinerary.
AlphaAndOmega
The phenomenon you observed has far more to do with the tiny plot sizes in most of rural India than it does with the cost of labor.
Many, if not most, farmers work plots so small that, even if they're above mere subsistence, they can't justify the efficiency gains of mechanization. This is not true everywhere in India: in the northwestern states of Punjab and Haryana, farms are larger and just about every farmer has a tractor. There are also cooperatives where multiple smallholders coordinate to share larger machines, like combine harvesters, which none but the very largest farmers could justify purchasing for personal use.
Optimal economic policy (in terms of total yield and efficiency) is heavily in favor of consolidating plots to allow economies of scale. This is politically untenable in much of India, hence your observation. However, it isn’t a universal state of affairs, and many other fruits of industrialization are better adopted.
So I looked it up and apparently the guideline is actually a 2-hour fast for clear liquids! 2 hours![7] The hospital staff, however, hardened their hearts. Nurses said to ask the surgeons. The surgeons said to ask the anesthesiologists. It wasn’t until 7am that the anesthesiologists said, yep, you can drink a (small) glass of water.
Ah, this takes me back to my medical officer days. No junior doctor ever got into trouble for telling a patient to fast a bit too long, but many have for having a heart and letting them cut it short. It is also likely the least consequential thing we can bother the anesthetists about, and it’s not going to kill anyone to wait longer (usually).
Knowing the local demographic and behavioral tendencies on LW, I think it might be worth noting that Ozempic/semaglutide and other GLP-1 drugs can cause delayed gastric emptying. As a consequence, even the standard fasting duration might not be adequate to fully empty your stomach. If you’re scared of getting aspiration pneumonia, it’s worth mentioning this to your surgeon or the anesthetist. The knowledge hasn’t quite percolated all the way up the chain, so you can’t just assume they’re aware.
Here’s how I parsed this:
We’re in a future not particularly long after a hard takeoff Singularity (changes to the environment described as quite sudden rather than gradual).
This is a multipolar AI scenario, given that the Origami Men claim to have purchased these humans from another entity that owned them.
The Origami Men (or the entity operating them) is severely misaligned but not to an omnicidal degree. It appears to value (bad) proxies for human wellbeing, such as ensuring that they have jobs, pets, residences and food. They either do not understand or do not care that their implementation is… subpar. We might as well call it the Uncanny Valley of Care.
They seem to have some subtler set of values, in the sense that they desire to rear only humans who do not actively breach containment, or who do not spread memes about breaking out (“contagion”). The general impetus behind the cull or reclamation is not clear to me: do they initially just pick people at random? Or does the narrator finding something “beautiful” in the people chosen correlate with some unknown preference? They will take you if you get too close to the border of the dome, but general discontent is not policed. Nor are attempts at violence against the Origami Men, though those appear entirely futile.
“Provenance”? Do they only value pre-Singularity specimens? That would explain the lack of any evidence of artificially boosting human numbers or encouraging reproduction.
>I was surprised to find out that I could not find a single neurotransmitter that is not shared between humans and mice (let me know if you can find one, though).
As far as I can tell, this is true. The closest I could find is a single subtype of receptor that a frameshift mutation in a primate ancestor made nonfunctional.
https://www.sciencedirect.com/science/article/pii/S0021925818352189
I suspect you’d have to go even further back in terms of divergent ancestry to find actual differences in the neurotransmitter substances themselves. There are definitely differences in receptors and affinities, but the actual chemicals? From a quick search, you’d have to go all the way to ctenophores, which are so different that some have posited they evolved nervous systems independently.
I presume it would be an overview of the histone molecule. DNA is often kept compacted by wrapping around one, like a rope coiled around a baseball.
I find it very hard to believe that a civilization with as much utter dominion over physics, chemistry and biology as the Culture would find this a particularly difficult challenge.
The crudest option would be something like wiping memories, or synthesizing drugs that re-induce a sense of wonder or curiosity about the world (similar to MDMA). The Culture is practically obsessed with psychoactive substances, most citizens have internal drug glands.
At the very least, people should be strongly encouraged to have a mind upload put into cold storage, pending ascendance to the Sublime. That has no downsides I can see, since a brain emulation that isn’t actively running is not subjectively different from death. It should be standard practice, not a rarity.
Even if treated purely as speculation about the “human” psyche, the Culture almost certainly has all the tools required to address the issue, if they even consider it an issue. That is the crux of my dissatisfaction: it’s as insane as a post-scarcity civilization deciding not to treat heart disease or cancer.
Even if such ennui is “natural” (and I don’t see how a phenomenon that only shows up after 5-6x the standard lifespan, assuming parity with baseline humans, can ever be considered natural), it should still be considered a problem in need of solving. And the Culture can solve just about every plausibly solvable problem in the universe!
Think of it this way, if a mid-life crisis reliably convinced a >10% fraction of the population to kill themselves at the age of 40, with the rest living happily to 80+, we’d be throwing tens of billions at a pharmacological cure. It’s even worse, relatively and absolutely, for the Culture, as their humans can easily live nigh-indefinitely.
Even if you are highly committed to some kind of worship of minimalism or parsimony, despite infinite resources, or believe that people have the right to self-termination, then at least try and convince them to make mind backups that can be put into long-term storage. That is subjectively equivalent to death without the same… finality.
This doesn’t have to be coercive, but the Culture demonstrates the ability to produce incredible amounts of propaganda on demand. As far as I’m concerned, if the majority of the population is killing itself after a mere ~0.000..% of their theoretical life expectancy, my civilization is suffering from a condition that puts standard depression or cancer to shame. And they can trivially solve it: they have incredibly powerful tools that can edit brains/minds to arbitrary precision. They just… don’t.
While the Culture is, on pretty much any axis, strictly superior to modern civilization, what personally appalls me is their sheer deathism.
If memory serves, the average human lives for around 500 years before opting for euthanasia, mostly citing some kind of ennui. What the hell? 500 years is nothing in the grand scheme of things.
Banks is careful to note that this isn’t, strictly speaking, forced onto them, and exceptions exist, be it people who opt for mind uploads or some form of cryogenic storage till more “interesting” times. But in my opinion, it’s a civilization-wide failure of imagination, a toxic meme ossified beyond help (Culture humans also face immense cultural pressure to commit suicide at an appropriate age).
Would I live in such a civilization? Absolutely, but only because I retain decent confidence in my ability to resist memetic conditioning or peer pressure. After all, I’ve already spent much of my life hoping for immortality in a civilization where it’s either derided as an impossible pipe-dream or bad for you in {hand-wavy ways}.
Another issue I’ve noted is that even though this is a strictly post-scarcity universe in the strong sense, with matter and energy freely available from the Grid, nobody expands. Even if you want to keep natural bodies like galaxies ‘wild’ for new species to arise, what’s stopping you from making superclusters of artificial matter in the enormous void of interstellar space, let alone when the extragalactic supermajority of the universe lies empty? The Culture is myopic; they, and the wider milieu of civilizations, seem unwilling to optimize even remotely, even when there’s no risk of harm from becoming hegemonizing swarms.
(Now that you’ve got me started, I’m half tempted to flesh this out into a much longer essay.)
Strong upvote. When I began reading, I slightly scoffed at the idea of “musician’s dystonia”. I’d never heard of it, and it didn’t sound plausible to me, despite my being a doctor and psych trainee. I thought it was a nice hypothetical to shore up the story’s central conceit.
And then I actually Googled it, and found a wiki article. Oh dear.
There is an important distinction to be made between base models, which are pure next-token predictors (and write in a very human-like manner), and the chatbots people normally use, despite both being called “LLMs”. The latter take a base model and subject it to further instruction tuning and Reinforcement Learning from Human Feedback (RLHF).
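To make the distinction concrete, here's a toy sketch. The role-marker tokens below are invented for illustration and are not any vendor's real chat template; the point is only that a base model continues raw text as-is, while a chat-tuned model sees the same request wrapped in a dialogue scaffold before it ever predicts a token.

```python
def base_model_input(text: str) -> str:
    # A base model is just a next-token predictor: the prompt *is* the
    # document, and the model simply continues it.
    return text

def chat_model_input(text: str, system: str = "You are a helpful assistant.") -> str:
    # Instruction-tuned chatbots are trained on templated dialogues, so the
    # raw prompt gets wrapped in role markers (hypothetical tokens here)
    # before the model sees it. Generation then continues from the
    # assistant marker, which is why the output reads as a "reply".
    return (
        f"<|system|>{system}<|end|>\n"
        f"<|user|>{text}<|end|>\n"
        f"<|assistant|>"
    )

print(base_model_input("The capital of France is"))
print(chat_model_input("What is the capital of France?"))
```

The same underlying weights, fed these two shapes of input, behave very differently, which is why "LLM" lumps together two quite distinct artifacts.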
My understanding is that a lot of the stylistic quirks of the latter arise from the fact that OpenAI (and other companies) hired Third World contractors, either directly or through outsourcing firms. They were cheap and reasonably fluent in English. However, a large number were Nigerian, and this resulted in models slightly biased towards Nigerian English. Nigerian English speakers are far fonder of “delve” than Western English speakers, but most people wouldn’t know that, because how often do they read Nigerian literature or media? The word is particularly rampant in Nigerian business and corporate comms.
Then you have other issues stemming from RLHF itself. Certain aspects of the generic chatbot persona became deeply ingrained, as preference feedback biased models towards verbosity and enthusiasm. And now that the output of ChatGPT 3.5 onwards is all over the wider internet, new base models expect, to an extent, that that’s how a chatbot talks. This likely bleeds into the chatbots built on a contaminated base model, since they have very strong priors that they’re a chatbot in the first place.
That doesn’t mean there isn’t significant variance in how current models speak. Claude has a rather distinct personality and innate voice, and even within the various GPT models, o3 was highly terse and fond of dense jargon. With the availability of custom instructions, you can easily get them to talk to you in just about any style you prefer.
I was a big fan of Horizon Alpha, a stealth model available on OpenRouter, later revealed to be an early checkpoint/variant of GPT-5. Unfortunately, the release candidate isn’t quite as good at my usual vibes benchmark when it comes to creative writing, be it 5 or 5-Thinking.
(4.1 was really good at the same task, and surprisingly so for a model marketed for coding. I missed it when it was gone, and I’m glad to have it back.)
I was initially quite negative about GPT-5T, but I’ve warmed to it. It seems as smart as or smarter than o3, and is a well-rounded and capable model. The claims of a drastically reduced hallucination rate seem borne out in extensive use.
I don’t think 4o is that harmful in objective terms, but Altman made a big fuss about reducing sycophancy in GPT-5 and then immediately caved and restored 4o. It’s a bad look as far as I’m concerned.
More concerningly, if people can get this attached to an LLM as mediocre as 4o, we’re in for a great time when actually intelligent and manipulative ASI gets here.
You’re correct. When GPT’s native image gen came out, it absolutely refused to touch kids. I didn’t realize that they’d changed it (not least because I never had reason to try) till I attempted this ill-fated experiment.
An interesting idea, but I genuinely don’t see how that’s supported by the movies? Ignore this if you were just joking!
[Linkpost] Avatar’s Dirty Secret: Nature Is Just Fancy Infrastructure
I’m sorry. I can only hope you’ve found peace, and I’d be the last to judge if you were to indulge in AI generated dreams of a better world.
Thank you.
The issue, as I see it, isn’t just the ability to do so. Theoretically, I could have taken the picture of the two of us, gone to a human artist and gotten a portrait with kids commissioned. I would have been exceedingly unlikely to go down that route, or even explore alternatives like using old fashioned Photoshop for that purpose. The option of just uploading a single image and a short prompt, and it taking a mere few seconds to conjure exactly what I’d envisioned? That temptation was too alluring to resist.
Ease of access makes all the difference, a quantitative change can become qualitative. Honey was a delicacy to be savored, but now, we can get something as sweet or sweeter in minutes, delivered to the comfort of our homes.
I try not to make sweeping declarations here: I’m a technophile, and I think the ability to conjure arbitrary, high-quality images is amazing. But a small subset of that capability can cause immense pain, or at least it did to me. The rest of the time, I’m still marveling at my ability to illustrate ideas I’d never have hoped to see with my own two eyes. And even as painful as this was, a small part of me appreciates having seen how things might have gone differently.
In my defense, I did put up CW tags. I don’t use those very often, but I felt they were appropriate here.
Other than the issues raised below, I’d like to point out that help doesn’t need to be full-time to make a massive difference. Just having a cleaner in once a week, or someone to cook every evening, helps!