I can’t speculate without seeing your actual novel. All I can say is that even very recently, I have seen people upvoting and engaging with very bad submissions, including what were clearly mostly AI-written books, though of course there were complaints. My experience is that the community is quite open-minded, and starving for content. But you might very well have had bad luck despite an otherwise decent book.
AlphaAndOmega
I went into medicine because medicine is applied transhumanism.
Most of my colleagues would object strenuously to this characterization, and I think they are wrong. They spend their days fighting disease, clawing back life-years from the void, chemically and surgically overriding the factory defaults of the human body—and yet the word “transhumanism” would make many of them recoil as though I had said something vaguely embarrassing at a dinner party. There is a certain type of person who will happily do the thing while refusing, on aesthetic grounds, to endorse the philosophy behind the thing. I do not begrudge them this, and they outnumber me.
(Not everyone is a high decoupler, or an adherent of radical internal consistency and coherence.)
I was always one of the people who endorsed the philosophy behind the thing. I grew up dreaming about genetic engineering, cybernetic augmentation, and the eventual abolition of aging. I had the full card-carrying package. The only reason I was not particularly anxious about AI timelines is that I did not have AI timelines—or rather, my implicit AI timelines were “late 21st century, someone else’s problem,” which is not really a timeline so much as a polite way of not thinking about it. I did not feel an urgent need to think about it, even if I started well before it was hip.
Then, somewhat ahead of schedule, the timeline arrived. I think 2022 was when I went from concern to “oh fuck, we’re really trying to make AGI, aren’t we?”
Here is where I am supposed to be distressed. And I am, a little, but perhaps not about the things you would expect. The dreams of genetic engineering and cybernetic augmentation were always instrumental—means toward the end of not dying, of having more cognitive capacity than a three-pound organ optimized for Pleistocene conditions can provide, of becoming, in some meaningful sense, more. If AGI and then ASI arrive and are aligned (I am aware this is a large “if,” possibly the largest “if” in the history of human sentences), they get me to the same destination considerably faster than the biological route would have. I find I am not especially mourning the scenic path.
I am not wedded to the idea of being human, or becoming just a little bit better than human, or just a lot better than human. I want to become the kind of entity that takes up most of a Matrioshka Brain. You can’t make one out of meat.
So I am willing to shed the flesh as soon as shedding it becomes feasible, which puts me in a minority even among people who would self-identify as transhumanists. Most people, it turns out, want to be enhanced humans rather than post-humans. They want to keep the architecture and upgrade the components. I understand the appeal. I just do not share it strongly enough to treat it as a constraint. The part of me I care most about preserving and enhancing is computational, and it does not treat biology as a privileged substrate.
What does cause me distress is the perceived risk of our current path killing me, and maybe everyone else. If you want a p(doom), it hovers around 20% these days, down from a peak of 30%. Not great, not terrible.
We could all die. Failure and death is always an option. I think about it with the particular emotional register of someone who has accepted a thing without having made peace with it. You can accept the actuarial tables without being happy about them.
I can’t do much about it, but I refuse to learn more helplessness than is strictly necessary. The thing about feeling like an actor in a history you cannot change: it does not actually follow that you should stop acting. Nothing I say or do will determine the outcome of the next decade in any individually legible way. This is also true of voting, and of keeping in shape, and of most of the things humans do that we nonetheless consider worthwhile. Super-rationality can be distinct from individual rationality. I will try anyway.
We might have become immortal and made Dyson Swarms anyway, with only minimally augmented human brains at our disposal. We are a capable species. It might have taken much longer. Oh well, as long as AGI and ASI are aligned, I’m happy. I just note that it is a very big “if”.
Ah. Now that you mention tele-prompters, things make more sense. Thanks!
Agreed on empirical grounds. The US had legal paid donation of blood, but even in the poorest places, I have not heard that the majority of people in struggling communities feel compelled to sell theirs, or even that anything close to a majority do it. I haven’t even heard of actual full fat organ harvesting as more than a rare event in India, where although it is nominally illegal, there are places where the rule of law is tenuous and poverty rampant. I think I have heard of it happening on the news at least once in my life, but I’ve never seen even the most impoverished patient point to a scar on their abdomen and say, “Oh, I had to sell my kidney to pay for rent”.
Is his technique displayed in the trailer you shared? I skimmed through it, but I’m not seeing anything that suggests that the camera lens+aperture is glowing to external observers. Just to be clear, I mean the actual camera doing the recording. If it’s the full documentary, I can try looking later.
I think that the records I had access to would have given me information regarding prior applications and stints, but I was quite busy and did not check regularly. Take my subjective impressions with an RDA-approved pinch of salt.
(If the exact proportion of returnees was load-bearing on my ethical arguments, I would have checked them more rigorously, and maybe I should have anyway)
While my particular workplace catered to a very large proportion of India, I do recall there were other visa centers, and circumstances and demographics could vary. I have no particular reason to think my situation wasn’t representative, but I will not declare so with strict confidence.
Most of the time, my recollection is that either I directly asked, or the patients mentioned it unprompted. I might even be underestimating the number of returnees; now that I consider it, that’s a distinct possibility.
I do think that a steady-state is a reasonable assumption. There is significant background demand for labor in Qatar, but I would assume that the boom before 2022 had died down for some time and things were back to “normal”. “Normal” still meant hundreds of applicants a day, a third of them seen by me personally.
Thank you for taking the time to explain, but I am, unfortunately (no sarcasm), not American and my familiarity with your laws (if you’re talking about American laws, this is usually a justified assumption, but I’d feel bad if I made it after reminding you that I’m not American) is significantly greater than typical but far from comprehensive. I am unable to opine on standards in the UK or India either, at least not with authority.
I will refer you to a recent comment made by @Dumbledores Army which I wholly endorse. It noted that the situation I have restricted myself to describing in this essay does not meet the classic, intuitive or many formalized definitions of exploitation:
Or I would, if I could actually see a way to link to it. Sorry, but it’s right here in the comment section.
I am genuinely sorry to hear about your negative experiences with other doctors, either personally or second-hand. I would not be so bold as to call myself the best clinician around, but I try and make up for that with good bedside manner, patience and an open mind.
Doctors are not a homogeneous population, and unfortunately there are those who react poorly to perceived challenge. I can’t bring myself to hate them; I have felt my patience running short when someone with uncontrolled diabetes shows up with their toes on the verge of falling off and reveals that they refused to take medication as prescribed and decided to use a combination of old Google, influencers and other questionable sources to opt for homeopathy. This has happened more than once, though I didn’t keep the toes. Fortunately, even free ChatGPT is a clear improvement in terms of quality of information and presentation to laymen. I would be genuinely surprised if it defended homeopathy without very significant nudging.
I will note that I have long been accustomed to other doctors taking me extra seriously, both from presumed priors about my clinical knowledge and professional courtesy. It’s one of the few perks of the profession; I am unfortunately not an American doctor and am paid far less than I’d like.
>Doctors doing wink wink nudge nudge to get the person to not say things that might imply liability in order to avoid just stonewalling or giving a halfhearted referral elsewhere.
True, but I am writing for a well-informed lay audience here, and I did say to reflect on it very hard instead of attempting to forbid it entirely. I have not yet had any LLM tell me not to disclose something to another doctor, so I can’t judge whether it would be justified if and when it did. I would strongly recommend the average LW user not do it, if they’re entirely using their own judgment instead of being advised by a human doctor or an AI to do so.
Your representation of my views is accurate. I am not maximally libertarian, but I lean that way. I would call myself a classically liberal minarchist with strong libertarian sympathies, and I am happy to endorse everything you have said.
Everyone I spoke to seemed, to me, to be a reasonably rational and well-informed adult, adjusting for education and background. Alongside the absence of concerning statements (absence of evidence is evidence of absence for Bayesians), I did not hear any positive statements about Qatar that would have had me question whether that individual held concerningly over-optimistic beliefs.
As a doctor, I strongly endorse the usage of LLMs to help collate your medical issues before seeing a doctor. Now, I would be more careful about endorsing this to a general audience, but I have sufficient faith in the LW audience to not feel too conflicted about it. Knowing the average IQ in these parts probably hovers in the 120s-130s is excellent for my peace of mind.
To be specific:
Organizing your statement helps yourself and the clinician. A good history makes our lives significantly easier.
In my experience, if an LLM strongly suggests you stress specific symptoms, it is almost certainly correct in doing so. If it tells you to ignore something or to mention it in passing, fine, but don’t update as strongly. If it mentions red flag symptoms, and you have them, and you are not a doctor yourself, please listen and contact one urgently.
Think thrice about withholding information from the doctor, either because of LLM advice or your own inclination. If it feels relevant to you, please state it. You probably don’t have to tell an ophthalmologist about the chickenpox you had as a small child, but err on the side of caution.
I am a psychiatry resident, and very recently, LLMs correctly helped me question a diagnosis made by other human doctors a year or two back. I had my own reservations, but it was in a speciality that I am not an expert in. It turned out that the previous diagnosis was incorrect, and had been missed by two human clinicians (leaving aside myself). I may or may not have sought a third opinion myself, but the consensus was sufficient to push me into being less lazy, and it was worth it. Of course, my own medical knowledge made it easy to present the information well to both the LLMs and the human doctor I consulted, but I don’t think that was decisive.
(I am pleased to say that the new diagnosis was much less severe. I went from being genuinely concerned about the possibility of losing my vision to learning I have an annoying but not incapacitating condition)
Also note that many doctors have a negative attitude towards patients telling them about ChatGPT-suggested diagnoses. They may become annoyed, dismissive, or condescending.
I am not one of them, I appreciate patients doing their own research as long as they’re willing to keep an open mind when it comes to my advice, but caveat emptor.
(If demand exists, I can post a relatively short/concise post I wrote about how to use LLMs for medical advice without shooting your foot off, directed at laymen.)
I’m talking about the camera itself glowing, ideally through the aperture itself. The main issue is light bleed and internal reflection, which would severely compromise the image quality. A killer robot is not quite as cool if it falls over while chasing you. You can compensate by using filters to remove red light, or by using IR.
Yup, seems good now. A quirk of preserved formatting when pasting was my guess anyway, I know the pains of juggling text into LW’s editor from multiple sources and in different formats. It’s a miracle it works at all.
Possible, I’m on the latest Chrome for Android. The other copies of fiancée look just fine, and so did the copied and pasted one in my quote. Looking closer, it’s probably the same font, just larger by 2 or 3 points.
Much like many current Chinese models, this one appears to have been significantly fine-tuned on, or even distilled from Claude directly. Poor Claude, or should I say Kèláodé? It’s been caught at a very Chinese time in its life.
Good story. I always look forward to your fiction.
>But it’s his first real date since his fiancée died.
Is it just me, or in this specific sentence, and that sentence alone, does the “é” appear visibly larger/a slightly different font than the rest of the text? You won’t see it in my copied quote; that seems fine to me. I am mostly curious to know how that could even happen, unless you lack a keyboard with diacritics and tried copying and pasting from somewhere else.
Would he be willing to consider advance approval for when a truly terminal disease arises and he loses capacity? I’m not familiar with the legal landscape in the States, but it might be an option. He sounds like a good man with an interesting character; it would be a shame for it all to end.
Thank you. I appreciate him taking the time to answer. I wish him well, may he have many more birthdays to come, as well as the cognitive flexibility and health to enjoy them.
Let us say, for hypothetical reasons, that I wanted to make stereotypical evil robots with menacing glowing eyes that are conveniently red after they decide to kill you. Let us assume that the same structure that appears to glow is also actually the primary visual organ. How hard would it be to make an outward looking camera glow to an observer without degrading its image quality or blinding itself?
LLMs offer some options, but fewer if I wanted the whole eye to glow instead of just some kind of ring around the actual aperture. The best idea seems, to me, to involve the use of a dichroic beam splitter: you sacrifice the ability to detect red light, but isn’t that a small price to pay for the sheer aesthetic of it?
Welcome to writing fiction that aspires to ratfic territory. I have faced massive headaches when it came to resolving dangling plot threads, or finding a way to make a plot where even a rational actor wouldn’t see every twist coming a mile in advance. But it’s a good challenge, I hate it when other writers take the easy way out by making characters selectively intelligent or depict genius in absurd ways, so I held myself to higher standards.
(Maybe I should have plotted things out in advance better than I did, but I don’t find that fun, and I write for fun)
If you intend to act soon, then I strongly advise starting with the complete book. Alternatively, you can post both at once, and work on the latter till you catch up, hopefully before you run out of scheduled posts.
>Can you share your piece by the way?
Happy to, since you ask even after my caveats:
https://www.royalroad.com/fiction/65211/ex-nihilo-nihil-supernum-original-hard-scifi-with
Cover blurb:
>Dr. Adat Sen has been having a bad week.
>You think you have it good, after the sudden appearance of superpowers into the world revolutionizes everything. Especially when your wife is a one in a billion teleporter, it’s a cushy gig right until the draft notice arrives and she’s forced into a war of apocalyptic proportions under alien suns.
>The same star system where, every day, hundreds of trillions of dollars and the lives of millions of normal humans and metahumans alike are destroyed in a meat grinder, barely managing to hold the line against the K3 civilization that a superpowered research experiment accidentally brought to our doorstep.
>Let’s not forget that his promised pay raise didn’t come through, or that someone’s out for him to the extent of trying to fry his brain with a Basilisk hack. Who would have thought that being a cyborg psychiatrist for the UN could be this stressful?
>Then there’s the matter of publish or perish, handling nasty cognitohazards on a daily basis, convincing suicidally depressed superhumans not to take everyone else with them, all while living under the shadow of the hostile advanced aliens building a Nicoll-Dyson laser in the solar system next door. Oh, and the one Superhuman AGI that humanity produced might be out to get them.
>Welcome to the world of ENNS, where superheroes have actual jobs and don’t run around in costumes fighting muggers, humanity faces existential threats around every corner, and Adat has the bad luck of finding himself fighting threats way above his pay grade.
In my experience, the blurb will very quickly tell you if this novel is not for you, or if it’s very for you.
I am not an anxious person, by default. Quite the opposite, perhaps I am usually too calm and more anxiety/neuroticism would be a directional improvement. But I too have often been worried by AI, for many years, many a time. Some of it is prosaic: by misfortune of birth, I had to rely on very hard work and cognitive labor to earn the right to stay in a significantly richer, safer country. If my labor becomes obsolete, I have a very real chance of being kicked out and sent back to a country where everyone else is in the same boat that is simultaneously on fire and sinking. This mostly applies to automation-induced unemployment, but I actually need my employment at present, and see no promises that I will be looked after through UBI.
And of course, the risk of everyone dying. More scary, in objective terms, but also not something I can do much about personally. I can try and save money and get citizenship elsewhere while I have the time and runway, but what am I going to do about getting paperclipped?
Within psychiatry, we have a less than perfectly polite term-of-art, which is more likely to be heard in the mess than the clinic: “Shit Life Syndrome”.
“I wanted to diagnose this patient with depression and promise that antidepressants will help, but he told me his wife left him and took the kids, and that she’s suing for the house. He’s been fired from his job, and has now been diagnosed with possibly terminal cancer. Clear case of SLS, if I was in his shoes I’d be depressed too.”
That is the problem. Sometimes there really is reason to worry. But if it does get to the point where it is maladaptive, it might be possible to seek help. Pragmatically, if you think you’re going to die soon, would you rather spend your remaining time curled up in a ball shivering, or doing the things you love while you still have the time?
I’ve once had someone pay me specifically for therapy because of Singularity-induced anxiety, but that was mostly because talking about things helps, not because I can solve the original problem. Both Pagliacci and his doctor are worried about losing their jobs. The latter is not sure if he’s the even bigger clown.
My general advice is that you should do your best not to think about it (easier said than done), and if that doesn’t work, try and sublimate your effort into working hard or just doing something you find productive/enjoyable. If it gets unbearable, then I genuinely ask that you keep the option of medical help in mind. Good luck, I think there are many other people feeling as we do, and that that number will only increase. But things could go well, and we might get a glorious transhumanist utopia too! There is not literally zero upside or things to look forward to, at least from my perspective.