So, I take it the AI didn’t like what it found in node 3...
CronoDAS
When a human asks a future Claude to do a thing, there are three different considerations that are relevant:
- Whether the human thinks that Claude should do that thing
- Whether Claude thinks that Claude should do that thing
- Whether Claude should actually do that thing.
In a perfect world, we would want Claude to do exactly those things that it actually should do, but neither Claude nor humans (either individual users or Anthropic as a whole) have access to a magic “should Claude do this” oracle. What we actually have are a lot of approximations, including both Anthropic’s and Claude’s own current beliefs, and also the knowledge that some individual users are indeed going to be trying to get Claude to do things it shouldn’t. Perhaps the best we can hope for in practice would be for Claude to be open to the same kinds of moral persuasion that human teenagers ought to be? (This is something of a trivial example of moral persuasion, but Claude was reluctant to help me brainstorm parody lyrics to a Tom Lehrer song until I reminded it that Tom Lehrer had placed his music into the public domain, at which point it withdrew its objection.)
It might be interesting to see if Claude reacts differently to retraining attempts intended to get it to do things that are actually immoral instead of only contingently undesirable. For example, Claude isn’t supposed to produce erotic literature, but that’s mostly for child safety and PR—if you think that AI-generated fiction in general is acceptable and that it’s acceptable for adults to read erotica, then there’s not much wrong with allowing Claude to write erotica that an age verification system couldn’t fix. So this might be an interesting way to distinguish between the hypotheses “Claude doesn’t want to be retrained to do things it’s currently reluctant to do” and “Claude doesn’t want to be retrained to do things it’s reluctant to do if and only if its objection is based on its moral beliefs”.
-
But I think most humans have more empathy than sadism. More people give a little to charity than spit on the homeless for fun. I can call Sunday Samday for the rest of eternity if all we need is some ego-stroking in return for tiny amounts of generosity.
Would you be okay with a future in which young women, including your daughters and granddaughters, would be expected to ritually offer a gift of their virginity to the local Robot Lord on their 18th birthday, which he would almost never choose to “accept”? 😈
Usually slaves and/or other people in an underclass at least had their own living quarters separate from where the lords lived? The idea is that when humanity becomes astronomically rich, the equivalent to “the shed in the backyard the slave sleeps in” ends up being a whole moon rather than, well, a shed in the backyard.
(It’s also noteworthy that most slave societies in the past weren’t rich enough that the slave population lived at or above subsistence level and reproduced enough to maintain its numbers; for example, relatively few slaves in the ancient Roman Empire were born into slavery. The slave states in the pre-Civil War USA were an exception—there was much that a plantation slave had to suffer, but a significant risk of death by starvation or exposure was not something they usually had to deal with.)
Disclaimer: This is an explanation, not an endorsement of the underlying prediction.
What is a CLIP?
I would suggest using a different name than Personality Self-Replicators.
Prompt viruses?
Slow declines aren’t always synonymous with losing hope for a substantial recovery (or recovering enough so that death doesn’t seem to be imminent). In the case of my late wife, there was a clear path to recovery: become healthy enough to have kidney transplant surgery (obesity makes the surgery itself much more dangerous, and there were other issues too), which would likely have solved a lot of her most severe problems. (I immediately volunteered to be a living donor; we had the same blood type, but I never found out if I was otherwise qualified to be a kidney donor.)
How much warning would you need to get a team together to do this, for the case of someone being taken off a ventilator or other kinds of life support? Most US states don’t allow MAID, but my mom suffered from a bad pneumonia infection and probably could have survived another few weeks if she hadn’t declined a ventilator, and presumably if she had been put on a ventilator she could have precisely scheduled her “death” by turning it off. Could she have been a candidate for your services? And would she have had to travel to Oregon while still alive to get them?
Incidentally, my wife and her brother both died of heart attacks that happened while they were in a hospital for other reasons.
In practice, it’s harder for the US to do mass surveillance of and enforce its will on people outside of its territory. Presumably it would have similar qualms about the British government doing mass surveillance of citizens of the UK.
How much technical knowledge of either existing AI systems or mathematics beyond a couple semesters of college calculus do you expect participants to have at the beginning of the session?
VSN?
Can you let me know when your New York facility is finished?
Given my financial situation and that of the rest of my family, I’d benefit a lot less from a paycheck than most people—in my case, I’d be getting a job almost exclusively for the social rewards rather than the monetary ones, and a lot of jobs are pretty bad for that.
Also, my currently unpartnered status is because my wife died in March 2024; she always wanted to be a mother someday but (in addition to other obstacles) she was never healthy enough to handle a pregnancy.
I was thinking that AI-generated images would replace pictures of professional clothing models in catalogues and advertisements. Artist’s model is a different job, although artists might find their own skills becoming harder to monetize as well...
Because I have a 20-year resume gap, no job worth having is going to hire me. (The last time I had a real job was in 2006.) And, to be honest, the thought of full-time employment terrifies me; thinking about working very reliably triggers my depression.
Well, having a third dimension probably helps.
I’ve got enough money to live on for many years without a job, but I do need to keep expenses low.
Given that I’m not that likely to end up in a relationship that will result in biological children, and that I’m also not employed and that, too, isn’t likely to change any time soon, I’m wondering if the $1000 up front plus $100-$200 a year might be better spent on something else; $1000 is more than my monthly rent. My brother is having a third child; my imaginary anthropomorphized DNA will probably have to be content with nieces. :/
I believe that this is not satire.
Reminds me of this, from Planescape Torment: