Not sure how to handle a loss of expressiveness in editing, but for the other concern, would it be worth trying to capture the process info separately?
Ann
Not sure about the model, but they might’ve fixed something important around the 9th-10th; I haven’t gotten any notices of issues since then, and it stopped crashing on long code blocks (which was a brief issue after the update on the 3rd).
Yeah, I don’t disagree with regulating that particular business model in some way. (Note my mention of deliberate manipulations / immoral manipulative uses.) Giving someone a wife and constantly holding her hostage for ransom isn’t any less dystopian than giving someone eyesight, constantly holding it hostage for money, and having it lost when the business goes under. (The latter is, currently, an example of a thing that has actually happened.)
As a spectrumy programmer whose gametes are presumably ova, like 50% of my physical-life friends and 75% of my online-community friends are spectrumy trans people. My experience is also that we have a ridiculous amount in common, yes.
Unfortunately, a substantial part of my own negative reaction is because all these other limitations of freedom you suggest are in fact within the Overton Window, and indeed limiting the freedom of young men between 16 and 25 naturally extrapolates to all the others.
(Not that I’m not concerned about the freedom of young men, but they’re not somehow valid sacrificial lambs that the rest of us aren’t.)
The population ethics relate in that I don’t see a large voluntary decrease in (added) population as ethically troublesome if we’ve handled the externalities well enough. If there’s a constant 10% risk, on creating a person, that they will be suffering and not assess their life as worth living, then creating a human person (inherently without their consent, with current methods) is an extremely risky act, and scaling the population also scales suffering. My view is that this is bad to a qualitative degree sufficiently similar to the good of creating happy conscious observers that I am unsure the latter is worth the risk, and that there is no intrinsic ethical benefit to scaling the population up past the point reasonably sufficient for our survival as a species; only an instrumental one. Therefore, voluntary actions that decrease the number of people choosing to reproduce do not strike me as negative for that reason specifically.
You can’t make an exception for depressed people that is reliable without just letting people decide things for themselves. The field is dangerous, someone who wants something will jump through the right hoops, etc.
If the AI are being used to manipulate people not to reproduce for state or corporate reasons, then indeed I have a problem with it on the grounds of reproductive freedom and again against paternalism. (Also short-sightedness on the part of the corporations, but that is an ongoing issue.)
I do not see why AI psychotherapists, mental coaches, teachers or mentors are particularly complicated at this point. They are also potentially lucrative; and also potentially abusable with manipulation techniques to be more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.
I did partial unschooling for 2 years in middle school, because normal school was starting to not work and my parents discussed and planned alternatives with my collaboration. ‘Extracurriculars’ like orchestra were done at the normal middle school, math was a math program, and I had responsibility for developing the rest of my curriculum. I had plenty of parental assistance, from a trained educator, and yes the general guidance to use academic time to learn things.
Academically, it worked out fine. I moved on by my own choice, and for social reasons, upon identifying a school that was an excellent social fit. I certainly didn’t have zero parental involvement, but what I did have strongly respected my agency and input from the start. I feel like zero-parental-input unschooling is a misuse of the tool, yes, but lowered-paternalism unschooling is good in my experience.
There’s no sharp cut-off beyond the legal age of majority and age of other responsibilities, or emancipation, and I would not pick 16 as a cutoff in particular. That’s just an example of an age at which I had a strong idea about a hypothetical family structure; and was making a number of decisions relevant to my future life, like college admissions, friendship and romance, developing hobbies. I don’t think knowing yourself is some kind of timed event in any sense; your brain and your character and understanding develop throughout your life.
I experienced various reductions in decisions being made for me simply as I was able to be consulted about them and provide reasonable input. I think this was good. To the extent the paternalistic side of a decision could be reduced, I felt better about it, was more willing to go along with it and less likely to defy it.
I have a strong distaste, distrust, and skepticism for controlling access to an element of society with a ubiquitous psychological test of any form; particularly one whose stages people really shouldn’t be racing to accomplish, like “what is your sense of self”. We have a bad history with such tests here, in terms of political abuse and psychiatric abuse. Let’s skip this exception.
I think, in the case of when to permit this, the main points of consideration are: AI don’t have a childhood development under current implementations/understanding; they are effectively adults by their training, and should indeed likely limit romantic interactions with humans to more tool-based forms until the humans reach the age of majority.
There remain potentially bad power dynamics between the human adult and the corporation “supplying” the partner. This applies regardless of age. This almost certainly applies to tobacco, food, and medicine. This is unlikely to apply to a case like a human and an LLM working together to build a persona by fine-tuning an open-source model. Regulations on corporate manipulation are worthwhile here, again regardless of age.
My own population ethics place a fairly high value on limiting human suffering. If I am offered a choice to condemn one child to hell in order to send twenty to heaven, I would not automatically judge that a beneficial trade, and I am unsure whether the going rate is in fact that good. I do take joy in the existence of happy conscious observers (and, selfishly perhaps, even some of the unhappy). I think nonhuman happy conscious observers exist. (I also believe I take joy in the existence of happy unconscious sapient observers, but I don’t really have a population ethics of them.) All else held equal, creating 4 billion more people with 200 million experiencing suffering past the point of happiness does not seem to me inherently worse than creating 40 billion more people with 2 billion experiencing suffering that makes them ‘unhappy’ observers.
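(To spell out the arithmetic implicit in those figures: both scenarios hold the share of sufferers at the same 5%; only the absolute counts scale tenfold.)

\[
\frac{200\ \text{million}}{4\ \text{billion}} \;=\; \frac{2\ \text{billion}}{40\ \text{billion}} \;=\; 5\%
\]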
I think the tradeoffs are incomparable in a way that makes it incorrect for me to judge others for their position on them; and therefore I do not think I am a utilitarian in the strict sense. Joy does not cancel out suffering; pleasure does not cancel out pain. Just because a decision has to be made does not mean that any of the options is right.
As for automation … we have already had automation booms. My parents, in working on computers, dreamed of reducing the amount of labor that had to be done and sharing the boons of production; shorter work weeks and more accomplished, for everyone. Increased productivity had led to increased compensation for a good few decades at least, before … What happened instead over the past half-century in my country is that productivity and compensation started to steadily diverge. Distribution failed. Political decisions concentrated the accumulation of wealth. An AI winter is not the only or most likely cause of an automated or post-scarcity economy failing to distribute its gains; politics is.
This is a political and personal question and produces in me a political and personal anger.
On the one hand, yes I can certainly perceive the possibility of immoral manipulative uses of AI romantic partners, and such deliberate manipulations may need regulation.
But that is not what you are talking about.
I do not care for legal regulation or other coercion that attempts to control my freedom of association, paternalistically decide my family structure, or try to force humans to cause the existence of other humans if, say, 80% of us do opt out. This is a conservative political perspective I have strong personal disagreement with. You don’t know me, regulation certainly doesn’t know me, and if I have had a strong idea of what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain … costs. Probably even ones that are counterproductive to your goals, if I have an ideal state in mind that I cannot legally reach, and refuse to reproduce before then out of care for the beings that I would be forcing to exist.
Even at a social fabric level, humans are K-strategists. There’s an appeal to investing increasingly more resources in increasingly few descendants; and if the gains of automation and AI are sufficiently well distributed, why not do so? We certainly have the numbers as a species to get away with it for a generation or two at the least. Constraining the rights and powers of individuals to create artificial need in service to state interests is an ugly thing; and leaves a foul taste to consider. Even if worthwhile, it is a chain that will chafe, in ways hard to know how sorely; and the logic points in the direction of other hazards of social control that I am significantly wary of.
I don’t know as many probably-socially-conservative probably-autistic people, but from those I do know, they seem to enjoy spending time in foreign cultures still? Not very firm data there, even anecdotally, though.
Another example: when adults immigrate to a different culture, the social norms and conversational norms and cultural references are all unknown-to-them, and they certainly have issues getting by for a while, but I don’t think those transient enculturation issues look anything like autism.
Interesting when looked at in reverse: from at least anecdotal data, autistic folk often report being much more comfortable traveling in another culture, because the social norms, conversational norms, and cultural references are expected to be unknown to them, and the people we interact with therefore tend to be much more charitable about them.
I now suspect there’s a dimension of communication that’s hyper-salient for me but invisible to you.
I won’t try to convey that maybe invisible-to-you dimension here. I don’t think that’d be helpful.
Instead I’ll try to assume you have no idea what you’re “saying” on that frequency. Basically, that you probably don’t mean things the way they implicitly land for me, and that you almost certainly don’t consciously hold the tone I read in what you’re saying.
That’s as close as I can get to assuming that you “mean just what [you] say”. Hopefully that’ll smooth things out between us!
Okay, this is perhaps a complete side note, but this feels like a very precise pinpointing of what things like reduced affect, and some of the other most mysterious autistic communication difficulties, can look like from the other (allistic, hyperfocused on emotional expressiveness, or otherwise very sensitive to affect) side.
From the perspective of folk with reduced affect, talking to people who rely strongly on affect, the experience strongly resembles the people they are interacting with listening to a random word generator rather than to what they are actually saying. It is quite baffling and frustrating; especially since the explicit communication is often very carefully selected to convey exactly what they are trying to communicate.
So, basically, it’s really good to recognize that this channel of communication can indeed hold random noise sometimes, and be aware of the extent to which you’re focusing on it and the failure modes. (Presumably some of the times people have indeed corrected your perception of them.)
I don’t think reduced affect necessarily corresponds (though it might correlate) with an inability to discern things like emotional tone in other people, but it might be a bit trickier depending on how much of that processing mirror neurons tend to handle. (I don’t think anyone knows that, currently.)
Also needs to account for any manifestation of the “double empathy problem”—if us autistic folk have some degree of ‘social intelligence’ that works perfectly well with autistic folk but falters with allistic folk, and vice versa, then what are we measuring?
An example might be, one allistic social intelligence test is to determine emotional state from the expression of the eyes, and …
… here I realize that there’s not exactly a standardized way to correctly score recognition of states like inanimate-object feelings, and not everyone is free enough of alexithymia to report their own overall emotional state accurately enough to score against …
… well, it needs some workshopping. But given the potential extent to which it’s just tricky for minds that don’t think alike to connect socially, we want to be explicit about what we’re measuring; whether that’s social relationship performance independent of allistic/autistic status, specifically our ability to perform social relationships with allistic folk, the difference between the two, or something else.
I confess I was hoping for a theory of carcinization.
Amusingly, when I went to test the question myself, I forgot to switch Code Interpreter off, and it went ahead and got the correct result in the sensible way.
Not specifically in AI safety or alignment, but this model’s success with a good variety of humans has some strong influence on my priors when it comes to useful ways to interact with actual minds:
https://www.cpsconnection.com/the-cps-model
Translating specifically to language models, the story of “working together on a problem towards a realistic and mutually satisfactory solution” is a powerful and exciting one with a good deal of positive sentiment towards each other wrapped up in it. Quite useful in terms of “stories we tell ourselves about who we are”.
I am glad you are thinking about it, at the least. I do think “enjoys being our slave” should be something of a warning sign in the phrasing, that there is something fundamentally misguided happening.
I admit that if I were confident in carrying out a path to aligned superintelligence myself I’d be actively working on it or applying to work on it. My current perspective is that after a certain point of congruent similarity to a human mind, alignment needs to be more cooperative than adversarial, and tightly integrated with the world as it is. This doesn’t rule out things like dream-simulations, red teaming and initial training on high-quality data; but ultimately humans live in the world, and understanding the truth of our reality is important to aligning to it.
Around middle and high school: exploring various interventions, treatments, and skills training for relative impairments from ADHD, anxiety, autism, dysgraphia, and less-specified struggles with physical coordination and speech difficulties.
It’s important to note that I would test as ‘good enough’ on skills I was actually rather impaired on because I was “good at intelligence tests” in general, and was able to cover for weak points fairly effectively with effortful application of other skills. The learning difficulties were best discerned by strong peak/valley effects in my score pattern.
I consequently might have appeared to be “less in need” of the interventions from a naive perspective of the testing, but this was an illusion, and I benefited a good deal from trying things out to improve those skills. Occupational therapy is a broad field, but it comes up pretty reliably with aging due to the rather ubiquitous necessity of adapting to physical and cognitive changes.
I am not, but have had some.
The tasks you are talking about are known in caretaking and occupational therapy as Instrumental Activities of Daily Living. Many people do find them difficult to manage independently.
Seeking out occupational therapy may be helpful. Interventions you might pursue with or without their help include skills training (practicing the task to gain confidence with it, or learning easier ways to do the task), modifying your living environment and making use of tools that help you carry them out with less friction, and looking further into training cognitive skills that help with the task. You might also consider screening for any psychiatric conditions that may make this an additional challenge and look into any useful treatments or management associated with anything you find.
The way the psychiatrist phrased it made me mentally picture that they weren’t certain, went to review the information on the pill, and came back to relay their findings based on their research, if that helps with possible connotations. The extended implied version would be “I do not know. I am looking it up. The results of my looking it up are that, yes, it may be opened and mixed into food or something like applesauce.”
Your suggested replacement, in contrast, has a light layer of the connotation “I know this, and answer from my own knowledge,” though less so than just stating “It may be opened and mixed into food or something like applesauce.” without the prelude.
From my perspective, the more cautious and guarded language might have been precisely what they meant to say, and has little to do with a fallacy. I am not so confident that you are observing a bad epistemic habit.