I think chatbot addiction is a different issue. I think what people are usually pointing out when they talk about AI psychosis are phenomena where (like the post says) people become delusional, or their delusions seem to be shaped by exposure to the chatbot. My sense is that there’s also a related phenomenon where people might become less mentally stable and more likely to take violent actions that they wouldn’t have otherwise considered, even if they had the false beliefs that they now have, but idk if that’s right
Is it related to them being really obsessive about masks relative to other groups? Are they people who are unusually obsessive about health and the negative externalities that people can have on one another?
Based on which stimuli provoke anxiety, sounds autism-adjacent? E.g. for me, eating varied foods doesn’t provoke anxiety (which I think is pretty normal)
Re (1) and (2) I thought humans had created misaligned AI that took over and scanned everyone’s brains then sold us to aliens that care about humans in sub-par ways. Hence it being far from the singularity but feeling close from the humans’ perspectives, and the origami men going somewhere that seems more organic than machiney
Analyzing the possibility of a country executing a strategic strike on a piece of infrastructure seems extremely different from celebrating the destruction of that piece of infrastructure.
But if my nonfungible traits are her cup of tea, fungible traits don’t seem to do much of anything!
In other words, fungible traits are the fallback when a girl doesn’t like you very much. They’re literally only worth considering if you assume she doesn’t like you as a premise.
This seems very contrary to my experience and that of other women I know (and makes little sense in the abstract). Your “fungible” and “non-fungible” traits literally funge against one another in people’s assessments; why wouldn’t they?
E.g. I’m a married woman. My husband is my favorite person; I love his “nonfungible traits”: his creativity, his humor, his abiding commitment to making the world better, his refusal to give in to motivated cognition, his unerring integrity.
If he were one standard deviation less attractive, I’d probably never feel physically attracted to him, having sex with him would disgust me, and he’d be one of a bunch of nerds I’d feel vaguely guilty about never considering dating, because they’re obviously great people.
Of course most women care about money and comfort and attractiveness (which affect your life in many ways other than social status!) while they also care about good character and humor and EQ… doesn’t almost everyone? When you assess a job, don’t comp and location trade off somewhat against the company culture and how much you expect to like the work?
I personally often visibly react to someone’s appearance changing without meaning to (e.g. “wow, you got a haircut!”), then feel pressured to compliment it to make the situation less awkward. I expect a lot of other people do this. And basically no one will come up to you and, without prompting, critique your appearance. So you will get very positive-slanted feedback when it’s unsolicited (and probably when it’s solicited too).
Are you counting species we might have driven to extinction a long time ago (e.g. in prehistory, when humans first got to continents other than Africa), or just in the last 200 years or something?
I also don’t think “you should always be more empathetic” or “more empathy is always good”; I’m just trying to explain what I think is a useful, joint-carving definition of empathy and how to do it.
(Similar to what other people have said; mostly trying to clarify my own thinking, not persuade John.) I think a more useful kind of empathy is one meta level up. People have different strengths, weaknesses, backgrounds, etc. (obviously); their struggles usually aren’t exactly your struggles, so if you just imagine exactly yourself in their position, it generally won’t resonate.
So I find it more helpful to try to empathize with an abstraction of their problem: if I don’t empathize with someone who e.g. has ADHD and is often late and makes lots of mistakes on detail-oriented tasks, can I empathize with someone who struggles with something that most people find a lot easier? Probably; there are certainly things I struggle with that others find easy, and that is frustrating and humiliating. Can I empathize with someone who feels like they just can’t get ahold of aspects of their life, no matter how hard they try? Who feels like they really “should” be able to do something, and in some sense it’s “easy”, but despite in some sense putting in a lot of work and creating elaborate systems, they just can’t seem to get those issues under control? Absolutely.
I’m not saying this always works, and in particular it frays when people are weak on things that are closest to my sacred values (e.g. for me, trying in a sincere way to make the world a better place; I feel most disgust and contempt when I feel like people are “not even really trying at all” in that domain). For John, that might be agency around self-improvement. Then I find it helpful to be even more meta, like “how would it feel for something I find beautiful and important to be wholly uninteresting and dry and out-of-reach-feeling? well there are certainly things others find motivating and important and beautiful that I find uninteresting and dry and out of reach… imagine if I were trying to pursue projects loaded on one of those things, it’d feel so boring and exhausting”.
I get the vibe that John thinks more things are “in people’s control” than I and a lot of other commenters do (probably related to highly valuing agency). Like, yeah, in theory maybe the people on your team could have foreseen the need and learned more ML earlier, but they probably have a lot of fundamental disadvantages relative to you there (like worse foresight, maybe less interest in ML, maybe less skill at teaching themselves these kinds of topics), just as in theory you could be better at flirting but have a lot of disadvantages relative to e.g. George Clooney, such that it’s unlikely you’ll ever reach his effectiveness at it.
I’m not saying everyone is equally skilled if you think about it, or that all skills are equally important, or that you shouldn’t try to use keen discernment about people’s skills and abilities, or some other nonsense. I’m saying I think empathy is more about trying to look for the first opportunity to find common ground and emotional understanding.
I think generalizing from fictional evidence gets the conversation off to a bad start, because John uses a misleading intuition pump for how empathy would play out in more realistic situations.
I agree within the context of the video, but the video feels like it straw-persons the type of person the woman in it represents (or rather, I’m sure there are people who err as clearly and foolishly as the woman in the video, but the instances I remember observing of people wanting advice, not problem-solving, all seemed vastly more reasonable and sympathetic), so I don’t think it’s a good example for John to have used. The nail problem is actually visible, clearly a problem, and clearly a plausibly-solvable problem.
How do you know the rates are similar? (And that it’s not e.g. like fentanyl, which in some ways resembles other opiates but is much more addictive and destructive on average.)
Did this case update you toward thinking “If you’re trying to pass a good bill, you need to state and emphasize the good reasons you want to pass that bill, and what actually matters”? If so, why? The lesson I think one would naively take from this story is an update in the direction of: “if you want to pass a good bill, you should try to throw in a bunch of stuff you don’t actually care about but that others do and build a giant coalition, or make disingenuous but politically expedient arguments for your good stuff, or try to make out people who oppose the bill to be woke people who hate Trump, etc.”
Relevant quotes:
The opposition that ultimately killed the bill seems to have had essentially nothing to do with the things I worry most about. It did not appear to be driven by worries about existential or catastrophic risk, and those worries were not expressed aloud almost at all (with the fun exception of Joe Rogan). That does not mean that such concerns weren’t operating in the background, I presume they did have a large impact in that way, but it wasn’t voiced.
[...]
I am happy the moratorium did not pass, but this was a terrible bit of discourse. It does not bode well for the future. No one on any side of this, based on everything I have heard, raised any actual issues of AI long term governance, or offered any plan on what to do. One side tried to nuke all regulations of any kind from orbit, and the other thought that nuke might have some unfortunate side effects on copyright. The whole thing got twisted up in knots to fit it into a budget bill.
How does this relate to the question of which arguments to make and emphasize about AI going forward? My guess is that a lot of this has to do with the fact that this fight was about voting down a terrible bill rather than trying to pass a good bill.
If you’re trying to pass a good bill, you need to state and emphasize the good reasons you want to pass that bill, and what actually matters, as Nate Soares explained recently at LessWrong. You can and should also offer reasons for those with other concerns to support the bill, and help address those concerns. As we saw here, a lot of politicians care largely about different narrow specific concerns.
[acknowledging that you might not reply] Sorry, I don’t think I understand your point about the MtG questions: are you saying you suspect I’m missing the amount (or importance-adjusted amount) of positive responses to Nate? If so, maybe you misunderstood me. I certainly wouldn’t claim it’s rare to have a very positive response to talking to him (I’ve certainly had very positive conversations with him too!); my point was that very negative reactions to talking to him are not rare (in my experience, including among impactful and skilled people doing important work on AIS, according to me), which felt contrary to my read of the vibes of your comment. But again, I agree very positive reactions are also not rare!
Or, to put it another way: most of the people that like Nate’s conversational style and benefit greatly from it and find it a breath of fresh air aren’t here in the let’s-complain-about-it conversation.
I mean, we’re having this conversation on LessWrong. It’s, to put it mildly, doing more than a bit of selection for people who like Nate’s conversational style. Also, complaining about people is stressful and often socially costly, and it would be pretty weird for random policymakers to make it clear to random LW users how their conversation with Nate Soares had gone. How those effects compare to the more-specific selection effect of this being a complaint thread spurred by people who might have axes to grind is quite unclear to me.
At the very least, I can confidently say that I know of no active critic-of-Nate’s-style who’s within an order of magnitude of having Nate’s positive impact on getting this problem taken seriously. Like, none of the people who are big mad about this are catching the ears of senators with their supposedly better styles.
I believe that’s true of you. I know of several historically-active critics-of-Eliezer’s-style who I think have been much more effective at getting this problem taken seriously in DC than Eliezer post-Sequences, but of no such critics of Nate’s or Eliezer’s with respect to this book in particular; then again, I also just don’t know much about how they’re responding other than the blurbs (which I agree are impressive! But also subject to selection effects!). I’m worried there’s a substantial backfire effect playing out, which is nontrivial to catch; that’s one of the reasons I’m interested in this thread.
I appreciate you writing this, and think it was helpful. I don’t have a strong take on Nate’s object-level decisions here, why TurnTrout said what he said, etc. But I wanted to flag that the following seems like a huge understatement:
The concerns about Nate’s conversational style, and the impacts of the way he comports himself, aren’t nonsense. Some people in fact manage to never bruise another person, conversationally, the way Nate has bruised more than one person.
But they’re objectively overblown, and they’re objectively overblown in exactly the way you’d predict if people were more interested in slurping up interpersonal drama than in a) caring about truth, or b) getting shit done.
For context, I’ve spoken to Nate for tens of hours. Overall, I’d describe our relationship as positive. And I’m part of the rationalist and AIS communities, and have been for more than 5 years; I spend tens of hours per week talking to people in those communities. There are many nice things I could say about Nate. But I would definitely consider him top-decile rude and, idk, bruising in conversation within those communities; to me, and I think to others, he stands out as notably likely to offend or be damagingly socially oblivious. My sense is that my opinion is fairly widely shared. Nate was a participant in one of the most hostile, closest-to-violence conversations about AI safety I have ever seen, though my impression was that the other party was significantly more in the wrong in that case.
I don’t know what the base rates of people being grumpy post interacting with Nate are, and agree it’s a critical question. I wouldn’t be surprised if the rate is far north of 15% for people that aren’t already in the rationalist community who talk to him about AIS for more than an hour or something. I would weakly guess he has a much more polarizing effect on policymakers than other people who regularly talk to policymakers about AIS, and am close to 50-50 on whether his performance is worse overall than the average of that group.
I feel bad posting this. It’s a bit personal, or something. But he’s writing a book, and talking to important people about it, so it matters.
(Idk why I’m replying to this 2 years later.) I forgave him for what I think are pretty normal reasons to forgive someone. A combination of (1) he’s been a good friend in many respects over the years and so has a bunch of “credit”, and I wanted to find a path to our relationship continuing; (2) nothing like that ever happened again, so I believe it was either really aberrant and unlucky, or he took it really seriously and changed; (3) like I said above, it wasn’t that harmful to me, and seemed less harmful than a lot of stuff a lot of other people do, so it seemed like it should be in the “forgivable actions” reference class.
If I’d been the only woman in the world I probably would have forgiven him more quickly but I felt some need to punish him extra on behalf of the women who would have suffered more from what he did to me than I did.
I mean, humans with strong AGIs under their control might function as if they don’t need sleep, might become immortal, will probably build up superhuman protections from assassination, etc.
I’m glad this helped you, and think it’s cool you wrote up this recommendation, and I wish people did more of that sort of thing.
I felt very disappointed by this show. It fell into a lot of anime tropes I find cringey and misleading, but worse, I felt like the characters acted very irrationally and carelessly, and in my opinion they aren’t good role models of rationality.
E.g., to pick a few early not-very-spoilery points: they don’t optimize their first deliberate de-stoning, and even though it’s known that when stone people break they die, they choose to carry a stone person they value highly, including running with him through the forest (which seems like it could easily have resulted in tripping and breaking him), instead of un-stoning him in situ. Senku contends that Taiju shouldn’t let himself die to save Senku because both their skillsets are needed, but Taiju’s skillset is being physically strong (vs. Senku being exceptionally smart and good at science), which is clearly a more common skillset (and one that’s easier to identify in petrified people).
Also, Senku contends that counting is “simply the rational thing to do”, but that doesn’t seem obvious at all; for most people, that seems pretty unlikely to be the right approach to maintaining sanity.
Not necessarily a counterpoint to your main point, but Lightcone’s headquarters is not in San Francisco. It’s in Berkeley, which is a small city of its own with a very different vibe from most of San Francisco (it’s greener, less dense, and more suburban, with fewer tall buildings, and it’s fairly walkable and cute in most parts).