This essay is introspective and vulnerable and writing it was gutsy as hell. I have nothing substantive to contribute with this comment beyond that.
Ooh, just noticed this comment.
Re: the first bit, all feedback was given in front of all the men and all the models. There weren't really secrets about what the models thought, and their brutality could be spectacular to witness. As could Lynn's. Responses to guys at the end ranged from "you seem kinda like a child to me" to "you're so anxious and I wish you could chill the fuck out" to "you seem really sexually inhibited and I think you should work on that" to "I really enjoyed working with you :)" to "you are the most insecure man I have ever met and need to learn to take any kind of criticism or you will never grow."
Re: the second bit… hmm. Yeah, I do sometimes wonder about this. Like, the problem with social anxiety is that your social calibration is broken, while the problem with non-normies is that they ALSO have broken social calibration, plus poor baseline behaviors on top of it.
I have no idea how non-normies can train social grace.
I’m interested in doing this; I quite enjoy gamedev and would love the opportunity to dovetail that interest with something actually important. Is there a writeup anywhere of what specifically the tabletop exercise entailed? It seems like the AI-2027 summary contains precious little mechanical information.
Yeah, I feel very seen in this moment. I spent a long time polishing this blog post, and while I think some of that time was well spent, some of it was just neurotically changing words around while worrying about how it would be received.
I guess it feels harder in writing than in speech because in writing you could, in principle, spend forever polishing a piece before publication.
Yeah I was having a really rough time trying to find a good synthesis. I think I arrived at my current setup because my default behaviors are pretty good if and only if I’m not freaking out about how I’m coming across; I think this is broadly typical but definitely not universal.
Agreed that there is an important difference between trying to force a specific micro-scale interaction to go well (no! bad!) vs trying to set up rules in such a way that interactions go well for you in general.
Ultimately the reason so much of this post is autobiographical is that while I suspect the mechanism of social anxiety that I posit is generally correct, the method by which I am resolving it is probably somewhat specific to me.
I have different challenges than other people, and so different kinds of explicit goals might work for me than for you, in terms of threading the needle between "resolves anxiety by giving me explicit internal standards by which I can judge my behavior" and "enables me to function happily as a social being without an unacceptable probability of unpleasant blowback."
I think the important thing is to have standards of behavior for yourself that are fundamentally objective (ish) and totally under your control. I don’t necessarily know what that looks like in your case, though.
Yeah. Rejection sucks, social humiliation sucks.
I do agree with you about exposure therapy; I think it's important in the sense that it gets you reps on this stuff. I just don't think it necessarily works on its own, without a conscious reordering of your goals away from "control others' internal state."
I saw several people at LessOnline explicitly worried, out of social phobia, about ensuring they were never boring anyone, and I don't think that is solvable without explicitly abandoning "never bore anyone" as a goal. (An alternative goal structure might be "ensure I am always giving adequate space in the conversation," which is fine because that is a goal you can easily implement and verify.)
IMPORTANT CHANGE: I'm moving this to May 4th because April 13th falls during Passover, and Jewish people can't eat pizza on Passover.
Oh wait, you mean the email invite. Yeah, that's a great point; I'll kick that off again.
Sorry, didn't see this previously! No, we actually still have those roughly weekly. We post them on the Discord at https://discord.gg/m2xJcuC937
Primarily people come to this via the Discord, so I just post it on LW for visibility.
Hey people! Sorry, due to Uber-related issues I'm going to be a few minutes late. Shouldn't be more than 10, though.
So this all makes sense and I appreciate you all writing it! Just a couple notes:
(1) I think it makes sense to put a sum of money into hedging against disaster, e.g. with short-term Treasuries, commodities, or gold. Futures in which AGI is delayed by a big war or similar disaster are futures where your tech investments will perform poorly (and depending on your p(doom) + views on anthropics, they are disproportionately futures you can expect to experience as a living human).
(2) I would caution against either shorting or investing in cryptocurrency as a long-term AI play. As patio11 has discussed in his Bits About Money newsletter (most recently in "A review of Number Go Up, on crypto shenanigans," bitsaboutmoney.com), cryptocurrency is absolutely rife with market manipulation and other skullduggery; shorting it can therefore easily result in losing your shirt even in a situation where cryptocurrencies otherwise ought to be cratering.
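To make the hedging logic in point (1) concrete, here's a toy scenario-weighted calculation. All of the probabilities and returns below are invented for illustration; the point is only the shape of the tradeoff, where a hedge allocation costs a little expected growth but dramatically improves the disaster-case outcome.

```python
# Toy sketch (all numbers hypothetical): scenario-weighted outcomes for a
# pure-tech portfolio vs. one with a disaster hedge in Treasuries/gold.

scenarios = {
    # name: (probability, tech return, hedge return)
    "agi_boom":   (0.6,  3.00, 0.02),  # tech triples; hedge roughly flat
    "status_quo": (0.3,  0.07, 0.03),
    "disaster":   (0.1, -0.60, 0.10),  # war/shock: tech craters, hedge holds up
}

def expected_growth(tech_weight: float) -> float:
    """Probability-weighted growth factor for a given tech/hedge split."""
    hedge_weight = 1.0 - tech_weight
    return sum(
        p * (tech_weight * (1 + tech_r) + hedge_weight * (1 + hedge_r))
        for p, tech_r, hedge_r in scenarios.values()
    )

print(f"100% tech:    {expected_growth(1.0):.3f}x expected")
print(f"80/20 hedged: {expected_growth(0.8):.3f}x expected")

# The disaster-case outcome, which matters more if you weight survivable
# futures heavily:
_, tech_d, hedge_d = scenarios["disaster"]
for w in (1.0, 0.8):
    worst = w * (1 + tech_d) + (1 - w) * (1 + hedge_d)
    print(f"tech weight {w:.0%}: disaster-case {worst:.2f}x")
```

With these made-up numbers the hedged portfolio gives up some expected growth but loses far less in the disaster scenario; how much hedge is worth holding depends entirely on your own scenario probabilities.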
Worth considering that humans are basically just fleshy robots, and we do our own basic maintenance and reproduction tasks just fine. If you had a sufficiently intelligent AI, it would be able to:
(1) persuade humans to build it a general-purpose robot chassis capable of complex manipulation tasks (along the lines of Google's SayCan experiments)
(2) use instances of itself that control that chassis to perform its own maintenance and power generation functions
(2.1) use instances of itself to build a factory, also controlled by itself, to build further instances of the robot as necessary.
(3) kill all humans once it can do without them.
I will also point out that humans’ dependence on plants and animals has resulted in the vast majority of animals on earth being livestock, which isn’t exactly “good end”.
This seems doubtful to me; if Yann truly believed that AI was an imminent extinction risk, or even thought it was credible, what would Yann be hoping to do or gain by ridiculing people who are similarly worried?
Hey, I really appreciated this series, particularly in that it introduced me to the fact that leveraged ETFs (1) exist and (2) can function well as a fixed proportion of overall holdings over long periods.
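The "fixed proportion" point can be illustrated with a toy simulation. The index path below (alternating +1%/-1% days) and the 50% target weight are made up purely to isolate volatility drag; fees, borrowing costs, and real return distributions are ignored.

```python
# Toy sketch: volatility drag on a daily-reset leveraged ETF, and why holding
# it as a fixed, rebalanced proportion of a portfolio behaves differently.
# The index alternates +1% / -1% daily, so it ends roughly flat.

days = 2000
index = 1.0
etf_3x = 1.0     # 3x daily-reset ETF held outright
portfolio = 1.0  # 50% in the 3x ETF, 50% cash, rebalanced back to 50% daily

for day in range(days):
    r = 0.01 if day % 2 == 0 else -0.01
    index *= 1 + r
    etf_3x *= 1 + 3 * r
    # Daily rebalancing to a 50% weight gives effective 1.5x daily exposure.
    portfolio *= 1 + 0.5 * 3 * r

print(f"index:               {index:.3f}x")      # near-flat, tiny drag
print(f"3x ETF outright:     {etf_3x:.3f}x")     # heavy volatility drag
print(f"50% 3x, rebalanced:  {portfolio:.3f}x")  # drag between the two
```

The outright 3x position decays badly in a sideways market, while the fixed-proportion position behaves like a milder (1.5x) leverage with correspondingly less variance drag, which is one mechanism behind the series' point about holding leveraged ETFs as a fraction of the portfolio.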
Is the lesswrong investing seminar still around/open to new participants, by any chance? I’ve been doing lots of research on this topic (though more for long-term than short-term strategies) and am curious about how deep the unconventional investing rabbit hole goes.
It’s a beautiful dream, but I dunno, man. Have you ever seen Timnit engage charitably and in-good-faith with anyone she’s ever disagreed publicly with?
And absent such charity and good faith, what good could come of any interaction whatsoever?
This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc and will not engage with them in good faith. They are also probably a single-digit number of people, albeit a group really good at getting under techies’ skin.
Re: blameless postmortems, I think the primary reason for blamelessness is that blameful postmortems rapidly transform (at least in perception) into punishments, and consequently stop happening except when management is really cheesed off at someone. That was how the postmortem system ended up at Amazon while I was there.
Blameful postmortems also result in workers who are very motivated to hide issues they have caused, which is obviously unproductive.
Oh, it's totally vibes-based. Also, if I were writing it today I'd say I no longer experience social anxiety as a meaningful part of my social life (since, at its core, I've learned to simply associate the feeling of "anxiety" with the knowledge that I am now attempting the doomed mind-control task, which has acceptable replacement behaviors I can use instead).
Perhaps more usefully, I think it's reasonable to just track how often you feel the need to take micro-breaks in social interaction, where the breaks involve going to places where other people aren't, like the restroom or your room. These are extremely clear tells, because if you don't feel like talking you could just sit down in a common area, but socially anxious people frequently don't do that because they're afraid of being judged for not socializing.
Probably the most useful thing I track nowadays is how often I am successful in (1) detecting the presence of anxiety-as-mind-control and (2) redirecting the objective to either “say true thing” or “do the thing I would want to do if I could not be judged for it.”