Thank you, Szeth.
Thank you for the response. My compulsion is more like constantly rehashing the low p(doom) arguments in my head, or even reopening the same LessWrong posts; e.g., I could probably recite Garriga-Alonso’s “Alignment Will Be Easy By Default” post word for word. Short term it offers some reprieve, but long term the main effect is just keeping me thinking about x-risk for longer.
This does differ from traditional OCD compulsions in a way I think works in my favor. When I leave my house I quadruple-check the lock, convinced in some strange way that if I don’t check again the door will be unlocked and someone will rob me. The compulsion “works” insofar as checking relieves the anxiety, because I believe it causally matters. Rehashing old arguments doesn’t have that property. It gives me temporary relief in a way that’s characteristic of a compulsion, but I don’t believe on any level that my thoughts causally affect our chances of making it out of this alive.
Because of that, I’ve started trying to just let the “we are all going to die” thought sit in my head without fighting it. Not repeating to myself that empirically alignment seems easier than anticipated, not rehearsing arguments about takeoff speeds. Just having the thought and not engaging. It’s been quite successful so far. The lock checking is a different story; I kinda can’t fight that impulse yet since checking the lock, at least in my mind, does have a causal effect on whether my house gets robbed. But I’m optimistic therapy can help there.
By “for worse” I mean that it is probably taken more seriously because AI capabilities have advanced so much, which is not a good thing imo.
Thanks for the comment, though my name is Szeth because Seth was taken. I didn’t even enjoy The Way of Kings that much.
AI x-risk is almost perfectly designed to one-shot people with OCD. The combination of absurdly high stakes, uncertainty, and a community dedicated to discussing it 24/7, with inquiry considered a virtue, is tailor-made to trigger one’s OCD. Further, it’s not exactly a subject therapists would take seriously, though, for better or for worse (and definitely for worse), I imagine that’s starting to change.
I had wondered for a while why I responded so much worse on a mental level than basically anyone else and, upon being diagnosed with Obsessive Compulsive Disorder, finally have an explanation. The typical treatment for OCD beyond medication, Exposure and Response Prevention therapy, is quite effective. With it, you practice not partaking in your compulsions and noticing that nothing bad happened. This sort of thing would work for someone obsessed with cleanliness, but not for someone who thinks there’s a 30% chance that we will all be dead in 10 years. That being said, my dealing with x-risk is very characteristic of OCD in a way that makes me think I could respond well to therapy; however, the treatment would likely, though not necessarily, be more bespoke than is typical.
In terms of how I was specifically affected: I would run and rerun the same anti-doom arguments in my head over and over, and was literally unable to stop thinking about it to the point of being dysfunctional. I’m better now, though not where I’d like to be.
I was wondering if there is anyone here with OCD who would be willing to talk with me about their experience and what sorts of treatments they did to get better, or who could put me in contact with someone who could. I would be willing to pay up to $1,000 for this, though given the unverifiable nature I’m not sure how I would go about it.
I just looked at Google Trends, and it appears the term “existential threat” was very rare up until about 2009 and then steadily increases, which does track well with the OP’s theory.
Interesting. “Existential threat” could make sense because AI is clearly a threat to the existence of SaaS companies and whatnot. “Alignment” is trickier to square, and LW influence could definitely be the best explanation.
“Existential risk” here doesn’t necessarily come from LessWrong. Using the phrase “existential risk” to refer to your company going out of business makes perfect sense, as it’s literally a risk to your company’s existence. “Alignment” is a trickier one, but even there the phrasing makes enough sense that it could plausibly not be LessWrong-inspired.
I would put “Doesn’t know Zvi, played Magic” at number two. I think you might be overestimating either how well known Zvi is or how likely one would be to know Zvi conditional on playing Magic in the late nineties/early 2000s.
This doesn’t make a ton of sense to me. You think it’s more likely that JD knows Zvi but didn’t play Magic than that he played Magic but doesn’t know Zvi?
JD Vance very possibly knows who Zvi Mowshowitz is. Vance used to play the Magic: The Gathering deck “Yawgmoth’s Bargain,” which was largely designed by Zvi. Not sure what, if any, implications this has, but it’s certainly an interesting piece of trivia.
going deep into gym culture and ‘looking jacked’ has actively negative marginal returns, including in terms of attractiveness and also the injury risk rises a lot.
The attractiveness claim here is overstated. For almost everyone, more muscle improves attractiveness. And for the small minority who might exceed mainstream preferences (e.g., amateur bodybuilders), their sexual prospects likely still improve because their social circles shift accordingly. Gym rat communities genuinely find more jacked more attractive, so someone who “overdoes it” by normie standards is typically selecting into contexts where that’s a plus, not a minus.
The one possible exception is short men, who can start looking disproportionate past a certain point. But even as someone who’s 5′5″ and fairly muscular, I’d say I’m probably still seeing positive marginal returns, though I’m not 100% sure. Obviously women are also an exception to this, but that’s a more complicated topic.
Szeth’s Shortform
Does anyone else feel like export control discussion at the moment is a bit hyperbolic? I don’t expect human-level AGI to be achieved for about 5 years, and by then the current iterations of GPUs will be obsolete. It certainly doesn’t give me confidence in this administration’s ability to do things, but selling China advanced chips now is probably fine if we stop in like 3 years. Which hopefully we will.
Great work as always. I’m not sure I agree that we should be focusing on flourishing, conditional on survival. I think a bigger concern would be risks of astronomical suffering, which seem like almost the default outcome, e.g., digital minds, wild animals in space colonization, and unknown-unknowns. It’s possible that the interventions would be overlapping, but I am skeptical.
I also don’t love the citations for a low p(doom). Toby Ord’s guess was from 2020, the superforecaster survey from 2022, and prediction markets aren’t really optimized for this sort of question. Something like Eli Lifland’s guess or the AI Impacts surveys would be a better jumping-off point.