LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
I don’t know that it’s actually targeting the stuff you specifically say here (because I think a lot of this isn’t actually the most useful version of rationality), but @Screwtape and I are working on a rationality training site. I would compare it more to the older version of brilliant.org or codewars than to duolingo.
Can you say a bit more about what you’d have wanted out of such an app?
I have plenty of complaints about this piece and wish Dario’s worldview/his-publicly-presented-stances were different.
But, holding those constant, overall I’m glad he wrote this. I’m glad autonomy risks are listed early on. One of my main hopes this year was for Dario and Demis to do more public advocacy in the sort of directions this points.
I also just… find myself liking some of the poetry of the section names. (I found the “Black Seas of Infinity” reference particularly satisfying)
Vaguely attempting to separate out “reasonable differences in worldview” from “feels kinda skeezy”:
Skeezy
The way this conflates religious/totalizing-orientation-to-AI pessimism with “it just seems pretty likely for AI to be massively harmful.” (I do think it’s fair to critique a kind of apocalyptic vibe that some folks have, although there are also similarly badly totalizing views of “AI will be our salvation/next phase of evolution,” and if you’re going to bother critiquing one you should address both.)
That feels pretty obviously like a political move to try to position Anthropic as “a reasonable middle ground.” (I don’t strongly object to them pulling that move. But, I think there are better ways to pull it.)
Disagreement
Misuse/Bad-Actors. I have some genuine uncertainty whether it makes sense to be as worried about misuse as Dario is. Most of my beliefs are of the form “misalignment is real bad and real difficult” so I’m not too worried about bad actors getting AI, but, it’s plausible that if we solved misalignment, bad actors would immediately become a problem and it’s right to be concerned about it.
Unclear about skeezy vs just disagreeing
His frame around regulation, and it not-being-possible-to-slow-down, feels pretty self-serving and/or confusing.
I agree with his caution about regulating things we don’t understand yet. I might agree with the sentence “regulations should be as surgical as possible” (mostly because I think that’s usually true of regulations). But I don’t really see a workable regime where the regulations aren’t relatively extreme in some ways, and “surgical” implies something like “precise” and “minimal.”
I find it quite weird that he doesn’t explore the options for controlled takeoff at all. It sounds like he thinks export controls and a few simple trade-embargo moves are the only way to slow down autocracies, and that it’s important to beat autocracies, and therefore we can only potentially slow down a teeny amount.
The options for slowing down are all potentially somewhat crazy or intense (not “Dyson Spheres” crazy, but, like, “go to war” level crazy), and I dunno if he’s not saying them because he doesn’t want to sound too intense, or because he honestly doesn’t think they’ll work.
He reads as something like “negative-utilitarian about accidentally doing costly regulations.”
...
This document is clearly, overall, a kind of political document (trying to shape the zeitgeist), and I don’t have that strong a take about what sort of political documents are good to write. But, in a world where political discourse was overall better, I’d have liked it if he’d included some notion of what would change his mind about the general vibe of “the way out of this situation is through it, rather than via slowdown/stopping.” If you’re going to be one of the billion-dollar companies hurtling us towards unprecedented challenges, with some reasons for thinking that’s correct, I think you should at least spell out the circumstances under which you’d change your mind, or stop, or naturally pivot your strategy.
There’s some related harder-to-track metric of “% code written by non-humans, which was a mistake.” (i.e. the code is actually kinda bad and the human would have done better to write it themselves).
I don’t feel very confident about any of this, but, I think it’s just sort of fine if not all posts are for all people.
On any topic other than politics, I think it’d be fine to have a lower-effort meta post trying to get traction on how to think about the problem, with the people who are already following the topic, before writing higher-effort posts that do a better job of being a good canonical reference. It’s totally fine for someone to write an agent foundations post that just assumes a lot of background while some people hash out their latest ideas; people who aren’t steeped in agent foundations just aren’t the target audience.
It’s possible politics should have different standards, such that basically every post should be accessible, but that’s a fairly specific argument I’d need to hear.
I agree it’d be bad if there were only ever political posts like this. I don’t know whether it’d be bad if 10% or 20% or 50% of posts were like this; I’d need to think about it more.
Thinking out loud about next steps.
So, I agree with all the commenters saying “the listed questions feel like an oddly specific, cherrypicked set.” It’s not obvious what to actually do instead.
One angle is to try for more of a “world map” rather than a “US map”: asking general questions across history that (a) make it easier to compare the US to other countries (which seems relevant) and (b) force the mindset of “see what’s interesting to notice across history” rather than “try to answer specific questions.”
Which, like, I still have no idea how to do.
But, it occurs to me OurWorldInData is already kinda trying to be this thing. Taking a quick look there, it seems like often there’s only relatively recent data (makes sense).
Their page on corruption does a decent job of laying out why the question “how corrupt are countries?” is hard to answer, but then answers it a few different ways.
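(If I do end up poking at this, step one is probably just pulling their data. A minimal sketch in Python: the append-.csv-to-the-grapher-URL convention is how I understand OWID downloads to work, but the specific chart slug here is an assumption to verify on their site.)

```python
# Hypothetical sketch: pull a corruption-related series from Our World in Data.
# OWID grapher charts can generally be downloaded as CSV by appending .csv to
# the chart URL; the slug below is an assumption -- verify on ourworldindata.org.
import pandas as pd

CHART_SLUG = "ti-corruption-perception-index"  # assumed slug

url = f"https://ourworldindata.org/grapher/{CHART_SLUG}.csv"
df = pd.read_csv(url)

# Typical OWID layout: Entity, Code, Year, plus one column per indicator.
print(df.columns.tolist())
print(df[df["Entity"] == "United States"].tail())
```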
Nod. Agree with your object-level take in the 3rd paragraph.
I think it’d have been dramatically more effort, and mostly a different post, to write the opening paragraphs to your satisfaction, and kinda the whole point of this post is to be able to write a second post that is more the type you want. (I also suspect you’re an outlier in how much you’re not already following Trump discourse; none of the opening paragraphs are supposed to be new information for the reader.)
Yeah, I went to try to write some stuff and felt bottlenecked on figuring out how to generate a character I connect with. I used to write fiction, but that was like 20 years ago and I’m out of touch.
I think a good approach here would be to start with some serial webfiction since that’s just easier to iterate on.
What is your concrete preference for what I had done with this post?
(this feels like a fairly generic response that’s not particularly engaging with the situation or post, which is specifically asking “how to get grounded”, with a description of my current ideas for doing so)
I think it’s a bad framing to treat “unprecedented moves to expand executive power” and “natural extension of existing trends” as the same mental bucket. The two are not the same. A key problem in the US is that the existing trends over the last two decades have been bad when it comes to expanding executive power.
I’m confused about what you mean here; the specific existing trend I was imagining was “unprecedented moves to expand executive power,” which looks different if it’s a steady trend vs. one guy doing radically worse than trend.
Having sat on this for a night, I think basically yeah, this post’s framing doesn’t make sense as a way to engage with active Trump supporters.
Right now my main question is “should I spend more time thinking about this, or go back to ignoring it and hope it isn’t too bad?” I think if I did decide to spend more time on it, I’d probably expect “solve political polarization” to be a major piece of it, and yeah, I’d want to talk to a wider variety of people qualitatively.
I agree that baking in the framing into the initial question is bad, but, like, the framing is the reason why I’m even considering thinking more about this in the first place and I’m not sure how to sidestep that.
The point about “online arguments” vs “chatting with individual people” is well taken though.
A few people have noted “I don’t like that this post (and other recent ones) are blatantly talking about politics on LessWrong.”
It is pretty plausible to me that it’s not possible to get into mainstream politics without some cascading effects that draw in the sort of person who wants to talk about politics on LW and is net negative.
(The mods do apply stronger standards to approving users who show up to talk about politics. So, this is not as immediately-failure-prone as you might expect. But, the risk from the longterm trends is pretty real.)
I do think it’s just false that “LW doesn’t talk about politics.” The original Politics is the Mind-Killer post doesn’t say “don’t talk about politics”; it says “don’t use unnecessary political examples.” We have occasionally talked about politics since the LW revival: about Covid, “does the Ukraine situation have implications for whether East European LWers should evacuate,” “are we at risk of sudden nuclear war,” and the first Trump administration. This post isn’t really unprecedented. You can argue it’s still bad. But, the status quo is empirically “political discussions happen when it seems important.”
...
But, I want to highlight a different issue. You might or might not think it’s bad, but I think you should be tracking it:
...
Some people have asked “Where is all this sudden partisan framing coming from? It feels like the last couple posts are taking as a given that one should be worried about Trump destroying America without doing anything to justify that and assuming we’re all bought in.”
The answer is “well, because we mostly don’t talk about politics on LW, all that conversation happened in person, in private Slacks, etc.” One could put in a lot of work to lay out all the background assumptions when writing things up on LW, but that’s an extra tax on getting to write up the important new bit one just thought of.
This is perhaps analogous to how, for a few years, CFAR did most of their ideating in person, and then suddenly a significant chunk of LW authors were talking about Focusing or Doublecrux and taking them as obvious background concepts and everyone was like “wtf, what are these words, why are you so confident they matter?” and people were like “idk we’ve been talking about this for years, just not on LW because LW kinda sucks atm.”
Perhaps also: a lot of discussion of the AI safety space (around the same time, during the LW decline period) happened in person/private, and it involved a lot of “what’s the realpolitik about what OpenPhil will fund?” or whatever, and this resulted in some confusing conversations on LW.
The alternative to “stuff gets discussed on LW” is “it gets discussed elsewhere,” and then the forefront of the rationalist conversation about what is important gets disconnected from the public, until, suddenly, it becomes important enough that it has to show up somehow, or leak through.
This is coming up now (for me) because all that background thought has led to me and some colleagues thinking “the Trump situation seems bad enough to be a contender for, like, the top-5 things to actually work on, alongside various flavors of ‘deal with AI’,” and I’m trying to think through what that means. It also means I’m still trying to think about this cheaply/efficiently, because I’m trying to decide whether to spend more time on it. Which puts me in a weird place of “well, I’m just not gonna do the pretty exhaustive, methodical, ideal way of engaging with this question, at least at first, because the whole point is to reduce uncertainty about whether to invest in that sort of thing.”
I do think the various private circles I’ve seen this discussed in have had more extreme filter bubbles than LW, and it’s actively useful to discuss it in public, in places where it’s easier to get pushback from different corners of the ideologysphere.
These were in my model; it’s plausible I shouldn’t have posted this without putting more work into laying out the full model and trying to be fair / clear / ITT-passing.
I edited the post to address a bit of this. In particular including:
[ETA] Of course, I know for many Trump supporters, the whole point is that he’s destroying a bunch of institutions that need destroying. I am actually pretty sympathetic to the idea that if you want a better government, you need to tear down the old one quickly. There might be enough differences of values here that there’s not much common ground to be had, but for me, the crux is that he seems to:
– Not merely be tearing down various bureaucracies, but eroding norms like “there is supposed to be rule of law, generally.”
– It does not look like this is paving the way for anything good to follow; it looks like it’s just kinda making a more corrupt world...
> That seemed … like it was approaching a methodology that might actually be cruxy for some Trump supporters or Trump-neutral-
> No? The pretense that media coverage is “neutral” rather than being the propaganda arm of the permanent education-media-administrative state is exactly what’s at issue.

I agree the examples I listed there weren’t currently a methodology Trump supporters would agree with; the point was just that it felt pointed in a direction where I was like “oh, as long as I’m doing something comprehensive in this way, it’s probably worth putting in the extra work to find something that’d be cruxy for others.”
I do disagree that “searching for instances of conflict between the executive branch and courts” is particularly prone to media bias. I think most sides would agree there was conflict, just disagree on who was right, and media would report on it regardless, just with different framing. (But I agree “seems like executive overreach” would definitely have that problem.)
(I switched “non-sentient LLMs” to “ambiguously sentient” in response to Gears’ react)
What’s a good methodology for “is Trump unusual about executive overreach / institution erosion / corruption?”
No, it’s me expressing disagreement with your reasoning for “A few of these are, if somewhat unprecedented, not really institutional erosion, because they have a legitimate constitutional basis.”
because a constitutional basis is necessary but not sufficient (soft cultural norms are also important)
(But, this is an area I have not looked into enough to have a strong belief about the object level claims, just objecting to your reasoning as sufficient to prove the point you wanted to make)
Inspired by a recent comment: a potential AI movie or TV show that might introduce good ideas to society is one where there are already uploads, LLM-agents, and biohumans who are beginning to get intelligence-enhanced, but there is a global moratorium on making any individual much smarter.
There’s an explicit plan for gradually ramping up intelligence, running on tech that doesn’t require ASI (datacenters are centralized, monitored, and controlled via international agreement; studying bioenhancement or AI development requires approval from your country’s FDA equivalent). There is some illegal research, but it’s much less common. I.e., the Controlled Takeoff is working a’ight.
If it were a TV show, the first season would mostly be exploring how uploads, ambiguously-sentient-LLMs, enhanced humans and regular humans coexist.
Main character is an enhanced human, worried about uploads gaining more political power because there are starting to be more of them, and research to speed them up or improve them is easier.
Main character has parents and a sibling or friend who are choosing to remain unenhanced, and there is some conflict about it.
By the end of season 1, there’s a subplot about illegal research into rapid superintelligence.
I think this sort of world could actually just support a pretty reasonable set of stories that mainstream people would be interested in, and I think would be great to get the meme of “rapidly increasing intelligence is dangerous (but, increasing intelligence can be good)” into the water.
I think I’m imagining “Game of Thrones” vibes but it could support other vibes.
I have not looked into these details enough to have an opinion, but I think a lot of US institutions work via a mix of legal rules and implicit norms, and my sense is Trump did a lot of violating the norms that made the legal rules workable.
I’d be interested in something like “Your review of Serious Flaws in CAST.”
The worlds I was referring to here were worlds that are a lot more multipolar for longer (i.e. tons of AIs interacting in a mostly-controlled fashion, with good defensive tech to prevent rogue FOOMs). I’d describe that world as “it was very briefly multipolar and then it wasn’t” (which is the sort of solution that’d solve the issues in “Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades”).
TODO: Write a post called “Fluent Cruxfinding”.
In Fluent, Cruxy Predictions I’m arguing that it’s valuable to be not merely “capable” but “fluent” in:
figuring out what would actually change your decisions
operationalizing that as an observable bet
making a Fatebook prediction about it, so that you can become more calibrated about your decisionmaking over time
The third step is not that hard and there are nice tools to streamline it. But the first two steps are each pretty difficult.
But most of the nearterm value comes from #1, and vague hints of #2. The extra effort to turn #2 into something you can grade on Fatebook only pays off longer-term. So, I think you should probably focus on the first two steps before worrying too much about integrating Fatebook into your life.
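That said, the streamlined version of step 3 is pretty cheap. A minimal sketch, assuming Fatebook’s HTTP API works roughly the way I remember: a createQuestion endpoint taking an API key, a title, a resolve-by date, and an initial forecast. The endpoint and parameter names here are assumptions; verify them at fatebook.io/api-setup.

```python
# Hypothetical sketch of logging a cruxy prediction to Fatebook.
# The endpoint and parameter names are assumptions -- verify against
# fatebook.io/api-setup before relying on this.
import requests

FATEBOOK_API_KEY = "your-api-key-here"  # from your Fatebook account settings

def log_cruxy_prediction(title: str, resolve_by: str, forecast: float):
    """Create a Fatebook question for a decision-relevant prediction.

    title:      the operationalized, observable version of your crux
    resolve_by: date you'll be able to grade it, e.g. "2026-06-01"
    forecast:   your current probability, between 0 and 1
    """
    resp = requests.get(
        "https://fatebook.io/api/v0/createQuestion",  # assumed endpoint
        params={
            "apiKey": FATEBOOK_API_KEY,
            "title": title,
            "resolveBy": resolve_by,
            "forecast": forecast,  # assumed parameter name
        },
    )
    resp.raise_for_status()
    return resp

# Example: a question that only matters if it would change a decision.
log_cruxy_prediction(
    "If I spend next week on project X, will I endorse that choice in 3 months?",
    "2026-06-01",
    0.6,
)
```

The interesting work is all upstream of this call, though: a prediction only counts for this exercise if resolving it would actually have changed a decision.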