Not as easily, if you’re trying to train them on tasks that look like the deployment environment, and they recognize the difference; they will learn how to behave within a narrow context that isn’t the one you need them to generalize to.
Davidmanheim
Your profile says “My writing is likely provocative because I want my ideas to be challenged.”
I’m sure there are places that would work for you, and you should probably go to those places, instead of here.
Do you not think this sort of social power is the most important?
I kind of think the power to actually kill people (legally via state imposition, or illegally, both made far easier with money) matters far more. We’re just not used to worrying about being subject to that power. Correctly, for most of us.
But the implication of “we don’t need to worry about a type of power others won’t / can’t apply” is that excess use of social power is also largely irrelevant for those who don’t fear it. And it’s largely relevant to more socially powerful people—most humans don’t ever need to worry about the NY Times “exposing” them.
As noted in a different comment, Donald Campbell proposed that this was what humans often do in 1960; https://doi.org/10.1037/h0040373.
“1. A blind-variation-and-selective-retention process is fundamental to all inductive achievements, to all genuine increases in knowledge, to all increases in fit of system to environment.
2. The many processes which shortcut a more full blind-variation-and-selective-retention process are in themselves inductive achievements, containing wisdom about the environment achieved originally by blind variation and selective retention.
3. In addition, such shortcut processes contain in their own operation a blind variation-and-selective-retention process at some level, substituting for overt locomotor exploration or the life-and-death winnowing of organic evolution”
It seems useful to flag when and where Lesswrong work has academic parallels or antecedents, and while doing a peer review, I realized there is a connection to Campbell’s 1960 paper, “Blind variation and selective retentions in creative thought as in other knowledge processes,” especially as extended by Simonton. Obviously, this isn’t a direct link or a source, but it seems to be a useful parallel construction of a related idea.
See: Simonton, D. K. (2011). “Creativity and discovery as blind variation: Campbell’s (1960) BVSR model after the half-century mark.” Review of General Psychology, 15(2), 158–174. doi:10.1037/a0022912
Could you hide by default for the first hour, and make it a user interface option that defaults to off?
...but then you don’t extrapolate that trend going even further with more wealth?
Yes, AGI will by default be a disaster, but in the cases where it is not, it is hard for me to understand the argument here that everyone would be better off by default, that we couldn’t end up benefitting greatly. I don’t disagree with most of the piece, but this part seemed hard to believe.
Thanks for the feedback, good to know.
The problem with Federalism isn’t that it doesn’t capture the idea, it’s that it also requires stable delegation and authority, which this post really isn’t about. It’s partly subsidiarity, or autonomy, but also state capacity mismatch, Fukuyama’s discussions of decay and institutional drift, or maybe closer to the fragility of complex systems and the way they fail (Charles Perrow’s work, specifically).
But I think you’re talking about it differently than any of those alone, and none have a simple term for this.
That’s a really good point.
I think it’s a combination of selection bias, since the conspiracies you find out about are the most plausible or most successful ones, and accurate perception of how poorly the examples you’ve seen personally work out.
I don’t think the terminology was clear. (I finished 100% of the essay and got to this comment before I understood why you picked the word.)
In different words, if the garden is too large to defend...
Defense of territory scales linearly with the length of the border, defense against internal threats scales linearly with area, and defense against org scaling failures seems at least linear in the (already exponentially growing) size. So yes, EA has a huge problem with scaling more, and more, and more. This is mostly because scaling is very, very hard.
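A toy sketch of that scaling point, assuming a square “garden” of side s just for illustration: the border grows linearly in s while the area grows quadratically, so internal-defense costs outpace border-defense costs as the garden grows.

```python
# Toy model: square garden of side s.
# Border defense cost ~ perimeter (4*s), grows linearly in s.
# Internal defense cost ~ area (s**2), grows quadratically in s.
for s in [10, 100, 1000]:
    perimeter = 4 * s
    area = s * s
    print(f"side={s}: border cost ~{perimeter}, internal cost ~{area}")
```

The exact shape doesn’t matter; any compact region shows the same gap between perimeter growth and area growth.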
I think I lost your point now, this got very meta.
The thing that I find confusing is I feel like I get the vibe that is meant, but I don’t understand what the brightline or criteria are for when something is acceptable vs violates some norm.
I don’t think there is a bright line, there’s just a point being made about a gradient where discourse chess is on one side, and talking about object level facts is on the other. And I pointed out that on the chess side, people suck at getting what they want.
Similarly for the other points, I don’t really care about exact lines, I’m not being prescriptive.
I’m not really sure how to take your statement as not being an accusation of EAs doing the thing you are criticizing.
You should read the last paragraph of my response. I don’t need to ask what prominent EAs would think, since I know what they did, in fact, say. And you may not have been around, but if you look at older posts on the EA forum, this wasn’t exactly a secret, it was a declared intention. And looking back, Rob was complaining about this 5 years ago, and he was complaining about related issues 7 years ago. So I agree that imputing this type of behavior, as an accusation, is worrying, but that’s different than pointing out when the behavior was in fact intentional.
I am stating, not implying, that the galaxy brain plan for stretching the Overton window backfired on them. But I’m not criticizing the position, I’m criticizing the efficacy of the strategy. Whether it was a luxury belief seems like a mostly separate point, though correlated in that such clever strategies do seem to be more likely to be pursued by the educated and wealthy.
I’m somewhat confused by what people mean by “strategic” in these discussions.
Apologies, I tried to make this clear: I am referring to “high-dimensional discourse chess” that requires asserting or assuming “we can model how public acceptability shifts and cleverly intervene to steer those shifts.” That’s not about communicating an idea, it’s about the goal of convincing people of something in order to have them react in order to change the public acceptability of another thing.
Let’s take the “DEFUND THE POLICE” example from the post.
Of course, not everyone who said “Defund the police” was playing games—some were true extremists, including leftist anarchists. But far more were being duped by those trying to play those games, advocating for what they thought was good based on social proof, almost always without a coherent replacement in their mind. That is, they generally weren’t advocating anarchy, they were advocating replacing police with something else undefined. It seems you are saying something similar; you dislike the current system, and say it should be replaced instead of reformed, but don’t have a clear argument for the details.
I think the best approach is to… leave aside the discussions about whether someone is being strategic, how truth-seeking they are, if they are “gaslighting” etc.
I agree—and if anything, think it’s exactly in line with the post’s conclusion? Specifically, in the post, I don’t argue that readers should dismiss anyone else’s views for playing discourse games, I argue that they should not attempt them.
The only reason I feel comfortable using the Defund the Police example is because the leaders of the movement were explicit in their intent to widen the window. For example, “Progressive Congresswoman Alexandria Ocasio-Cortez called the use of language like “defund” an “excellent choice” for those who have been trying for years to “prompt a national conversation” about police. “‘Refund’ or ‘reallocate’ didn’t do that,” she tweeted. The shocking language of the slogans can help shift the Overton Window, making what might otherwise be politically controversial interventions more palatable.”
And I wasn’t guessing about EA either; I have been in the room, repeatedly, when senior people in EA talked about shifting the Overton window on AI risk. So yes, don’t accuse others of doing this, but that doesn’t mean you can’t call them out when they say they are doing it!
Many of those groups would have been left to starve to death a century ago. The idea that a rising tide doesn’t lift all boats is historically illiterate, and so you would need a more specific argument for it to make sense here in the case of (non-omnicidal) ASI.
Regardless of the question of AI impacts, and meaning, it’s simply untrue that greater wealth hasn’t changed the picture tremendously. Life expectancy has shot up. The idea that being richer didn’t matter is bullshit. Yes, it costs $5,000 to save a life today, and we haven’t saturated those opportunities. But in 1970, 55 years ago, we were in the middle of smallpox eradication, which likely had a cost per life saved under $1,000, in current dollars. Oral rehydration therapy was becoming available, and was not yet widespread; it also likely had a cost in a similar range per life saved. And going back further, in the 1920s, Russia was having a massive famine. In 1921–23 there was a relief effort; the American Relief Administration probably saved several million lives at a cost of around $300 per life saved in current dollars.
I don’t think that advice is obviously wrong—the costs of being outside are real, and so there are definitely times when staying inside the Overton window is important. Not alienating policymakers seems like a good place to simply not talk about certain things, at least sometimes. (c.f. Eliezer’s Time article being laughed at during the White House press briefing. That worked out fine, Eliezer wasn’t trying to stay in the good graces of policymakers. Someone from CSET writing the same would have been a mistake.)
So it’s further evidence that EAs are sensitive to the questions, but not an example of what I think Rob was criticizing, and as noted, not something I think was the wrong call.
I’m not talking about a lack of hedging. Being too busy to think through and clearly present your thoughts wastes the time of others. And not following community norms isn’t bravery, nor is your lack of tact.